Not sure if my Docker to Windows conversion went correctly

Hi all, I decided to take the leap from Docker to the Windows package tonight. I followed the guide, but I'm unsure if things are correct. Firstly, I pointed the installation at my previous storage device; the data is stored on D:/storjv3 (the original Docker storage data). Now, without sounding stupid, do I point the new installation at that same folder or just at the D drive? Secondly, is it normal for the node dashboard to display different data than the Docker install did? I've tried pointing the install at both the D drive and the folder within it, and both appear like new nodes with hardly any data. Thirdly, my C drive usage has been maxed out since the install and I'm not sure why. Any clues, guys? Just to add, I've got about 1.8 TB of data and it would be a shame to lose it.

This is the tricky part. The Windows GUI stores everything directly inside the folder you point it at, while Docker stores the data in a storage subfolder, i.e. D:/storjv3/storage. You should point the Windows GUI at D:/storjv3/storage. Sadly, your new files are currently stored outside that storage folder. It is safe to copy them into the storage folder, but make sure you stop and remove the Docker container before you start the Windows GUI.
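For anyone comparing the two layouts, here is a rough sketch of the difference (the docker flags and paths are illustrative placeholders, your actual run command may have differed):

# docker: the host folder is mounted at /app/config and the container
# writes its pieces into a "storage" subfolder of it, i.e. D:\storjv3\storage
docker run -d --name storagenode --mount type=bind,source=D:\storjv3,destination=/app/config ... storjlabs/storagenode

# Windows GUI: config.yaml must therefore point at that inner folder
storage.path: D:\storjv3\storage\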

Your dashboard should show the same info as it did when your SN was running in Docker.

You should always leave about 10% of the drive free as overhead to avoid such a situation.
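If it helps, the allocation lives in the same config.yaml; something along these lines (a hedged example, pick your own number) leaves roughly 10% of a 3.6 TB drive free:

# allocate less than the full drive so the node always has headroom
storage.allocated-disk-space: 3.2 TB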

PS: I think the Windows GUI should handle this storage subfolder internally so this mess won't happen. I am speaking from personal experience.

To be fair, I don't mind the dashboard being incorrect as long as I don't lose my stored data.

Right, my D drive seems to be in sync with my network usage. Even though I said I was not bothered about the dashboard data being correct, is it at all possible to check whether I'm successfully adding data to my existing storage?

Yeah, I've got no idea if it was successful. It SEEMS to be working, but the dashboard doesn't display correct information. I'm lost now, to be fair. There is a log file and to me it looks normal, but again I don't know if it is adding to the existing data.

As you can see, the disk is 3.6 TB in total, but I've already received 1.63 TB.

It looks like your node is starting from scratch and not seeing the 1.63 terabytes of files… if that makes any sense. Go into your config.yaml file and make sure you specified the right path.

On windows you can find your config.yaml file here:
C:\Program Files\Storj\Storage Node\config.yaml

In your config.yaml file, YOUR path to data should look like this:

# path to store data in
storage.path: D:\storjv3\

If he migrated from Docker, this config option must be:

# path to store data in
storage.path: D:\storjv3\storage\

@Stubbsey, please give me the output of these commands (Powershell):

sls path "$env:ProgramFiles/Storj/Storage Node/config.yaml"

ls D:\storjv3\
ls D:\storjv3\storage

PS C:\Windows\system32> sls path "$env:ProgramFiles/Storj/Storage Node/config.yaml"

C:\Program Files\Storj\Storage Node\config.yaml:13:# path to static resources
C:\Program Files\Storj\Storage Node\config.yaml:34:# If set, a path to write a process trace SVG to
C:\Program Files\Storj\Storage Node\config.yaml:55:# path to the certificate chain for this identity
C:\Program Files\Storj\Storage Node\config.yaml:56:identity.cert-path: C:\Users\stuart\AppData\Roaming\Storj\Identity\storagenode/identity.cert
C:\Program Files\Storj\Storage Node\config.yaml:58:# path to the private key for this identity
C:\Program Files\Storj\Storage Node\config.yaml:59:identity.key-path: C:\Users\stuart\AppData\Roaming\Storj\Identity\storagenode/identity.key
C:\Program Files\Storj\Storage Node\config.yaml:94:# path to log for oom notices
C:\Program Files\Storj\Storage Node\config.yaml:133:# path to the CA cert whitelist (peer identities must be signed by one these to be verified). this will override the default peer whitelist
C:\Program Files\Storj\Storage Node\config.yaml:134:# server.peer-ca-whitelist-path: ""
C:\Program Files\Storj\Storage Node\config.yaml:157:# path to store data in
C:\Program Files\Storj\Storage Node\config.yaml:158:storage.path: D:\

First of all, please, stop the storagenode service either from the Services applet or from the elevated powershell:

Stop-Service storagenode
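(If you want to confirm it actually stopped before continuing, a plain status check is enough; this is just standard PowerShell, not something specific to Storj:)

Get-Service storagenode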

Please, give me the output of these commands:

ls d:\
ls d:\storage
ls d:\storjv3
ls d:\storjv3\storage

PS C:\Windows\system32> ls d:\

Directory: D:\

Mode LastWriteTime Length Name


d----- 17/11/2019 07:22 blobs
d----- 17/11/2019 15:47 garbage
d----- 17/11/2019 11:03 storjv3
d----- 16/11/2019 21:04 temp
-a---- 17/11/2019 16:07 458752 bandwidth.db
-a---- 16/11/2019 17:40 8192 info.db
-a---- 17/11/2019 16:07 10293248 orders.db
-a---- 16/11/2019 17:40 16384 pieceinfo.db
-a---- 16/11/2019 17:40 20480 piece_expiration.db
-a---- 17/11/2019 16:07 12288 piece_spaced_used.db
-a---- 17/11/2019 16:07 12288 reputation.db
-a---- 16/11/2019 17:46 32768 satellites.db
-a---- 17/11/2019 16:07 28672 storage_usage.db
-a---- 17/11/2019 16:07 4521984 used_serial.db

PS C:\Windows\system32> ls d:\storage
ls : Cannot find path 'D:\storage' because it does not exist.
At line:1 char:1
+ ls d:\storage
    + CategoryInfo          : ObjectNotFound: (D:\storage:String) [Get-ChildItem], ItemNotFoundException
    + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand

PS C:\Windows\system32> ls d:\storjv3

Directory: D:\storjv3

Mode LastWriteTime Length Name


d----- 16/11/2019 17:35 blobs
d----- 16/11/2019 17:35 garbage
d----- 17/11/2019 11:03 storage
d----- 16/11/2019 17:37 temp

PS C:\Windows\system32> ls d:\storjv3\storage

Directory: D:\storjv3\storage

Mode LastWriteTime Length Name


d----- 30/07/2019 15:17 blobs
d----- 16/11/2019 16:03 garbage
d----- 16/11/2019 17:27 temp
-a---- 16/11/2019 17:37 32768 bandwidth.db
-a---- 25/08/2019 14:07 835 config.yaml
-a---- 16/11/2019 17:32 8192 info.db
-a---- 04/10/2019 01:53 131072 kademlia
-a---- 16/11/2019 17:37 65536 orders.db
-a---- 16/11/2019 17:32 16384 pieceinfo.db
-a---- 16/11/2019 17:32 20480 piece_expiration.db
-a---- 16/11/2019 17:32 12288 piece_spaced_used.db
-a---- 16/11/2019 17:37 12288 reputation.db
-a---- 09/11/2019 13:08 32768 revocations.db
-a---- 16/11/2019 17:37 32768 satellites.db
-a---- 16/11/2019 17:37 28672 storage_usage.db
-a---- 16/11/2019 17:37 45056 used_serial.db

I see. You have three storage locations now: the old one in d:\storjv3\storage, the second in d:\storjv3, and the third in d:\.

Please install Notepad++ and edit the storage.path: option in the configuration file "%ProgramFiles%\Storj\Storage Node\config.yaml":

# path to store data in
storage.path: D:\storjv3\storage\

Save the configuration file.
Make sure that the docker container is stopped and removed:

docker stop -t 300 storagenode
docker rm storagenode

Open an elevated powershell and stop the service:

Stop-Service storagenode

Close the elevated powershell.
Then open a regular Powershell or cmd and execute:

robocopy /MOVE /S d:\storjv3\blobs d:\storjv3\storage\blobs
robocopy /MOVE /S d:\blobs d:\storjv3\storage\blobs
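A quick way to confirm the move afterwards (a hedged suggestion, not part of the original steps) is to check that the stray blobs folders are now empty or gone:

ls d:\blobs, d:\storjv3\blobs -ErrorAction SilentlyContinue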

Then start the storagenode service either from the Services applet or from the elevated powershell:

Start-Service storagenode

Your usage stats will probably already be broken, because you may have received some data into the other folders and it is not accounted for in the right place, but at least you should not fail audits after that and will not be disqualified because of data loss.


I'm running the commands now. Is there anything I can do once this is done to check whether the node is corrected?

You can post the resulting stat (the small table at the end) after each command.
Only start the storagenode after the process is completed. Make sure to run only one version at a time.
Do not run the docker container if your storagenode service is running, and vice versa.

To start the storagenode back up, start the storagenode service either from the Services applet or from an elevated Powershell once all corrections are completed:

Start-Service storagenode

And check your dashboard.
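One more hedged sanity check, assuming the log is written to the default location inside the installation folder: after the node has been running for a while, successful audit downloads in the log mean the satellites can still find your old pieces:

sls "GET_AUDIT" "$env:ProgramFiles\Storj\Storage Node\storagenode.log" | select -Last 5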

              Total    Copied   Skipped  Mismatch    FAILED    Extras
   Dirs :      1236         3      1233         0         0      1212
  Files :      7451      7451         0         0         0    563083
  Bytes :  15.367 g  15.367 g         0         0         0   1.148 t
  Times :   0:26:46   0:03:47                       0:00:00   0:22:58

  Speed :            72384561 Bytes/sec.
  Speed :            4141.877 MegaBytes/min.
  Ended : 17 November 2019 18:23:29

I missed the first table, sorry.

The dashboard still shows incorrect data.

Of course. This is expected. But at least now your node shouldn’t be disqualified because of data loss.

So am I running back at 1.6 TB-ish of data, or am I starting fresh???

You are running your previous node again, yes, and you have the new data in place as well, but with a wrong stat.