"This PC" windows 10 shows 1.3TB free but CLI shows 6.7TB free

Hi all, I have read a few threads about available disk space and haven't really come across a clear answer to my question.

I am running a Storj node on Windows 10, the CLI install with Docker. I have been on Storj for around two years or more.

I have an 8 TB external drive. My CLI dashboard shows 6.7 TB free, as does the web dashboard that Docker serves in Chrome, but when I go to 'This PC' on my computer, it shows that I have 1.3 TB free…

Should I be concerned? Also, I want to move this node to a new machine, switch to the Windows GUI, and add another 8 TB external drive I have. Thoughts, please?

Check with TreeSize to see what else is occupying space on that hard disk.

The only thing the drive is used for is my Storj node. It has a single folder on it named storagenode.

Right now TreeSize is seeing what's on the drive, but at the bottom it says 1.20 TB free…

There have been many, many threads about the dashboard and space. I'd just wait a bit.

The number that Storj displays as "Disk Space Remaining" or "Available" is not the amount of free space on your drive. It's just the amount of space it thinks it is still allowed to use, which is based on the storage amount in your configuration setting.
How much space on the drive did you tell Storj it could use? (On my Docker node it's the number in the -e STORAGE= part of the start command.)
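For reference, here is a rough sketch of where that setting lives in the docker run command. The paths and the 7TB value are just examples, and I've left out the other required options (WALLET, EMAIL, ADDRESS, ports) for brevity:

docker run -d --restart unless-stopped --stop-timeout 300 ^
  -e STORAGE="7TB" ^
  --mount type=bind,source="D:\identity\storagenode",destination=/app/identity ^
  --mount type=bind,source="D:\storagenode",destination=/app/config ^
  --name storagenode storjlabs/storagenode:latest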

For example, I temporarily set my storage setting to a ridiculous number (633 petabytes), which caused the dashboard to say I had 633 petabytes free even though the hard drive is only 2 terabytes in real life.

So make sure your storage setting is not too big; you don't want Storj to overflow your drive.

I have a similar problem.
I'm running a Storj node on Windows 10, a CLI installation with Docker.
I have an 8 TB disk mounted. After the 1.1.1 update, a large increase in downloaded data began, but the "Used" value in the CLI did not change, and "Available" still showed 3.3 TB of available space even though disk space was running out.
I had to reduce the allocated space in the configuration file to 3.2 TB because the disk was 99% full.
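For anyone else with this issue, the line I changed in config.yaml is the allocation setting, which looks something like this (the value is just my example):

# total disk space the node is allowed to use
storage.allocated-disk-space: 3.2 TB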

How can this be fixed?

Try restarting the storagenode:

docker restart -t 300 storagenode

Interesting. The dashboard's used space does not match the real-life used space. The difference between the two numbers is almost equal to the amount of data in the pwm folder (the Saltlake satellite).

I went back to the 6.5 TB setting and restarted.
It did not help :frowning:

I'd go back down to 3.2 TB so your node does not overfill the drive any more. How are your audit scores?
You can also run the first part of the graceful exit command to get more info about how much space Storj thinks it's using. You just need to exit out of it before selecting any satellites so you don't accidentally start a graceful exit for real. It would look like this:

The start graceful exit command is as follows (when it asks you to enter a list of satellite domain names, DON'T DO IT; just close the window after you are done looking):
docker exec -it storagenode /app/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity

After restarting Docker, I reduced the allocation to 3.2 TB again.
I ran the start graceful exit command; this is the result.

Something is messed up there. In the graceful exit window Storj claims it only has 539 GB of data for the Saltlake satellite, but on your hard drive the folder for Saltlake has 4.1 TB inside it. Somebody with more knowledge will have to help fix that.

Thank you for your help :slight_smile:
I’ll wait, maybe someone will suggest something.

I can only suggest checking the databases:
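As a rough sketch, assuming sqlite3 is available on the host and the node is stopped (the path below is only an example; point it at your storage location):

sqlite3 D:\storagenode\storage\bandwidth.db "PRAGMA integrity_check;"

Each database should return "ok".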


Do not try to fix them if they all return OK.

This is the result.

Do the same for the other 12 databases.
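For example, something like this one-liner in cmd would run the check against every .db file in the storage folder (the path is an example; adjust it to yours):

for %f in ("D:\storagenode\storage\*.db") do sqlite3 "%f" "PRAGMA integrity_check;"

(In a .bat file you would write %%f instead of %f.)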

All databases are OK

Please start the storagenode and check the logs for "database is locked" (cmd):

docker logs storagenode 2>&1 | findstr "database is locked"

That is the result

Have you repaired your databases at some point in the past ("database disk image is malformed" or problems with the migration)?

Do you have another node?

Please check your disk for errors with the system tool.
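For example, from an elevated command prompt, assuming the external drive is D: (stop the node first):

chkdsk D: /f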