The space usage shown on the dashboard doesn't match the usage on the disk

Not very pleased, but thanks for the answer. One more question though: the HDD is 1 TB. What if I tell the Storj network that I have 700 GB more (a total of 1.7 TB), even though it is not true?


It doesn’t help; the previous space-usage stats were lost because of a wrong path in the storagenode configuration after the migration from Docker.

On start, the storagenode checks the available space: it takes the allocation, subtracts the usage (according to the stats in the DB) and compares the result with the space available on disk. If you allocated more than is available, it reduces the allocation to the actual value. But since your stats are wrong, it will still think you have used that amount of data even though this is no longer true. This is a consequence of losing the correct stats…
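A rough sketch in Python of the startup check described above (function and variable names are my own illustration, not the actual storagenode code):

```python
def effective_allocation(allocated_gb, used_per_db_gb, free_on_disk_gb):
    """Sketch of the storagenode startup space check described above.

    `used_per_db_gb` is the usage read from the stats DB. If that stat is
    wrong, the node keeps believing it -- the problem in this thread.
    """
    # The node can never offer more than what is physically left on disk
    # plus what it already believes it has stored.
    available = free_on_disk_gb + used_per_db_gb
    if allocated_gb > available:
        # Allocation is reduced to the actual available value.
        allocated_gb = available
    # Free space shown on the dashboard = allocation minus recorded usage.
    dashboard_free = allocated_gb - used_per_db_gb
    return allocated_gb, dashboard_free

# 800 GB allocated, DB wrongly claims 790 GB used, disk has 900 GB free:
print(effective_allocation(800, 790, 900))  # (800, 10) -> only 10 GB shown free
```

This is why a bogus usage stat shows up as near-zero (or even negative) free space on the dashboard even when the disk itself is mostly empty.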


Hello @Alexey and happy new year.

I have not done anything on the node for the time being: it is still the same node.

I have seen that today the amount of available space on the dashboard is about 14 GB, which is somewhat better than the previous value of -18.0 MB. This makes me hope for the best… therefore a few questions for you.

Is that the result of the garbage collector doing its job? Will this help my node reach realistic values over time?

Since I will have some free time tomorrow, I would like your help in deciding what to do, also because I now have 3x 2 TB drives which I could dedicate to the cause (the current drive is only 1 TB). I have at least 3 options in mind:

  1. Kill the current node and install a new node on a 2 TB drive.

  2. Keep the current node on the existing 1 TB drive and install a new node on a 2 TB drive.

  3. Move the current node to a 2 TB drive.

The two extra drives may be used in the future once the node is full, or may not get used at all (given the restrictions we have), which is a pity, as it seems a waste of resources.

As usual thanks a lot for your help.

I would recommend refreshing the dashboard with Ctrl+F5.

More like yes.

If you kill the node, you will lose the entire related held amount, so it doesn’t look like an option.
Our recommendation is still the same: it’s better to have each node on its own HDD, unless you already have RAID1, RAID10 or RAID6.
You can move the data to a bigger HDD if you want, or create a new node on it; it’s up to you.
The constraint is still the same: all nodes behind the same /24 subnet of public IPs are treated as one node for uploads and downloads, and as separate nodes for audits and uptime checks.
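To illustrate the /24 grouping, here is a small Python sketch using the standard library's ipaddress module (the example addresses are made up):

```python
import ipaddress

def subnet_key(ip: str) -> str:
    """Return the /24 network an address belongs to. Nodes sharing this
    key are treated as one node for uploads and downloads."""
    return str(ipaddress.ip_network(f"{ip}/24", strict=False))

# Two hypothetical nodes behind the same public IP range share a /24 ...
print(subnet_key("203.0.113.10"))   # 203.0.113.0/24
print(subnet_key("203.0.113.99"))   # 203.0.113.0/24
# ... while a node elsewhere falls into a different group.
print(subnet_key("198.51.100.7"))   # 198.51.100.0/24
```

So two nodes at the same home connection split the traffic of one node between them, rather than doubling it.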

Thanks for the answer.

Are you kidding me? The main issue is that the dashboard shows either negative space or far less available space than there actually is (ca. 800 GB of dedicated space, less than 100 GB used, so at least 700 GB should be free). But we have already analysed this in the messages above. I do not see how refreshing the page should solve this (I refreshed it anyway; I believe the node has received more data in the meantime and the available space has decreased to a few KB).

Either it is an option or it is not, please decide… :rofl:

I think I will go with option 3, that is, move the current node to the bigger HDD. I have checked the following page, but it does not cover my case, since I am not moving to a new computer; I am just moving the data to another folder.

I believe I will:

  1. Stop the node

  2. Copy/Paste the folder to the new drive

  3. Modify the existing config.yaml configuration file, which in my case is located at the following path: C:\Program Files\Storj\Storage Node, with the following changes:
    New path: change the drive letter from D: to E:
    New size: change the allocated space from 800 GB to 1600 GB

  4. Cross fingers

  5. Restart the node
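For reference, a sketch of what the two changed lines in config.yaml might look like. The key names assume a standard storagenode config; the path and size are just this thread's example values, so check them against your own file:

```yaml
# config.yaml -- only the two lines that change; all other settings stay as-is.

# path to store data in
storage.path: E:\StorjShareV3

# total allocated disk space
storage.allocated-disk-space: 1600 GB
```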



I know; I suggested refreshing the dashboard because it looks weird :wink:
I remember the problem with the lost stats; refreshing the dashboard doesn’t solve that problem, only the weird view.

It is an option. When graceful exit is enabled, you will be able to activate it, get your escrow back, and start a new node without losing the held amount.
If you remove the node without a graceful exit, the escrow will be used to recover the lost pieces to other nodes.

Hi @aseegy.
I did move my node from a 1 TB drive to a 2 TB one once. Copying that much data takes hours, and I wouldn’t recommend doing it while the node is offline, as it’s not supposed to be off the grid: its reputation may suffer badly from it.

On Linux, I used rsync to synchronize everything from the source drive to the target drive, twice (took roughly 5 hours, then 20 minutes).
Then I stopped the node, ran rsync one last time to grab the latest changes (took 2 minutes) and then restarted the node, targeting the new drive.

I’m not sure what could be used on Windows instead of rsync, but I think you should look for a similar approach to cut your node’s downtime down to a minimum.

Hi there and thanks for your message.

In this specific case I only have 100 GB to be copied, so no biggie. Still, you are right that it would be a good decision to use something safer than copy/paste.

On Windows I know “robocopy” is a good option. Still, I have never used it and do not know its syntax. I know there is Microsoft documentation available, but it is one thing to know how to use a command and another to learn how it works.

I might give it a try, maybe with some test folders and then go for it.

At this moment, with DQ disabled, just stop the node, copy the data to the new drive, repoint the paths and restart the node. It should work fine; reputation shouldn’t be impacted that much and should recover quickly.

Please take a look at: Migrate Existing Node with Windows 10 Powershell

@Alexey this is great, thanks!

Just to be on the safe side, let me repeat. With the node running, I will input the following command in cmd:

D:\>robocopy /MIR D:\StorjShareV3 E:\StorjShareV3

This is the origin: [screenshot]

And this is the destination: [screenshot]

I will repeat that a few times, and then once more with the node turned off (once the difference is so small that it should take very little time to sync).

Thanks for your confirmation.


Looks right to me, so go ahead.

Dear @Alexey,

it worked like a charm!

Thanks a lot.

The dashboard still says that I have only 800 GB free while I have almost 1.5 TB, but I hope the issue will solve itself over time. :smiley:

You have 800 GB free out of the allocation. It doesn’t account for your physically available space, unless you allocated more than you physically have free.

Thanks for the information. I copied the data from a 1 TB hard disk to an 8 TB one, and the dashboard still showed that it had 1 TB… Thanks to your information I managed to fix it!

Dear @jau89, thanks a lot for your message: I am really happy that my writing above managed to help you! :slight_smile: It is even more important that you were able to solve your issue, perfect.

@Alexey Sadly I cannot say the same of my node: the dashboard still says that the node is full (1.6 TB), while the disk properties show that 850 GB are occupied and the rest is free. [Screenshot 2020-08-27 at 22.09.23]

If there is nothing to be done I will leave it as it is; it is just a huge waste because, from what I understood from the past messages, the rest of the network sees my node as full while in reality 750 GB would still be available.


Restart the node; it will then recheck the space used, but that takes time.

Hello Vadim,
it has been like this since November 2019. Trust me, I have restarted the computer and the node a few times during the last 10 months. :smiley:

Actually, @Alexey already answered me in the first posts, so it would not be right to bother him again with this topic. :slight_smile:

Thanks anyway for reading me.

Try to do:

  1. Stop the storagenode
  2. Rename the piece_spaced_used.db
  3. Execute with sqlite3:

sqlite3 F:\StorjShareV3\piece_spaced_used.db

When you see a sqlite> prompt, execute this script:

CREATE TABLE versions (version int, commited_at text);
CREATE TABLE piece_space_used (
    total INTEGER NOT NULL DEFAULT 0,
    content_size INTEGER NOT NULL,
    satellite_id BLOB
);
CREATE UNIQUE INDEX idx_piece_space_used_satellite_id ON piece_space_used(satellite_id);

  4. Start the storagenode
  5. Check the logs
  6. Check the dashboard
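If you want to double-check the recreated schema before starting the node, here is a small Python sketch that runs the same statements against a throwaway database file (not your real one) and verifies that sqlite accepts them:

```python
import os
import sqlite3
import tempfile

# Recreate the schema from the steps above in a throwaway file and
# verify that sqlite accepts it -- a quick sanity check before pointing
# the storagenode at the rebuilt database.
path = os.path.join(tempfile.mkdtemp(), "piece_spaced_used.db")
con = sqlite3.connect(path)
con.executescript("""
CREATE TABLE versions (version int, commited_at text);
CREATE TABLE piece_space_used (
    total INTEGER NOT NULL DEFAULT 0,
    content_size INTEGER NOT NULL,
    satellite_id BLOB
);
CREATE UNIQUE INDEX idx_piece_space_used_satellite_id
    ON piece_space_used(satellite_id);
""")
tables = {row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
print(sorted(tables))  # ['piece_space_used', 'versions']
con.close()
```

With an empty piece_space_used table, the node starts from zero recorded usage and rebuilds the stats over time.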