Disk usage discrepancy?

The client can do it only via a support ticket, I think.

I noticed a significant increase in trash volume a few months ago. Since then I have had around 10% trash most of the time. The recent ‘anomaly’ comes on top of this.

Since trash emptying runs once a day, you should check whether your trash folder actually contains the amount the dashboard says. It could be a display/database bug. I experienced this on one problematic node, and manually corrected the database (I have used-space filewalker disabled).
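One way to verify this, assuming a Linux host; `STORAGE_DIR` below is an assumed path, so adjust it to your node's actual storage directory:

```shell
# Compare on-disk trash usage with what the dashboard reports.
# STORAGE_DIR is an assumed path -- adjust to your node's storage directory.
STORAGE_DIR="${STORAGE_DIR:-/mnt/storj/storagenode/storage}"

# Per-satellite trash usage (the dashboard sums these folders):
du -sh "$STORAGE_DIR"/trash/* 2>/dev/null

# Exact total in bytes, for comparison with the dashboard value:
du -sb "$STORAGE_DIR"/trash 2>/dev/null | cut -f1
```

If the number on disk differs significantly from the dashboard, that points to the display/database bug rather than actual trash.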


No, nothing, no errors.

Additionally, you can find a great explanation by Mitsos here;
read the thread from this post down: Bloom dont work or why my trash is empty? - #36 by Mitsos

We also have an official explanation of what actually happened here:

That sheds a lot of light on the situation.
In short, from now on the situation should get better and better by the week!

If your node got restarted, you will not see a “good” value for trash, because the filewalker has to rerun before it updates the trash value to a correct one. So if you have a big node and you restarted it, the trash value might be wrong for a few days or even longer.


Now, since my fast node grew from 3 TB (looking normal) to 5.5 TB, I have this. Version 1.97.3.

Windows agreed with the 5.1-5.6 TB used.

It’s been 5 days.
I only had 2 retain logs from 1 satellite (removed ~17 GB).
Not one log from gc-filewalker, used-space-filewalker, or piece:trash.
No db errors anymore either, or any other errors in general.
Logging is set to info.
Lazy filewalker is set to false.
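To check whether any filewalker activity is being logged at all, a quick grep over the node log helps; the log path below is an assumption, so use your node's actual log file (or `docker logs storagenode 2>&1` on Docker setups):

```shell
# Look for filewalker-related log lines (subsystem names as used in this thread).
# LOG is an assumed path -- point it at your node's actual log file.
LOG="${LOG:-/mnt/storj/storagenode/node.log}"
grep -E 'gc-filewalker|used-space-filewalker|piece:trash|retain' "$LOG" 2>/dev/null | tail -n 20
```

No matches at info level suggests the filewalkers never started, rather than that they ran silently.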

Then please check them all

Because there are only a few possible reasons for databases not being updated:

  1. DBs are broken
  2. You have database locks which prevent them from updating
  3. The lazy filewalkers fail during the run
  4. The node has been restarted in the middle of the filewalker process (any); they will start over when their trigger next fires (they keep no memory of already scanned pieces, because it’s wiped out during a restart)
  5. You have FATAL errors, which cause the node to stop or restart and reset the progress of all running filewalkers
  6. You disabled the used-space-filewalker on start but have any of the problems above, or side usage (Chia mining, etc.)
  7. You didn’t perform How To Forget Untrusted Satellites
  8. You use a network filesystem (any), mixed virtualization (Linux on Windows or Windows on Linux), NTFS under Linux, or exFAT on any OS (or any other filesystem with a huge cluster size).
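For reasons 1 and 2, a quick structural check of the databases can be done with the `sqlite3` CLI while the node is stopped; `DB_DIR` below is an assumed path, so adjust it to wherever your node keeps its `*.db` files:

```shell
# Check each storagenode database for corruption (run with the node stopped).
# DB_DIR is an assumed path -- adjust to where your node stores its *.db files.
DB_DIR="${DB_DIR:-/mnt/storj/storagenode/storage}"
for db in "$DB_DIR"/*.db; do
  [ -e "$db" ] || continue
  printf '%s: ' "$db"
  # Prints "ok" for a healthy database; anything else indicates corruption.
  sqlite3 "$db" 'PRAGMA integrity_check;'
done
```

A database that is merely locked will typically report "database is locked" errors in the node log rather than fail this check.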

Then it’s not finished yet, or not started at all. You should have a start log line and a finish log line for each filewalker and each satellite. They may run for several days.
For example, for a lazy filewalker on my biggest node:

2024-03-04T02:51:41Z    INFO    retain  Moved pieces to trash during retain     {"process": "storagenode", "Deleted pieces": 452481, "Failed to delete": 504, "Pieces failed to read": 0, "Pieces count": 11509460, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "65h26m46.5101556s", "Retain Status": "enabled"}

“Duration”: “65h26m46.5101556s”

Hi @Alexey

Are there any plans to add a feature for the filewalker to resume from where it left off last time?

As I understand it, the code for this is already completed and will probably be deployed with version 1.101.3.


Yeah, it’s in 1.101.3. It’s a pre-release on Storj’s GitHub now, for the brave ones.
Brave heroes of Storj, please install it and after a week tell us if it worked :smiley:

Is there any way to use that version with Docker? I would try it for sure, because I’m very close to deleting the whole node… this bug is really annoying.

edit: I started Graceful Exit because I have $17 held… after GE is done (in 30 days) I’ll start a new node



I have this link.
However, the latest versions no longer seem to appear there. Could this be updated so that the latest versions are available for download and testing?

The docker container is static and doesn’t follow storagenode versions, so that updates can be automatic. On startup it queries https://version.storj.io and downloads the latest binary.

You could set the version manually by running bash inside the container and pulling the latest binary yourself, or by maintaining your own version URL and changing the VERSION_SERVER_URL environment variable to point to it.
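As a sketch of the second option, assuming the standard `docker run` invocation for a storagenode; the server URL, wallet, email, address, and host paths below are all placeholders, not tested values:

```shell
# Hypothetical example: point the container at your own version server.
# https://my-version-server.example/ is a placeholder -- host your own
# copy of the version document there.
docker run -d --restart unless-stopped --name storagenode \
  -e VERSION_SERVER_URL="https://my-version-server.example/" \
  -e WALLET="0x..." \
  -e EMAIL="you@example.com" \
  -e ADDRESS="external.address.example:28967" \
  --mount type=bind,source=/mnt/storj/identity,destination=/app/identity \
  --mount type=bind,source=/mnt/storj/storagenode,destination=/app/config \
  storjlabs/storagenode:latest
```

On startup the container would then query your URL instead of https://version.storj.io when deciding which binary to download.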


nvm, I had no hope that it would fix itself; as said in the edit, I started GE and will start a new node when GE is done

edit: does anyone know if I can start a new node now, while GE is still running?

Yes, you can run multiple nodes. Each additional node needs a new identity. You can use the same email to create these new identities.
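As a sketch of the identity step with the `identity` CLI; the name `storagenode2` is just an illustrative directory name for the new identity, and `<email:token>` is a placeholder for the authorization token you request first:

```shell
# Create a new identity for the additional node; "storagenode2" is an
# illustrative name for the new identity directory.
identity create storagenode2

# Authorize it with a fresh authorization token tied to your email.
# <email:token> is a placeholder -- request the token from Storj first.
identity authorize storagenode2 <email:token>
```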


Not sure, but it looks like GE is not really working… Shouldn’t the used space get lower? Since I started GE, not a single GB has been removed.