Disk usage discrepancy?

And what would you do if something goes wrong?

1 Like

I don’t know if I understood it correctly, but are you asking what I will do if the day never comes when it becomes as easy as Windows? If that’s the case, I’ll have to keep running it on my own, getting help from the forum like I do now! If it were easy, I would try to explain it to my friends, but right now it is too difficult for me to recommend!

I don’t live in an English-speaking country, so I have trouble interpreting it. Please understand even if I give a strange answer. :sweat_smile:

My disk usage seems to be far off.
[image]

df -h reports 18TB disk usage.

I’ve enabled the filewalker at startup and the walkers seem to have finished:
storagenode | 2024-07-02T08:42:50+02:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
storagenode | 2024-07-02T08:42:50+02:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
storagenode | 2024-07-02T08:42:51+02:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
storagenode | 2024-07-02T08:42:51+02:00 INFO lazyfilewalker.trash-cleanup-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
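For reference, one way to double-check that all the walkers finished and to spot related problems, assuming a Docker setup with a container named storagenode (adjust the container name to yours; the exact log wording can differ between versions):

docker logs storagenode 2>&1 | grep -E "used-space-filewalker|gc-filewalker|trash-cleanup-filewalker" | grep -E "started|finished"   # walker start/finish messages
docker logs storagenode 2>&1 | grep -iE "error" | grep -iE "filewalker|database"   # errors touching the walkers or the databases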

I ended up lowering my available storage to prevent new data from coming in, hoping the database would catch up. However, nothing seems to be happening aside from downloads.
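For anyone wanting to do the same, a minimal sketch of lowering the allocation, assuming a Docker node; the value below is only an example, and the same setting can also be passed as the STORAGE variable on the docker run command:

# in config.yaml (then restart the node)
storage.allocated-disk-space: 18.00 TB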

You suggested having a simplified installer. What would you do if something went wrong with your node?
You would not know anything about the setup, right? You just pressed Next. What would you do?
I guess the first thing you would do, like almost any Windows user, is reinstall the node, right? Removing all the data and the identity.
We do not want this, so, sorry - no "Next" button any time soon. We need SNOs who are able to troubleshoot and fix simple issues like port forwarding, or even more complex ones - optimizing the filesystem or fixing errors on the disk, etc.

1 Like

Please use df --si to get the same measurement units as on the dashboard.
However, it will show the used space on the whole disk, not the usage within the allocation.
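To illustrate the unit difference: df -h uses binary prefixes (TiB, powers of 1024) while df --si and the dashboard use decimal prefixes (TB, powers of 1000), so 18 TiB is roughly 18 × 1.0995 ≈ 19.8 TB. The path below is just an example; use your node's mount point:

df -h /mnt/storagenode    # binary units, 1 TiB = 1024^4 bytes
df --si /mnt/storagenode  # decimal units, 1 TB = 1000^4 bytes, matching the dashboard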

To fix the discrepancy you need to enable the used-space filewalker on startup if you disabled it (it’s enabled by default), restart the node, and check that the filewalkers are progressing without issues and without errors related to either the filewalker or the databases.
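As a sketch of the relevant setting, as I understand it (double-check the exact key name in your own config.yaml, since option names can change between versions):

# config.yaml: the startup scan (used-space filewalker), enabled by default
storage2.piece-scan-on-startup: true
# then restart the node, e.g. for Docker:
docker restart storagenode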

Did you enable the scan on startup?
Did you disable the lazy mode? If so, the filewalker wouldn’t report to the logs, and you would need to track it differently.
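When lazy mode is off, the walker runs inside the main process, so there are no lazyfilewalker subprocess lines. One rough, unofficial way to see whether it is still busy on Linux is to watch the read I/O of the storagenode process; a steadily growing read_bytes while the node is otherwise quiet usually means the scan is still running:

cat /proc/$(pidof storagenode)/io   # repeat after a minute and compare read_bytes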

I don’t know what SNO means, but it seems like this is difficult to implement right now. I’m not suggesting it; I’m just saying that I hope a more convenient day will come in the future! :wink: Thank you for your hard work on the development of STORJ.

1 Like

SNO means Storage Node Operator.

It’s not difficult, but it requires the SNO to know how to troubleshoot issues. AI is not that useful so far.
We had a simplified setup in the past (Storj v2). It introduced so many issues and so much node churn that we decided not to repeat that experience.

1 Like

It’s surprising that you’ve already implemented a simple method in the past…
Thank you for letting me know what SNO means.

Like you said, I will try to expand my knowledge of problem solving, and since you answer so well,
I am slowly following along and learning. Let’s be together for a long time! :heart_eyes:

1 Like


This has been done. Anything else?
Is there a way I can force Storj to re-check the storage being used? I remember the overused amount was 200 GB+, with another 200 GB+ in Trash.
Now I have updated the Storj allocation to 5 TB so it doesn't try to exceed the space and cause more errors.

The Storj dashboard is not showing almost 1 TB of data that is on the drive.
Is it possible to have a pinned post or something so we can follow these steps for this issue?

Will the scan run only on node restart (with the walker enabled in the config), or will it recalculate the used space on its own at some point even without restarting the node?
Will it fix itself with the walker disabled at startup, or do I have to enable it and let it finish?
(I have the scan on startup disabled at the moment.)

PS: it looks like the satellites aren’t reporting usage again.

If the database was on an HDD at some point and there were many errors related to a locked database, are the downloaded files that did not make it into the database added after the filewalker completes? Or are they not taken into account and will remain an eternal burden on the disk?

SNO’s meaning is a well-kept secret. Only the high hierarchy knows the true meaning.
https://forum.storj.io/t/what-sno-means/23633?u=snorkel

1 Like

So, I really think there might be a bug with the latest versions of Storj. I have turned off the lazy filewalker and left things running for a few days. Nothing has changed in my drive space. I still see the following in the properties of the Storj folder:
Size = 30 TB
Size on disk = 37 TB

However, my dashboard shows:
Used 23.29 TB
Free 0 (I dropped my allocated space to 23 TB so as not to fill things up)
Trash 240.31 GB
Overused 534.71 GB
If I add those numbers up it comes out to around 24 TB, so somewhere there is around 6 TB not accounted for by the node, even though the disk sees it. I have also moved my databases to my SSD drive and have run multiple disk checks on the filesystem, which show no errors.
I run my node on a Windows box with iSCSI connections to an 8-bay Synology NAS. It has its own network for the iSCSI traffic and a read cache assigned to it with internal SSDs. All bays are filled with WD Red drives in a RAID 5 configuration. I never have I/O issues with it. I hadn’t had any issues with Storj and storage until recently, and I’ve been running nodes for years. I don’t see any errors in the logs except for downloads here and there not completing because the remote party didn’t respond in time.
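Just to make the gap explicit with the numbers above:

23.29 TB (used) + 0.24 TB (trash) + 0.53 TB (overused) ≈ 24.06 TB tracked by the node
30 TB (folder Size) - 24.06 TB ≈ 5.9 TB of files the node is not accounting for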

1 Like

That sounds like an issue with the filesystem. I am using ZFS with a recordsize=1M. Any piece that is smaller than 1M will be padded by ZFS. These extra bytes are not seen by the storage node. Solution: Enable compression and this filesystem overhead is gone.
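For completeness, a sketch of how that can be checked and turned on for a ZFS dataset; pool/storagenode is a placeholder dataset name, lz4 is a common choice, and compression only affects newly written blocks:

zfs get recordsize,compression pool/storagenode
zfs set compression=lz4 pool/storagenode   # removes the zero-padding overhead on new writes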

You might have something similar going on with your filesystem.

1 Like

I am not sure why this would all of a sudden appear, though. I’ve been running this setup for years. I also had 30 TB allotted to Storj when the 1.104.x version first came out. I noticed my node filling up quickly when it was at 24 TB, so I upped it to 30 TB. Then, after it got close to filling that up, it dropped way down. The trash never showed this; the stored amount just decreased, everything else stayed the same on the dashboard, but the disk still showed the same usage. So I know I have some overhead with the files, but I don’t think that would add up to the discrepancy. FYI, this runs on NTFS.
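If you want to rule out cluster-size overhead, you can check the volume's cluster size on the Windows side (D: is a placeholder drive letter; run it in an elevated prompt and look for "Bytes Per Cluster"):

fsutil fsinfo ntfsinfo D: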

In my example with ZFS the overhead would be almost unnoticeable as long as the customer uploads match my recordsize. The moment the upload pattern changes to smaller pieces I might see higher overhead. (Or just enable compression and don’t worry about it.)

1 Like

So Not Ordinary :grin:
(20 characters)

My node also does not recognize about 6 terabytes after the update. I’m turning off the lazy filewalker and waiting for the task to complete. I hope it goes well…

Understood, but the Windows properties of my Storj folder show two amounts: the Size of the folder, 30 TB, which is the sum of all the actual file sizes, and the Size on disk, 37 TB. My issue is that the Size of 30 TB should be reflected in Storj, because that is what the actual files add up to, not what they take up on the disk due to factors like cluster size and overhead. However, my node only shows 24 TB of combined data. I understand that will not equal the actual size on disk, but it should be pretty close to the Size field in the folder properties, and it’s 6 TB off.

@Alexey, is it possible to somehow start the old-style filewalker? It worked faster, and the node is full now, so there’s no worry about IO.
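Not an official answer, but as far as I know the lazy mode can be switched off in config.yaml, which makes the walkers run at normal priority inside the main process again; double-check the exact key name in your config.yaml before relying on it:

# config.yaml (restart the node afterwards)
pieces.enable-lazy-filewalker: false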