Disk usage discrepancy?

If you have “database is locked” errors, the reported usage will be wrong. You need to either optimize the filesystem or move the databases to a different, less loaded disk/SSD.
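You can confirm this by searching the logs for that error. For example, on a Windows GUI node (the log path below is the installer default, adjust it to your setup):

Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "database is locked"

On a Docker node, piping docker logs storagenode 2>&1 through a similar filter does the same job.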

This means that the cluster size is 8 KiB if the total disk size is less than 32 TiB, and 16 KiB if it’s greater than or equal to 32 TiB; see “hard drive - Get the default cluster size for NTFS - Super User”.
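To see why this matters: every file occupies a whole number of clusters, so a 1 KiB piece on a 16 KiB cluster still takes 16 KiB on disk. A minimal PowerShell sketch to estimate the slack for a folder (the path and cluster size here are examples, take the real cluster size from fsutil fsinfo ntfsinfo):

# Estimate NTFS cluster slack. Cluster size and path are examples.
$cluster = 16KB
$files = Get-ChildItem -Path 'D:\storagenode\storage\blobs' -Recurse -File
$logical = ($files | Measure-Object -Sum Length).Sum
$onDisk = ($files | ForEach-Object { [math]::Ceiling($_.Length / $cluster) * $cluster } | Measure-Object -Sum).Sum
"{0:N1} GiB logical, {1:N1} GiB on disk, {2:N1} GiB slack" -f ($logical/1GB), ($onDisk/1GB), (($onDisk - $logical)/1GB)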

Then you need to either optimize your filesystem or disable the lazy mode and restart the node.

As designed.

Yes, because it would be updated by the scan on startup; or, more likely, the database updates from before the restart were lost. Did you have “database is locked” errors in the logs?

In that case you may track its progress in the logs. Only when the lazy mode is off do you need to use alternative methods, because non-lazy filewalkers are much less verbose.
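For example, the lazy used-space filewalker logs its start and progress per satellite, so something like this should show activity (log path as above; the pattern is the logger name I’d expect, so treat it as an assumption):

Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "used-space-filewalker"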

Many have already confirmed this to be the worst-performing setup. Either drop the VM or use Linux.

I’ve never had the lazy filewalker enabled, because I wanted the operation to finish quickly. The disks are all NTFS, obviously, without any cache or anything.

In fact, I had already indicated that the configuration is not optimal, but I already had the infrastructure, so I took advantage of it.

If I remember correctly, the lazy filewalker is the default today, so if you haven’t set it to false, then you are using it.

Maybe I expressed myself badly. I meant that I never enabled it, because I inserted this configuration line in each node:

pieces.enable-lazy-filewalker: false
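For anyone copying this: the line goes into the node’s config.yaml, and the node must be restarted to pick it up. On a Docker node, the same option can be passed as a flag appended after the image name in the docker run command:

--pieces.enable-lazy-filewalker=false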

I have not changed anything related to that, so I think it’s enabled.
Is there a way I can check if it’s running?

I understand all the talk going around (I think): old nodes that never ran the filewalker, wrong filesystems, corrupted DBs, and so on.

But HOW is this happening on a node that is 1.5 weeks old?! Anyone with new nodes seeing this?

It might have something to do with the NTFS cluster size: if it is set too big, small files take up more drive space than they actually need.
I’m not sure, though, as I’m not running any Windows nodes, but I believe it was discussed here on the forum as well.

Yeah, it does sound like a Windows problem. I have no issues on ZFS. I noticed some differences between old and new nodes in terms of average piece size, but everything is within expectation.

The DBs have long been on an SSD. What I’m interested in is: what happens to the files that hit “database is locked” errors? Will the filewalker add them to the DB? Or will they be ignored and stored on disk for free forever?

Maybe, but then Storj should look for a solution for this (there might be one). I don’t know if I can adjust the cluster size on a running node. No clue.

A Windows problem? Maybe, and that’s fine. I’m just looking for a solution, because it’s getting out of hand and not making any sense. It’s a lot of wasted space that should be put to use, and this should not be SO apparent on a BRAND new node. I understand if there is a few percent of discrepancy, but this is insane (if the numbers in the dashboard are to be believed).

You select the cluster size when you format the drive. I’m not sure if it can be changed on the fly, or at all without reformatting the drive.
But can you check the current cluster size? In CMD you can run fsutil fsinfo ntfsinfo C: (for the C: drive, for example) to see if this might be the problem.
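The value to look for is “Bytes Per Cluster” in the output. A trimmed example (values illustrative):

fsutil fsinfo ntfsinfo C:
Bytes Per Sector  :               512
Bytes Per Cluster :               4096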

Yes, check the logs:
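If the lazy mode is on, the lazy filewalker should leave “lazyfilewalker” entries in the log. A quick check on a Windows GUI node (installer-default paths, adjust as needed):

Select-String -Path "C:\Program Files\Storj\Storage Node\storagenode.log" -Pattern "lazyfilewalker"

You can also confirm whether the option is set at all in config.yaml:

Select-String -Path "C:\Program Files\Storj\Storage Node\config.yaml" -Pattern "lazy-filewalker"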

They would try. However, if the filewalker hits a “database is locked” error when it finishes the scan and wants to update the database, then unfortunately not.