Disk usage discrepancy?

Your OS shows you TiB, while Storj uses TB in the settings and dashboard. The HDD manufacturer also states capacity in TB, so base 1000, not 1024.
You can note the available space after formatting the drive and installing whatever you need on it, including the storagenode, then take that value X TiB and compute Y TiB = X TiB - 1 TiB.
Convert Y TiB to Z TB and use Z as the allocated space. I reserve at least 1 TB on all my drives, from 8 TB to 22 TB; you can do whatever you want. This overhead isn't just for the data sent in one hour from the sats, it also covers filesystem overhead from formatting, etc. Maybe an IT specialist here can explain it better.
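The arithmetic above can be sketched as follows. The drive figures are illustrative, not from a real disk:

```python
# Sketch of the allocation arithmetic described above:
# X TiB free after formatting -> subtract ~1 TiB headroom -> convert to TB.

TIB = 1024 ** 4  # tebibyte, base 2 (what the OS reports)
TB = 1000 ** 4   # terabyte, base 10 (what Storj and drive vendors use)

free_after_format_tib = 14.55           # X: example value reported by the OS
usable_tib = free_after_format_tib - 1  # Y: keep ~1 TiB of headroom
allocate_tb = usable_tib * TIB / TB     # Z: the value to enter as allocated space

print(round(allocate_tb, 2))
```

Note that the TiB-to-TB conversion factor is 1024^4 / 1000^4 ≈ 1.0995, i.e. roughly a 10% difference.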

It uses the provided measurement units: if you specify TB, it will be terabytes (base 10); if TiB, tebibytes (base 2).
You may request the storage free space in TiB and provide that number as TB; that would be roughly a 10% difference.
We have all checks in place, and the node will reject an upload if it has less than 500 MB of free space, regardless of the allocation. However, if we were to introduce a bug, it might accept the upload and you could find yourself unable to start the node again (the DB uses space, but it's not accounted as used, and if the DB cannot add a record, the node will likely crash) until you free up some space.
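The kind of safety check described above can be sketched like this. The 500 MB threshold comes from the discussion; the path and function name are illustrative, not the node's actual implementation:

```python
# Sketch of a pre-flight free-space check: refuse new uploads when the
# volume has less than 500 MB free, regardless of the allocated space.
import shutil

MIN_FREE_BYTES = 500 * 1000 ** 2  # 500 MB, base 10

def can_accept_upload(storage_path: str) -> bool:
    """Return True only if the volume holding storage_path still has headroom."""
    free = shutil.disk_usage(storage_path).free
    return free >= MIN_FREE_BYTES

print(can_accept_upload("."))
```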

But if the HDD is dedicated to Storj and you've allocated all the space, you can't free any, and you can't restart the node.

Yeah, it's not so simple to be a SNO;
no "set up once and forget" as some say here.
Raise the payouts for heroic SNOs! :wink:

Yeah, but the economics won't allow it…
So it's better to be on the safe side. Not necessarily 10% of free space… but at least a few hundred GB.

I guess below 600 GB it will get risky.

It doesn’t have to be recovered, since GC will just clean up expired pieces once the segments expire on the satellite side.

Hi. Developments: the index-removal action ran for a long time (9 days), and still nothing from the file walker.
The logs show nothing for the file walker either.
Any more thoughts?

Good. How was the fragmentation of the drive?
Is the filewalker enabled at startup (the default) in the yaml?

Be patient, filewalks need time.
Which version is the node on now?

Probably it just needs to be defragmented, or some alone time.
What was the hardware again?

"Good. How was the fragmentation of the drive?"
0% fragmented.
"Is the filewalker enabled at startup (the default) in the yaml?"
Yes, lazy.
"Which version is the node on now?"
1.96.6
"Probably it just needs to be defragmented, or some alone time."
It's been like this for over a month.
"What was the hardware again?"
Windows 10 22H2, i5 and 8 GB RAM, 16 TB CMR HDD.

Anything unusual in the logs?

That's the strange part, nothing unusual… download canceled, downloaded, upload canceled and uploaded… the usual.
When it starts, I can't even see the "lazy file walker started" entry :face_with_peeking_eye:

Mind uploading the logfile? For next week, maybe set it to debug and upload that too?

Not sure I can… it's gigabytes :smiley:
Let me see if I can put it on the cloud and share a link.

Gigabytes for a week? The active logfile? That's too big.

Stop the node, rename storagenode.log to e.g. oldlog.log, then start the node.

It's not for a week, it's way over a week, following @Alexey's advice that the filewalker might take a month to finish.
Anyway, here it is: Microsoft OneDrive

There are two options:

  1. Enable the filewalker on startup (the default)
  2. Enable the lazy mode (the default)
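In the storagenode's config.yaml, these two defaults correspond to options roughly like the following. The key names are from memory of the storagenode config and may differ by version, so verify them against your own file:

```yaml
# Assumed key names -- double-check against your own config.yaml.
# run the used-space file walker when the node starts (default: true)
storage2.piece-scan-on-startup: true
# run the file walker as a low-priority subprocess (default: true)
pieces.enable-lazy-filewalker: true
```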

The minimum log level should be info.
You may then search the logs for the filewalkers' progress.
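Searching the log can be sketched as below. The log line and helper name are illustrative; any line mentioning "walk" (filewalker, lazyfilewalker, used-space-filewalker, …) is of interest:

```python
# Sketch: pick filewalker-related entries out of node log lines.
def filewalker_lines(lines):
    """Return the lines that mention the filewalker (case-insensitive)."""
    return [ln for ln in lines if "walk" in ln.lower()]

# Illustrative sample lines, not real log output.
sample = [
    "2024-01-01T00:00:00Z INFO lazyfilewalker.used-space-filewalker started",
    "2024-01-01T00:00:01Z INFO piecestore download started",
]
for entry in filewalker_lines(sample):
    print(entry)
```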

I'm fulfilling all those requirements, @Alexey, and still no entries in the log file.
Actually, searching for filewalker log entries was the main reason I stopped my weekly routine of deleting the logs.

This is VERY unusual. Do you have ANY entries containing walk? You must have, unless you set the log level below info, which is mandatory for this case.

Maybe restart the node? :nerd_face:
FW runs only on start.