Why is there such a great discrepancy in available disk space between the three different methods?

Why is there such a great discrepancy in available disk space between the three different methods of measurement? (Screenshots: Capture2704, Capture2705, Capture2706)

Available on disk

Available within assigned space, metadata not counted

Available within assigned space, metadata counted

2 Likes

This is because the Storj dashboard accounts in decimal units (TB), but Windows shows binary units (TiB).
Binary uses 1024 × 1024 × 1024…, but decimal uses 1000 × 1000 × 1000… and in the end the difference is big.

Storj does all its accounting in decimal units.

TB is decimal, TiB is binary. But only Windows labels binary TiB values as TB.
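For anyone who wants to see the gap concretely, here's a quick sketch in Python (illustrative numbers only, not Storj code):

```python
TB = 1000**4   # decimal terabyte, what Storj (and drive vendors) use
TiB = 1024**4  # binary tebibyte, what Windows reports but labels "TB"

# A value stored as 2 TB (decimal) looks smaller once Windows divides by 1024^4:
print(round(2 * TB / TiB, 2))   # 1.82

# The gap between the two units is about 10% at the tera scale:
print(round(TiB / TB, 4))       # 1.0995
```

So a drive reported as "1.82 TB" in Windows and "2 TB" on the dashboard is the same number of bytes, just divided by different unit sizes.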

1 Like

Thanks, always good to know something new.

Actually it's a much-misunderstood topic… I ended up digging into it a lot more than I should have. From what I could gather, it comes down to bandwidth vs. storage…

A bus has channels, and every time you double the channel count the throughput / bandwidth doubles, so bandwidth denominations are based on powers of two: 1, 2, 4, 8, 16… I'm sure you're familiar with the sequence. That's why it was used in RAM, and it ended up becoming a set standard…

However, when storing data you don't get that multiplication, so decimal makes better sense there. Of course, for us computer commoners this is basically engineering lingo… it helps designers understand the systems they build and do rough calculations in their heads when considering bus speeds and whatnot…

So to most people it's pretty irrelevant; for somebody in system architecture or chip design, though, I suspect it's a requirement, or it would put one's mind at a severe disadvantage in understanding how stuff works.

Like, say you've got 1 platter in an HDD… add another and you get 2 platters with double the capacity; add a 3rd and you get 3 times the original. Capacity scales linearly.

Bandwidth doesn't work that way: start with 1 channel, double it and you get 2 :smiley: I'm getting to it… lol. Double again and you get 4 (a factor of 2, i.e. powers of 2, each step). So after just three steps we're at 1, 2, 4 instead of 1, 2, 3: already a 33% deviation (or 25%, depending on which side you measure the percentage from).

Or at least that's what I've found so far that actually makes sense…
Took me like a week of processing… I mean pondering it… xD
From a system architect's perspective it makes perfect sense to use both approaches.

I don't suppose it's a big surprise to anyone that there's a big difference between wires and HDDs. Of course, with SSDs maybe taking over the market in the future, RAM and HDDs basically converge on the same technology… but that still won't change the fact that bandwidth and storage capacity work in two very different ways.

Why does so much data accumulate in the trash? I see that there's quite a lot of it. When do the satellites decide to delete these files?

After 7 days it is deleted by your node. How much data do you see in there?

1 Like

i just don’t look… then it doesn’t bother me xD

1 Like

Okay, thank you. I have 470 MB, but since I saw that the guy in the post has 3 GB… I thought it was a lot.

It’s pretty normal to have a few GB in the trash folder. Nothing to worry about. And it pales in comparison to the total storage available.

1 Like

11 posts were split to a new topic: Decimal units or binary: GB vs GiB, TB vs TiB

I now have 465 GB in the trash folder on one of my nodes :wink:

My system reports a negative 126.2 GB of disk space available on the dashboard, 369.02 GB of disk space remaining in the web interface, Linux reports 332G available on the disk, and the web interface shows about 20 TB*h used every day over the last few days, i.e. 833 GB used and paid, on a 1.8 TB drive.
Any idea which number I can trust in this mess?

The logs will show yet another number every time it starts a download / ingress…
it shows the number of bytes it thinks are left on the drive…

The dashboard will show what remains of the max space you allowed…

Linux shows the free space on the volume, tho keep in mind this doesn't include the 10% margin you need to keep free so the storagenode doesn't try to store data when the disk runs out of space.

The web interface is an estimate…

Use the Linux number … minus 10% of the storagenode's full size, to get the maximum recommended space utilization.

The log showing wrong numbers can be an issue tho… if the node runs for a long time it may sometimes think it's running out of space even when it isn't… a restart of the node fixes it.
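The rule of thumb above (cap the allocation at the disk size minus a 10% margin) can be sketched like this; the function name and numbers are just my illustration, not official Storj tooling:

```python
def max_allocation_gb(disk_size_gb: float, margin: float = 0.10) -> float:
    """Allocate at most the disk size minus a safety margin,
    so the storagenode never tries to write to a full disk."""
    return disk_size_gb * (1 - margin)

# For the 1.8 TB (1800 GB) drive discussed in this thread:
print(round(max_allocation_gb(1800)))  # 1620
```

So on a 1.8 TB drive you'd set the allocated space to roughly 1.6 TB, not the full capacity.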

The logs don't show anything; with a negative 126 GB I don't get any ingress.

The dashboard should show what remains of the max space I defined. The only problem is I defined a 1.8 TB drive, it shows it's using 1.9 TB, but only 1.6 TB is actually used on the drive.

If you had those kinds of numbers on your bank statement, you would worry about the bank's reliability.

We are dealing with math, an exact science, not estimates or guesswork.

If my bank statement had different currencies, I'd do the conversion: 1.6 TiB = 1.76 TB, which rounds to 1.8 TB. The higher usage shown is because trash from garbage collection is counted twice; restart your node, wait a bit, and that will correct itself. But since you mentioned bank statements: your payout doesn't depend on these intermediate local stats, it depends on the satellites' accounting, which is based on the pieces and size you actually hold. You get the paystub info for that after payout, and that would be closer to your bank statement.
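The conversion mentioned here checks out; a quick sketch of the arithmetic, using the standard 1024⁴ / 1000⁴ ratio between the binary and decimal units:

```python
TIB_IN_TB = 1024**4 / 1000**4  # about 1.0995 decimal TB per binary TiB

used_tib = 1.6  # what the OS reports as used, in binary units
print(round(used_tib * TIB_IN_TB, 2))  # 1.76, which rounds to the 1.8 TB allocation
```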