Huge allocation overwritten by storagenode

If you allocate more space than the drive's capacity, the storagenode will overwrite the value with the real capacity of the drive and display that on the dashboard.
So, if you see a discrepancy between the dashboard and the OS about occupied space (like in the recent trash problems, where the database isn't updated after TTL pieces are deleted), the recommended workaround is to allocate more space than the drive's capacity (if the drive is only used for the storagenode) to still get ingress.
It seems that the software ignores the value if it's bigger than the actual capacity of the drive, so the workaround doesn't work.
I see this in version 105, on Ubuntu Server with Docker.
I have 2 drives of 22TB used only for Storj.
As I experienced the usage discrepancy, I set 40TB of allocated space for each to continue getting ingress, because I had enough free space that was wrongly reported as occupied.
But the dashboard shows 21.9TB allocated space.

In your example, did the OS also show the extra fake capacity (40TB), or only the real 22TB? I can't remember if you're one of the people experimenting with ZFS, but it lets you make "sparse volumes" where the reported size is larger than the real size.

(So you could make 22TB of real space report as 40TB of space at the OS level… then as you actually start to fill you can add more real space later so you never really run out)
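As a minimal sketch of that idea (assuming a hypothetical pool named `tank` and volume named `storj`), a thin-provisioned ZFS zvol can report more space than physically backs it:

```shell
# -s makes the zvol sparse (no reservation), so it reports 40T
# even if the pool only has 22T of real space behind it.
zfs create -s -V 40T tank/storj
```

You would then put a filesystem on the zvol and grow the pool with more drives before it actually fills.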


Has been mentioned already here:

It seems there was a fix that now prevents the override, which used to band-aid the problems caused by the usage discrepancies.

So this is intended behavior, introduced in version 1.98. I don't know why they did this, but it should be reverted.
I use ext4, and the drive has 22TB.

The node always selects the minimum of the "allocated" value, "used + free (in the allocation)", and "used + free (on the disk)".
Of course the “used” value is taken from the databases…
However, it’s calculated only on restart as far as I know.
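The selection rule described above can be sketched as follows. This is a hypothetical illustration, not the storagenode's actual code; the function name and parameters are assumptions for the example:

```go
package main

import "fmt"

// effectiveAllocation models the rule described above: the node
// advertises the minimum of the operator's allocation, the space
// accounted inside the allocation, and the real space on the disk.
// The "used" value comes from the databases, which is why a stale
// database skews the result.
func effectiveAllocation(allocated, usedDB, freeInAllocation, freeOnDisk int64) int64 {
	candidates := []int64{
		allocated,
		usedDB + freeInAllocation, // "used + free (in the allocation)"
		usedDB + freeOnDisk,       // "used + free (on the disk)"
	}
	min := candidates[0]
	for _, c := range candidates[1:] {
		if c < min {
			min = c
		}
	}
	return min
}

func main() {
	const TB = int64(1_000_000_000_000)
	// Operator allocates 40TB, database says 10TB used, but the 22TB
	// drive only has 12TB free: used + free on disk (22TB) wins.
	fmt.Println(effectiveAllocation(40*TB, 10*TB, 30*TB, 12*TB) / TB) // prints 22
}
```

Under this model, setting the allocation above the physical capacity can never raise the advertised size, which matches the behavior reported in the thread.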