For now, the minimum allowed free space on the storagenode drive, as reported by the OS, is hardcoded to 5GB (4.65GiB). Once free space hits this limit, a signal is immediately sent to the satellites to stop sending data, because the drive is considered full, regardless of what the storagenode software/dashboard thinks or what you set as allocated space. This limit is very useful, because the occupied space reported by the storagenode software isn't always accurate and can be buggy.
This is OK for drives used only for storagenodes.
For drives used for something else as well, like users' personal files, programs, the OS, databases, etc. (the best example being NASes, where the OS and apps run on the same drive as the storagenode), we need to be able to define a minimum allowed free space, as reported by the OS, that we are comfortable with, and that can't be lower than the hardcoded one.
So there should be two values:
the hardcoded one (5-10GB), which cannot be bypassed;
the user-defined one, set in the config or in the docker run command, greater than or equal to the hardcoded one.
I like the idea of being able to specify only the minimum. The node won't care what the max is, just that it always leaves some space free. And the minimum free space is something the node can always get quickly and reliably from the OS.
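A minimal sketch of that check in shell, assuming a hypothetical `STORAGE_MIN_FREE` environment variable (the variable name and the enforcement point are assumptions for illustration, not actual storagenode options):

```shell
# Hypothetical sketch: enforce the larger of the hardcoded floor and a
# user-supplied minimum free space. STORAGE_MIN_FREE is a made-up setting.

HARDCODED_MIN=$((5 * 1000 * 1000 * 1000))    # 5 GB hardcoded floor, in bytes

effective_min() {                            # max(hardcoded, user-defined)
    user_min=${1:-0}
    if [ "$user_min" -gt "$HARDCODED_MIN" ]; then
        echo "$user_min"
    else
        echo "$HARDCODED_MIN"
    fi
}

free_bytes() {                               # OS-reported free space, in bytes
    echo $(( $(df --output=avail -k "$1" | tail -n 1) * 1024 ))
}

# Refuse new uploads once free space drops below the effective minimum:
if [ "$(free_bytes /tmp)" -lt "$(effective_min "${STORAGE_MIN_FREE:-0}")" ]; then
    echo "disk full: signal satellites to stop sending data"
fi
```

Note the node never needs to know the allocated maximum here; it only asks the OS for the current free space, which is cheap and reliable.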
In my case, running Synology and Ubuntu servers, I can only use that trick on Ubuntu, because there the OS has its own NVMe drive.
But Synology copies DSM (its OS) onto all installed drives; that way, if one drive fails, the machine keeps working as if nothing happened.
So as is, this doesn't seem like a simple change such as letting a user define a flag; it would probably need a larger change, such as sending a "has free space" bit to the satellite while having a user-controlled minimum-free-space parameter.
I don't care. I saw my node pass through that 5GB barrier, down to about 3GB. Later I saw the node do some more uploads, so I guess the satellite took care of it and offloaded data to nodes with more free space. But I'm still not sure that 5GB is big enough; maybe just raise it, but to how much, I don't know.
I agree. I've set a quota on the storagenode dataset on all my nodes and set the storage allocation to 200TB. It has been working correctly for a few weeks now.
And by correctly I mean it makes node storage management no longer my problem. The node cannot exceed the quota no matter what, even if the satellite keeps sending data in spite of low space.
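The quota setup described above can be reproduced with standard ZFS commands; `tank/storagenode` below is a placeholder dataset name, and the sizes are examples:

```shell
# Cap the dataset the storagenode writes to; ZFS enforces this regardless
# of what the node software or the satellite believes.
zfs set quota=8T tank/storagenode

# Optionally also reserve the space, so other datasets in the pool
# cannot starve the node of its allocation.
zfs set reservation=8T tank/storagenode

# Verify both properties:
zfs get quota,reservation tank/storagenode
```

With the quota in place, the node's own allocation setting can safely be oversized (as with the 200TB above), since the filesystem, not the node, is the enforcement point.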
Does anyone know if fallocate can mark some space as "used" while still allowing the filesystem to treat that space as free for its own operations?
I guess I want a simple way to trick the node into thinking the space is being used, while in reality the filesystem still has that space free and can use it.
I am using XFS/ext4 on some nodes. ZFS has quotas and can even reserve space at runtime. I am aware of XFS/ext4 quotas, but I'm wondering if this is simpler.
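A quick experiment shows why this can't work the way hoped: `fallocate` in its default mode preallocates real blocks (so the space is genuinely subtracted from the filesystem's free count), while a sparse file made with `truncate` occupies no blocks at all (so the space stays free). The paths below are throwaway temp files:

```shell
dir=$(mktemp -d)

# Sparse file: logical length 100M, but (almost) no blocks allocated,
# so the filesystem still counts that space as free.
truncate -s 100M "$dir/sparse.img"

# Preallocated file: fallocate reserves real blocks, so the space is
# genuinely removed from the filesystem's free space.
fallocate -l 100M "$dir/alloc.img"

du -k "$dir/sparse.img" "$dir/alloc.img"   # on-disk usage in KiB differs
ls -l "$dir"                               # both report a 100M length

rm -rf "$dir"
```

So neither tool gives "used but still free" space: either the blocks are reserved (fallocate) or they are not (truncate), which is exactly the dichotomy described in the reply below.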
So you want Schrödinger disk space: both used and available at the same time?
No. It doesn't work like that. A given sector on disk is either free or taken; it is either counted as available free space or not. If a file is sparse, it takes no sectors on disk, so those sectors are still counted as free. (Note that plain fallocate actually preallocates blocks; it only creates holes in modes like --punch-hole or --dig-holes, whereas truncate creates a sparse file directly.)