Storage Reservation for DB Discussion

My very old node is finally about to fill 6TB! A good milestone accomplished! Since a <8TB node is easier to manage and move around, I'll be starting up a new node soon.

Be sure not to over-allocate space! Allow at least 10% extra for overhead. If you over-allocate space, you may corrupt your database when the system attempts to store pieces when no more physical space is actually available on your drive.

This request seems more of a hasty, un-optimized estimate to keep nodes safe. I don't blame anyone one bit for that, but I feel this recommendation is worth discussing for the network's health.
100x 10TB nodes = 100TB returned to the network. That seems like a very realistic scenario currently.

Given that my DB files total only 1.5GB, I don't see the benefit of reserving nearly 1TB of excess space.

Is excess space also needed for "deleted" files sitting in temp that aren't yet accounted for?

Notes I added to the idea above about separating the DB path from the storage files:

1) The requirement to reserve 10% overhead on the storage drive can be removed.

a) On a 10TB node, you're asking for 1TB of storage for database overhead. That seems a bit over the top, and removing the requirement would return this storage to the network.

Alternative: Ask for a recommended realistic reservation. 100GB?
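The difference between the two rules can be sketched with some quick arithmetic. This is only an illustration; the 100GB flat figure is the hypothetical alternative floated above, not an official Storj recommendation:

```python
# Compare a percentage-based reservation (the current 10% advice) against
# a hypothetical flat per-node reservation (e.g. 100 GB), for a few node sizes.
TB = 1000  # work in GB for readability

def reserved_pct(capacity_gb, pct=0.10):
    """Space held back under a percentage-based rule."""
    return capacity_gb * pct

def reserved_flat(capacity_gb, flat_gb=100):
    """Space held back under a flat per-node reservation."""
    return min(flat_gb, capacity_gb)  # can't reserve more than the disk holds

for size_tb in (2, 8, 10):
    cap = size_tb * TB
    print(f"{size_tb} TB node: 10% rule holds back {reserved_pct(cap):.0f} GB, "
          f"flat rule holds back {reserved_flat(cap):.0f} GB")
```

On a 10TB node this is the 1TB-vs-100GB gap argued about above; on a small 2TB node the two rules are much closer.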

2) As mentioned previously, performance for both the node and the Storj network would improve by removing IOPS demand from the storage drive. It would also result in less repair traffic, since fewer nodes would fail from high IOPS demand on the storage volume.

This calculation doesn’t apply to the recommendation.

An individual node will require some level of slush storage space to account for such things as the database space, trash collection, minor space calculation issues between the SNO software and the filesystem, and various other practical issues when dealing with mounted hard drives that are nearly full of data.

It’s inappropriate to add up all the extra recommended space from multiple nodes into a large pool of Free Space for Storj to utilize. The algorithm doesn’t work that way… and the excess space suggestion is not utilized by the network. Rather, it’s a safety precaution recommendation to SNOs so that said SNOs might be able to run a node for a longer time period with fewer technical difficulties… or even disqualification due to minor differences in filesystem reporting vs. SNO space allocations.
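The point that reserves don't pool can be sketched as follows. The node/piece sizes here are exaggerated illustrative assumptions (real Storj pieces are small), chosen only to show that a piece must fit on a single node's own drive; free space elsewhere doesn't help:

```python
# Sketch: per-node reserves protect only their own drive. Summing the
# "excess" across nodes does not create one big pool the network can use.

def can_store(piece_gb, node_free_gb, reserve_gb):
    """A node accepts a piece only if it fits without eating into the reserve."""
    return node_free_gb - piece_gb >= reserve_gb

# Ten hypothetical nodes, each with 120 GB free and a 100 GB reserve.
nodes = [{"free": 120, "reserve": 100} for _ in range(10)]
piece = 50  # GB, deliberately oversized for illustration

# Naively summed headroom across all nodes: 10 * (120 - 100) = 200 GB...
total_headroom = sum(n["free"] - n["reserve"] for n in nodes)

# ...yet no single node can actually take the 50 GB piece.
placeable = any(can_store(piece, n["free"], n["reserve"]) for n in nodes)
```

The summed headroom looks like usable capacity on paper, but placement happens per node, which is why the 100x-nodes multiplication in the earlier post doesn't describe space the network could actually claim.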


I kind of agree with both of you… neither argument is unreasonable…

10% is both a lot and not much… I think it's a perfectly viable recommendation, but that is all it is: a recommendation. You are responsible for your node. If you've got hundreds of TB, I'm sure you get a sense of just how low a percentage you can go, because a node that size would be much more stable in its expansion; bandwidth becomes the limiter, and then you'd most likely have time to add more storage if needed…

I think removing the 10% recommended free space beyond the node data is a mistake…
