I do my estimates in the following way.
First, estimate the number of pieces you expect to have.
- You can look up the average segment size here. As of writing this post, it is 7.25 MB.
- Divide it by 29 (the number of pieces required to reconstruct a segment) to get the average piece size: about 250 kB.
- Divide your allocated disk space by the average piece size. For 1 TB, that comes to about 4 million pieces.
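
As a quick sanity check, here is that arithmetic as a small Python sketch. The 7.25 MB average segment size is just the value observed at the time of writing and will drift, so treat it as an assumption to be re-checked:

```python
# Back-of-the-envelope piece count; the 7.25 MB average segment size
# is the value observed at the time of writing and will drift.
AVG_SEGMENT_SIZE = 7.25e6   # bytes
PIECES_PER_SEGMENT = 29     # pieces required to reconstruct a segment

avg_piece_size = AVG_SEGMENT_SIZE / PIECES_PER_SEGMENT   # ~250 kB

allocated_space = 1e12      # bytes, i.e. 1 TB allocated to the node
expected_pieces = allocated_space / avg_piece_size       # ~4 million

print(f"average piece size: {avg_piece_size / 1e3:.0f} kB")   # 250 kB
print(f"expected pieces:    {expected_pieces / 1e6:.1f} M")   # 4.0 M
```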
Now, estimate the amount of RAM you need per file. These numbers depend a lot on your software stack, as any additional layer (like Storage Spaces, RAID, whatever) adds its own requirements. Assuming you have a file system set up directly on a raw partition of a single HDD, this would be:
- For default ext4 this is around 300 bytes (inode + direntry data structures).
- For default NTFS this is probably around 1 kB.
Multiply the expected number of pieces by the per-file RAM to get the estimate. For the 1 TB ext4 example above, that is 4M pieces × 300 bytes ≈ 1.2 GB.
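
Putting both steps together, a minimal sketch using the rough per-file figures above (both are averages under the stated assumptions, not guarantees):

```python
# Per-file RAM figures from the list above; rough averages, not guarantees.
BYTES_PER_FILE = {
    "ext4": 300,    # inode + direntry data structures
    "NTFS": 1000,   # rough estimate for default NTFS
}

expected_pieces = 4e6   # from the 1 TB example above

for fs, per_file in BYTES_PER_FILE.items():
    estimate_gb = expected_pieces * per_file / 1e9
    print(f"{fs}: {estimate_gb:.1f} GB of free RAM for metadata caching")
# ext4: 1.2 GB, NTFS: 4.0 GB
```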
Remember, this is an estimate of the amount of RAM that needs to be free after your OS, the node itself (assume less than 0.5 GB), and all other software running on the same system take their chunks.