PART I. New node and Filesystem settings for DSM Docker.
I’m ready to start a new storagenode in Docker on a Synology DS220+ (6 GB RAM) with a single Exos X16 16 TB HDD. This machine will run only the storagenode; I don’t plan to run multinode or add another drive.
I’m used to Windows and I don’t understand the file system options in Linux.
After installing DSM 7.x, I’m faced with choosing between different types of RAID and FS. What should I choose, and what are the best settings for the storagenode?
I read that CoW on btrfs and the journal on ext4 should be deactivated. How do I do that? The posts I found are old and don’t give a definitive answer; some vote for one filesystem, some for the other. My first Synology Docker node (DS216+, 1 GB RAM) runs on SHR with ext4, has 1.2 TB occupied, and continuously makes a scratchy sound. An identical HDD (IronWolf 8 TB) runs a storagenode with 2.5 TB occupied on a Windows machine and doesn’t make any noise. If I choose the wrong settings, the new HDD will make the same awful grinding sound and maybe won’t perform well.
What do you recommend? The options are:
A. RAID type: SHR, Basic, JBOD.
B. Filesystem: btrfs, ext4.
Btrfs options:
- Record file access time: Daily, Monthly, Never.
- Usage detail analysis: Enable, Disable.
- Low capacity notification: XX%.
- Data Scrubbing schedule: Enable (with period), Disable.
- Space reclamation schedule.
- RAID Resync speed limits: lower impact on system performance, resync faster, custom.
- Fast Repair: enable, disable.
- Enable write cache.
- Bad sector warning.
- File System Defragmentation: Run or Not.
Ext4 options:
- Record file access time: Daily, Monthly, Never.
- Low capacity notification: XX%.
- Data Scrubbing schedule: Enable (with period), Disable.
- RAID Resync speed limits: lower impact on system performance, resync faster, custom.
- Fast Repair: enable, disable.
- Enable write cache.
- Bad sector warning.
PART II. Earnings/ROI estimates.
I have 2 nodes running in different locations, 1 node per subnet, internet over fiber optic with more than 500 Mbps up/down, in Europe.
NODE I. Age 12 months, space occupied 2.53/7 TB, total earned $45.
In month 12 I see: Download 253 GB, Repair 43 GB, Disk avg 2.4 TB, Gross total $9.10.
NODE II. Age 7 months, space occupied 1.17/7 TB, total earned $16.70.
In month 7 I see: Download 273 GB, Repair 5.8 GB, Disk avg 1.08 TB, Gross total $7.15.
So it seems that egress doesn’t scale proportionally with the storage occupied: Node I holds more than twice the data of Node II, yet gets slightly less download traffic (253 GB vs. 273 GB).
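To see where the money comes from, here is a minimal Python check of the reported gross against per-TB rates. The rates are my assumption about what sits behind these numbers (the older published figures of $20/TB download egress, $10/TB repair egress, $1.50/TB-month storage); the traffic and disk values are the ones from the dashboards above:

```python
# Sanity check: reported monthly gross vs. a simple rate model.
# Rates are an assumption (older published payout rates):
EGRESS_RATE = 20.0   # $/TB of download egress
REPAIR_RATE = 10.0   # $/TB of repair egress
STORAGE_RATE = 1.5   # $/TB-month stored

nodes = {
    # name: (download TB, repair TB, disk average TB, reported gross $)
    "NODE I, month 12": (0.253, 0.0430, 2.40, 9.10),
    "NODE II, month 7": (0.273, 0.0058, 1.08, 7.15),
}

for name, (dl, rep, disk_avg, reported) in nodes.items():
    egress = dl * EGRESS_RATE + rep * REPAIR_RATE
    storage = disk_avg * STORAGE_RATE
    print(f"{name}: egress ${egress:.2f} + storage ${storage:.2f} "
          f"= ${egress + storage:.2f} (reported ${reported:.2f})")
```

Under those assumed rates both nodes earn roughly the same from egress (about $5.50/month), and only the storage component grows with occupied space, which would explain why earnings don’t grow in step with the data held.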
Based on my experience with these nodes, I made some estimates in an Excel table using a fixed egress (it’s impossible to predict future ingress or egress, you can only extrapolate from past data); it looks too optimistic to me though:
|Month|Downloads (TB/month)|Repair (TB/month)|Storage added (TB/month)|Total storage (TB)|Earnings ($/month)|Total earnings ($)|
|---|---|---|---|---|---|---|
|1|0.015|0.001|0.012|0.012|0.31|0.31|
|12|0.265|0.025|0.235|2.597|9.27|71.54|
|24|0.265|0.025|0.235|5.417|13.5|210.27|
|36|0.265|0.025|0.235|8.237|17.73|399.76|
|48|0.265|0.025|0.235|11.057|21.96|640|
|60|0.265|0.025|0.235|13.877|26.19|931.01|
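For reference, a short Python sketch that roughly reproduces the rows above under my own assumptions: the same assumed rates as in the previous snippet, a fixed 0.235 TB of retained data per month after month 12, and the disk average taken as the end-of-month total minus half of that month’s growth. The monthly figures match the table; the cumulative totals may drift by a cent from rounding:

```python
# Rough reproduction of the spreadsheet above (assumed rates, fixed egress).
EGRESS_RATE = 20.0    # $/TB download egress (assumed)
REPAIR_RATE = 10.0    # $/TB repair egress (assumed)
STORAGE_RATE = 1.5    # $/TB-month stored (assumed)

downloads = 0.265     # TB/month, held fixed ("fixed egress" assumption)
repair = 0.025        # TB/month, held fixed
growth = 0.235        # TB of new data retained per month

total_stored = 2.597 - growth   # stored at the end of month 11 (from the table)
total_earned = 71.54 - 9.27     # earned through month 11 (from the table)

for month in range(12, 61):
    total_stored += growth
    # disk average ~ end-of-month total minus half of this month's growth
    disk_avg = total_stored - growth / 2
    monthly = (downloads * EGRESS_RATE
               + repair * REPAIR_RATE
               + disk_avg * STORAGE_RATE)
    total_earned += monthly
    if month % 12 == 0:
        print(f"month {month:2d}: {monthly:6.2f} $/month, {total_earned:7.2f} $ total")
```

This also shows why the curve looks optimistic to me: the egress part is frozen at the month-12 level for four more years, and the monthly payout only grows through the $1.50/TB-month storage term.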