New storagenode on Synology DS220+; choosing filesystem and some ROI estimates

PART I. New node and Filesystem settings for DSM Docker.
I’m ready to start a new storagenode on a Synology DS220+ (6GB RAM) with one Exos X16 16TB HDD, in Docker. This machine will run only the storagenode, and I don’t plan to run multinode or add another drive.
I’m used to Windows and I don’t understand the file system options in Linux.
After installing DSM 7.x, I’m faced with choosing between different types of RAID and FS. What should I choose, and what are the best settings for the storagenode?
I read that CoW on btrfs and the journal on ext4 must be deactivated. How do I do that? The posts I found are old and don’t give a definitive answer; some vote for one FS, some for the other. My first Synology Docker node (DS216+, 1GB RAM) runs on SHR with ext4, has 1.2TB occupied, and continuously makes a scratchy sound. I have an identical HDD (IronWolf 8TB) on a Windows machine, running a storagenode with 2.5TB occupied, and it doesn’t make any noise. If I choose the wrong settings, the new HDD will make the same awful grinding sound and maybe won’t perform well.
What do you recommend? The options are:

A. RAID type: SHR, Basic, JBOD.
B. Filesystem: btrfs, ext4.

Btrfs options:

  1. Record file access time: Daily, Monthly, Never.
  2. Usage detail analysis: Enable, Disable.
  3. Low capacity notification: XX%.
  4. Data Scrubbing schedule: Enable (with period), Disable.
  5. Space reclamation schedule.
  6. RAID Resync speed limits: lower impact on system performance, resync faster, custom.
  7. Fast Repair: enable, disable.
  8. Enable write cache.
  9. Bad sector warning.
  10. File System Defragmentation: Run or Not.

Ext4 options:

  1. Record file access time: Daily, Monthly, Never.
  2. Low capacity notification: XX%.
  3. Data Scrubbing schedule: Enable (with period), Disable.
  4. RAID Resync speed limits: lower impact on system performance, resync faster, custom.
  5. Fast Repair: enable, disable.
  6. Enable write cache.
  7. Bad sector warning.
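On the “how do I deactivate CoW and the journal” part of the question: DSM’s GUI doesn’t expose either toggle. For reference only, on a generic Linux shell the two operations look roughly like this — the paths and the `md` device are placeholders, and removing the ext4 journal is risky and generally not recommended:

```shell
# btrfs: mark the storage directory NOCOW (no copy-on-write).
# Only affects files created after the attribute is set.
chattr +C /volume1/storj/storage        # placeholder path
lsattr -d /volume1/storj/storage        # verify: a 'C' flag should appear

# ext4: the journal can only be removed while the filesystem is unmounted.
# /dev/md2 is a placeholder; DSM maps volumes to md devices internally.
umount /volume1
tune2fs -O ^has_journal /dev/md2
e2fsck -f /dev/md2                      # always fsck after changing features
```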

PART II. Earnings/ROI estimates.
I have 2 nodes running in different locations, 1 node per subnet, on fiber internet with over 500 Mbps up/down, in Europe.
NODE I. Age 12 months, space occupied 2.53/7TB, total earned 45$.
In month 12, I see: Download 253GB, Repair 43GB, Disk avg 2.4TB, Gross total 9.1$.
NODE II. Age 7 months, space occupied 1.17/7TB, total earned 16.7$.
In month 7, I see: Download 273GB, Repair 5.8GB, Disk avg 1.08TB, Gross total 7.15$.
So it seems that egress doesn’t scale proportionally with the storage occupied.

I made some estimates in an Excel table, based on my experience with these nodes, using a fixed egress (it’s impossible to predict future ingress or egress; you can only extrapolate from past data). It looks too optimistic to me, though:

|month|downloads TB/month|repairs TB/month|storage TB/month|total storage TB|earnings $/month|total earnings $|
|---|---|---|---|---|---|---|
|1|0.015|0.001|0.012|0.012|0.31|0.31|
|12|0.265|0.025|0.235|2.597|9.27|71.54|
|24|0.265|0.025|0.235|5.417|13.5|210.27|
|36|0.265|0.025|0.235|8.237|17.73|399.76|
|48|0.265|0.025|0.235|11.057|21.96|640|
|60|0.265|0.025|0.235|13.877|26.19|931.01|
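As a sanity check, the reported monthly gross figures are consistent with the payout rates in effect at the time — roughly $20/TB for egress, $10/TB for repair egress, and $1.50 per TB-month of average stored data. Those rates are my assumption from the figures, not stated in the thread; a small shell sketch of the arithmetic:

```shell
# gross <egress TB> <repair TB> <avg stored TB>  -> gross $/month
# Assumed rates: $20/TB egress, $10/TB repair, $1.50/TB-month storage.
gross() {
  awk -v e="$1" -v r="$2" -v s="$3" \
      'BEGIN { printf "%.2f\n", e * 20 + r * 10 + s * 1.5 }'
}

gross 0.253 0.043  2.40   # Node I, month 12  -> 9.09 (reported: 9.1)
gross 0.273 0.0058 1.08   # Node II, month 7  -> 7.14 (reported: 7.15)
```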

I advise against btrfs based on my own experiences: -t 300 not enough - #25 by Toyoo


Assuming this is really for Storj only, I would create a Basic volume on each disk and run two separate nodes. Any form of RAID either wastes half your space or causes your entire node to fail on a single disk failure.
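For the “separate nodes” route, a second node is just a second container with its own name, host port, identity, and data directory. A sketch following the shape of Storj’s documented `docker run` command at the time — the wallet, email, address, storage size, and `/volume2` paths below are all placeholders:

```shell
# Second node: its own host port (28968), identity, and data directory.
# WALLET/EMAIL/ADDRESS values and the /volume2 paths are placeholders.
docker run -d --restart unless-stopped --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp \
  -e WALLET="0xYOURWALLET" \
  -e EMAIL="you@example.com" \
  -e ADDRESS="your.ddns.host:28968" \
  -e STORAGE="14TB" \
  --mount type=bind,source=/volume2/storj2/identity,destination=/app/identity \
  --mount type=bind,source=/volume2/storj2/data,destination=/app/config \
  --name storagenode2 storjlabs/storagenode:latest
```

`ADDRESS` must advertise the same port you forward on the router (28968 here), and `STORAGE` is set below disk capacity to leave headroom, per the ~90% advice later in this thread.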

Either should work, but you’re not really going to use advanced features of btrfs. Perhaps just go for ext4 as a tried and true option.

Never. This will save a lot of write operations when the filewalker or garbage collection is busy.

As desired. You shouldn’t let Storj use more than 90%. Setting a warning at 95% can help catch issues should your total usage grow beyond that for any reason.

I don’t think this option is available on basic single disk volumes with ext4. Should you go with another setup and this is available, I would enable it.

Same as above, but should you choose a RAID setup anyway, pick the “lower impact on system performance” option.

Not sure what this one does. Can you provide more info?

Yes, unless you have unreliable power and no UPS. Write caching significantly helps performance, but ungraceful shutdowns risk damaging your data. That’s usually not vital for Storj, since one or two missing pieces isn’t a big issue, but the databases could get corrupted, which is nastier. If you can’t rule out ungraceful shutdowns, disable this and take the performance hit.

Enable

For earnings estimates over time check this topic: Realistic earnings estimator

In DSM:
Storage Pool > Global Settings > Advanced Repair Settings >
Fast Repair
Compared with Regular Repair, Fast Repair shortens the time needed for a degraded storage pool to recover to a healthy status and skips unused spaces in a storage pool to accelerate the repair speed.

On website:
Fast Repair is supported on storage pools on which volumes have been created, and can be enabled to accelerate the storage pool repair process. Compared with the traditional rebuild method, Fast Repair skips the unused spaces in a storage pool to accelerate the repair speed and resume RAID protection as fast as possible. This option is enabled by default.

Note:

  • Only storage pools with data protection (i.e., RAID 1, RAID 5, RAID 6, RAID 10, RAID F1, SHR-1, and SHR-2) support Fast Repair.
  • The repair process cannot be accelerated when Block-Level LUNs on storage pools are being repaired. The Block-Level LUN feature is only supported on DSM 6.1 and below, but remains functional when DSM is upgraded to 6.2 or above.
  • You can manually run data scrubbing to optimize a storage pool if the storage pool is in RAID 5, RAID 6, or RAID F1 configuration and if storage pool optimization is not performed after Fast Repair is complete.

I think this also applies to a RAID setup. It’s in DSM 7.x, not in 6.x.

That must be why I haven’t seen it before. I’ve been putting off the update because it will require me to go through a lot of code in an old project of mine, translating deprecated code to what’s supported by the newer PHP versions in that build. I’m not looking forward to that.

Anyway, I see no reason not to use that if you do go with raid, but as previously mentioned I would advise against that in your scenario.

Yes, this machine is only for the storagenode. I know the ROI will take approximately 5 years, but I couldn’t find anything second-hand locally, and I like new things :slight_smile: I come from the mining field, with rigs and ASICs, and after I sold them all, I missed that… I can’t stay away from money-making machines :blush:
No RAID for me. If I buy more HDDs, they will be used for new nodes, not for redundancy. I have many locations with good internet where I can put them.


I have a similar setup: a DS920+ with four 16TB Exos drives in SHR-1, using btrfs. I run several things on the server: NAS duties, Home Assistant, deCONZ. I might wear out the drives sooner.