Large nodes (8-10TB) on Synology NAS struggling with every update

It seems like every time there is an update, my nodes really struggle, sometimes to the point where the entire Synology becomes unresponsive. I think this could be due to the filewalker process that runs after watchtower restarts a node.

Is there any way that we can get some control over this process (I don’t know, tell it to go slower, use less memory, or something)?

I don’t think there is anything that you can do to change the process. I am guessing if you were to run a separate watchtower instance for each node on the NAS, the random time interval for update checks would be different and this might help you. But this may only be a short term solution since the transition to a native updater does not seem far off.
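
Something like this is what I mean by separate instances: one watchtower per node, each restricted to its own container by name (container names here are just placeholders, not something I have tested on a Synology):

```
docker run -d --name watchtower-node1 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower storagenode1

docker run -d --name watchtower-node2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower storagenode2
```

Since the instances get started at different times, their check cycles end up offset from each other, so the two nodes shouldn’t be restarted at the same moment.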

Perhaps once the new updater comes out, your nodes will also be updated at different times, since it works differently from watchtower and uses a staggered rollout. Maybe someone running multiple Windows nodes has some insight into how multiple updates work in their setups.

If you have an SSD, try moving the node DB to it. This helped me lower iowait on a RAID 6 array.
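
Roughly what that looks like for a docker node, in case it helps. The paths are just examples, and the config key should be double-checked against the docs for your node version:

```
# extra bind mount pointing at a folder on the SSD volume,
# added to your usual docker run command:
docker run -d --name storagenode \
  --mount type=bind,source=/volume1/storj/data,destination=/app/config \
  --mount type=bind,source=/volume2-ssd/storj/dbs,destination=/app/dbs \
  storjlabs/storagenode:latest   # plus your usual identity mount, ports and env

# then in config.yaml, with the node stopped and the *.db files copied over:
# storage2.database-dir: /app/dbs
```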

Not directly related, but if you want to monitor your disk(s) I/O with each option, you could use something like the netdata docker container.
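
The quick-start invocation is more or less this (check the netdata docs for the currently recommended flags):

```
docker run -d --name netdata \
  -p 19999:19999 \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata
```

The dashboard on port 19999 has per-disk I/O and utilization charts, which makes it easy to compare before and after moving the DBs.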

EDIT: You can also configure watchtower to restart one node at a time; combined with a lifecycle hook, this should give you better control over the restarts. For example, you can restart a node and wait 5 minutes before restarting the next one, which should drastically reduce your issues. + info
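
A rough sketch of what I mean; the rolling-restart flag and the lifecycle-hook label are taken from the containrrr/watchtower docs, so double-check the exact names against the version you run:

```
# one watchtower instance, updating the listed containers one at a time
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --rolling-restart --enable-lifecycle-hooks \
  storagenode1 storagenode2

# on each node container: a post-update hook that just sleeps, so the next
# node is not touched immediately (the hook runs inside the container)
docker run -d --name storagenode1 \
  --label=com.centurylinklabs.watchtower.lifecycle.post-update="sleep 300" \
  storjlabs/storagenode:latest   # plus your usual mounts, ports and env
```

If I remember right, lifecycle hooks have their own timeout, so a long sleep may also need the corresponding timeout label raised.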

Is that to give the filewalker time to complete its task? Because that may not be enough…
YMMV, but my poor 2.5" 2TB SMR disk takes 1h30 to rescan all files! ^^’

I did not know we could do that, cool :+1:

An SMR disk probably isn’t the best option to run something this random I/O intensive.

Given the above, you should get even better performance if you move the DB to an SSD or another HDD, as I suggested earlier.

Yup I know :wink:
But that’s what I had, so… I made it work! :slight_smile:

Can’t argue with that :stuck_out_tongue: you did well ! :slight_smile:

Yeah, it’s stuck in an endless reboot loop. I need to shut down all the docker stuff and check for corrupted databases.

Needless to say, my little Synology got overwhelmed.
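
For the database check, this is roughly what I plan to run once everything is stopped (the path is just an example, and sqlite3 has to be available on the NAS or run from a temporary container):

```
# run an integrity check against every node database; each one should print "ok"
for db in /volume1/storj/data/storage/*.db; do
  echo "=== $db"
  sqlite3 "$db" "PRAGMA integrity_check;"
done
```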

Which one do you have? I’m just starting up a second one (2-disk DS220+… it will also be able to share 5TB to 7TB of the total 10)… But I hope it’ll be strong enough…

How much RAM do you have?
If you use SMR drives (which are slow and risky for RAID) and your NAS starts swapping, it’s over.
I’ve had this happen often with too little RAM and Seagate SMR drives (hell, do I hate those Seagate SMR drives, they are incredibly slow).
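
An easy way to check whether the NAS is swapping is over SSH, since this only reads /proc:

```
# swap usage at a glance
grep -E 'SwapTotal|SwapFree' /proc/meminfo

# pswpin/pswpout growing over time = the NAS is actively swapping
grep -E '^pswp(in|out) ' /proc/vmstat
```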

1816+ with 4x16TB drives in it currently.

16TB

I do not use SMR drives in this Synology.

16TB of RAM? Really? :upside_down_face:

heh, sorry, I’m new around here. Megabit, megabyte, terabyte, what’s the difference :stuck_out_tongue:

But no, unfortunately I have a low-end Synology that officially takes 8GB of RAM, but I managed to get it to take 16GB instead (not that I’m some magician, I just googled it).

I’ll compare CPU power, but iowait and I/O are maybe the biggest issue. You also write that you run several nodes on it? Did you make one big SHR, or which type of RAID?
Just wondering: if you sit behind one IP, have one Synology, and it’s one SHR, there’s no need to separate nodes unless you connected the HDDs separately? Running just one node might also change performance?