While waiting for a response I stopped the node to expand the array. It's now doing a parity check, which is taking a long time; based on the current speed it might need 6+ hours…
The question remains:
Can I still run Docker while the array expansion is in progress and the parity check is running?
EDIT: 3hrs in and it's only done ~5% of the parity check.
With the node offline I might have 24hrs of downtime just to add 1 HDD. Seems a bit excessive for a Synology NAS?
You can use all file-based services on a Synology NAS while a rebuild/scrubbing/parity check is running, so your container should work fine as well. It may of course respond a little slower while the NAS is busy with such operations.
It does make sense that the volume expansion has to complete before the space can be re-allocated, but this is the first time I'm doing this with a Synology NAS, so I'm unsure how it's all handled.
You can run the storagenode during recalculation/expansion… but it's better to stop the container when you click to extend/expand. Not sure, but maybe stop Docker for a few seconds…
If you have 3 disks in SHR1, you have "RAID 5" across those 3 disks.
If you have 2 disks in SHR1, you have "RAID 1".
If you have 2x 12TB and 1x 16TB, you get a 24TB RAID 5 storage pool, with 4TB of the 16TB disk unused. But if you add another 16TB disk, you get 36TB of RAID 5 plus 4TB of RAID 1 in SHR. This is the power of SHR.
I have SHR1 with 4x 8TB disks, which is the same as 4x 8TB in RAID 5. But if I had 2x 8TB and 2x 12TB, SHR1 would be the better choice.
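The capacity numbers above can be reproduced with a small sketch (my own illustration of how SHR-1 is commonly described, not Synology's actual code): SHR-1 effectively peels the disks into equal-height layers and runs single-parity redundancy (RAID 5-style, or RAID 1-style when only two disks reach a layer) across each layer.

```python
def shr1_layers(disks):
    """Approximate SHR-1 capacity by peeling equal-height layers off the
    sorted disk sizes. Returns a list of (disks_in_layer, layer_height,
    usable_capacity) tuples. Layers with fewer than 2 disks are unusable
    (no redundancy possible), matching the 'unused 4TB' case."""
    sizes = sorted(disks)
    layers = []
    prev = 0
    for i, size in enumerate(sizes):
        height = size - prev        # extra capacity added at this level
        n = len(sizes) - i          # disks tall enough to reach this level
        if height > 0 and n >= 2:
            # single parity: one disk's worth of each layer goes to parity
            layers.append((n, height, (n - 1) * height))
        prev = size
    return layers

def shr1_usable(disks):
    return sum(usable for _, _, usable in shr1_layers(disks))

# Examples from the thread:
print(shr1_usable([12, 12, 16]))        # 24 (4TB of the 16TB unused)
print(shr1_layers([12, 12, 16, 16]))    # [(4, 12, 36), (2, 4, 4)] -> 36TB RAID5 + 4TB RAID1
print(shr1_usable([8, 8, 8, 8]))        # 24, same as plain 4x 8TB RAID 5
```

This also shows why plain RAID 5 wastes space on mixed disks: it would truncate every disk to the smallest size, while SHR-1 still uses the remainders as a second redundant layer.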
Yep, it took two and a half days to complete the expansion of the array. I was blown away by how slow it went. Once the parity check was complete and the new space was available, I simply shut down the node, expanded its allocation with the newly available space (which took less than 20 seconds), and the node was back online.
I'm now repeating the same process to add the 4th HDD.
Going from 10TB to 36TB available for Storj.
Mind you, my 10TB was only filled to ~6TB.
You're looking at the storage pool, of which 100% of the space is assigned to volumes. But the volumes themselves still have free space and are perfectly usable.
SHR1 uses RAID 1 for 2 disks and RAID 5 for 3 or more disks. So when you expand from 2 to 3 disks, it converts the RAID 1 into RAID 5.
But yeah, I've never stopped my node when expanding my SHR2 array, which takes even longer. The node will work just fine; you might lose a few more races to faster nodes, but that's nothing to worry about.