Should Docker be shut down when adding a new HDD to expand the array / storage volume?

I’m running a Synology NAS with 4 drive bays, of which only 2 are currently in use. I have purchased 2 additional HDDs to expand my storage volume.

Should I stop the Docker node and all other activity on the NAS while adding a new HDD to the existing storage volume / array?

While waiting for a response I stopped the node and started expanding the array; it’s now doing a parity check, which is taking a long time. Based on the current speed it might need 6+ hours…

The question remains:

  • Can I still run Docker while the array expansion is in progress and doing its parity check?

EDIT: 3 hrs in and the parity check is only ~5% done.
With the node offline I might have 24 hrs of downtime just to add 1 HDD. Seems a bit excessive for a Synology NAS?
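
For what it’s worth, the extrapolation is simple enough to do in a couple of lines of Python. This is a back-of-the-envelope sketch only; it assumes the current rate stays constant, which it often doesn’t once the NAS goes idle:

```python
# Linear ETA from the numbers above: ~5% done after 3 hours.
elapsed_h = 3.0
fraction_done = 0.05

total_h = elapsed_h / fraction_done    # ~60 h at the current rate
remaining_h = total_h - elapsed_h      # ~57 h still to go
print(f"~{total_h:.0f} h total, ~{remaining_h:.0f} h remaining")
```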

You can use all file-based services on a Synology NAS while a rebuild/scrubbing/parity-check process is running, so your container might work fine as well. It may of course respond a little more slowly while the NAS is busy with such operations.
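
If you want to watch the progress yourself, here is a minimal sketch. It assumes SSH access to the NAS, a Python 3 interpreter installed on it, and that the SHR pool sits on Linux mdadm (which exposes rebuild progress in /proc/mdstat):

```python
# Report mdadm resync/reshape/check progress from /proc/mdstat.
# Progress lines look roughly like:
#   [=>...........]  reshape =  5.3% (...) finish=3421.7min speed=54321K/sec
import re

with open("/proc/mdstat") as f:
    for line in f:
        m = re.search(r"(resync|recovery|reshape|check)\s*=\s*"
                      r"([\d.]+)%.*finish=([\d.]+)min", line)
        if m:
            action, pct, finish_min = m.group(1), float(m.group(2)), float(m.group(3))
            print(f"{action}: {pct:.1f}% done, ~{finish_min / 60:.1f} h remaining")
```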


Are you 100% sure about that? Because the available capacity shows 0 at present, while the expansion is in progress.

Also this seems really slow :frowning:

I expanded from a 2-disk SHR to 3 disks, and after that to 4 disks, on a DS918+.

All 8TB CMR drives, but the expansion/recalculation was very slow… maybe 2-3 days…

I ran the node during this time and everything was OK, but the SHR storage was very slow…

You’re looking at the storage pool… it’s 100% allocated, that’s OK.
Look at the Volumes instead…

I’m not sure, but after expanding the storage pool you need to expand the volume.

It does make sense that you have to expand the volume to complete the process, so at that point the volume would be restarted to reallocate the space, but… this is the first time I’m doing it with a Synology NAS, so I’m unsure how this is all handled.

I have the same NAS as you.

  1. Expand the storage pool
  2. Expand the volume

You can run the storagenode during the recalculation/expansion… but it’s better to stop the container at the moment you click extend/expand… Not sure, but maybe stopping Docker for a few seconds is enough… something like the sketch below.
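
As a rough sketch (assuming the container is named storagenode, the name used in Storj’s Docker instructions, and that you have Docker CLI access over SSH; adjust to your setup):

```python
# Stop the node around the moment you click expand in DSM,
# then bring it back while the recalculation runs in the background.
import subprocess

def docker(*args):
    subprocess.run(["docker", *args], check=True)

docker("stop", "storagenode")   # stop the node before clicking expand
input("Click 'Expand' in DSM Storage Manager, then press Enter... ")
docker("start", "storagenode")  # node runs during the long recalculation
```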

Cool, thanks.
So I made a mistake and stopped the node while it was expanding the storage pool, which is doing the parity check and will take aaaages.

Glad to know I can still run the node, as I don’t like maintenance downtime lasting that long. Seems unreasonable.

:+1:


Yes, you’re recalculating RAID 1 to RAID 5, that’s hard work for the disks :grinning:


SHR1, not RAID1 or 5.

If you have 3 disks in SHR1, you have “RAID 5” across those 3 disks.
If you have 2 disks in SHR1, you have “RAID 1”.

If you have 2x 12TB and 1x 16TB, you have a 24TB RAID 5 storage pool, with 4TB of the 16TB unused… but if you add a second 16TB disk, you get 36TB of RAID 5 plus 4TB of RAID 1 inside the SHR. This is the power of SHR :+1:

I have SHR1 with 4x 8TB disks and it’s the same as 4x 8TB in RAID 5… but if I had 2x 8TB and 2x 12TB, it would be better to have SHR1.
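
To sanity-check those numbers, here’s a minimal sketch of the SHR1 layering math (my own approximation for illustration, not Synology’s actual allocator):

```python
def shr1_capacity(disk_sizes_tb):
    """Approximate SHR1 usable capacity by stacking RAID layers.

    Each capacity 'slice' shared by n disks contributes:
      n >= 3 -> (n - 1) * slice   (RAID 5 layer)
      n == 2 -> slice             (RAID 1 layer)
      n == 1 -> nothing           (remainder of the largest disk is unused)
    """
    sizes = sorted(disk_sizes_tb)
    usable, prev = 0.0, 0.0
    for i, size in enumerate(sizes):
        n = len(sizes) - i       # disks that reach this capacity level
        layer = size - prev      # thickness of this slice
        if n >= 3:
            usable += layer * (n - 1)
        elif n == 2:
            usable += layer
        prev = size
    return usable

print(shr1_capacity([12, 12, 16]))      # 24.0 (4 TB of the 16 TB unused)
print(shr1_capacity([12, 12, 16, 16]))  # 40.0 (36 TB RAID 5 + 4 TB RAID 1)
print(shr1_capacity([8, 8, 8, 8]))      # 24.0 (same as 4x 8 TB RAID 5)
```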

Yeah, SHR1 is awesome for the flexibility to add bigger HDDs in the future without requiring all HDDs to be the same size, etc. :+1:


Yep, it took two and a half days to complete the expansion of the array; I was blown away by how slow it went. Once the parity check was complete and the newly allocated space was available, I simply shut down the node, expanded the volume with the new space (which took less than 20 seconds), and the node was back online.

Now repeating the same process to add the 4th HDD :+1:

Going from 10TB to 36TB available for Storj :+1:
Mind you, my 10TB was only filled to ~6TB.


With 36TB you’re planning veeeryyy long term, as with these days’ activity it’s not really possible to fill up that much storage space ^^’

Let’s hope things speed up in the coming months/years :wink:


You’re looking at the storage pool, 100% of which is assigned to volumes. But the volumes themselves will still have free space and are perfectly usable.

SHR1 uses RAID1 for 2 disks and RAID5 for 3 or more disks. So when you expand from 2 to 3 disks, it’s converting RAID1 to RAID5.

But yeah, I’ve never stopped my node when expanding my SHR2 array, which takes even longer. The node will work just fine, but you might lose a few more races to faster nodes. Nothing to worry about, though.
