Migrate from RAID5 to RAID6

If you make sure the copy taken while the node is stopped is exactly the same, you should be fine. When you first start the node, keep a close eye on the logs and make sure downloads are working fine. If not, you may have an issue in your setup or a permission issue on the data. If you see any “file does not exist” errors, stop the node immediately and investigate. You should be able to fix it as long as you have all the data.
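
If you want something quicker than eyeballing the log, a minimal sketch like this would flag the relevant lines; the log path and the exact phrases to match are assumptions you’d adapt to your own setup and log format:

```python
# Minimal sketch: scan a storagenode log for signs of missing pieces after
# the migration. LOG_PATH and the matched phrases are assumptions; adjust
# them to your own setup.
LOG_PATH = "storagenode.log"  # hypothetical path

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "file does not exist" in line or "download failed" in line:
            print(line.rstrip())
```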

Good luck! Hope it all goes well.


Well, now that I’m going to make some big changes to the server, I have a few questions. Let’s see if we can work them out together.

This screen appears when I create the new RAID 6, and I have to choose a stripe size. I understand (if I’m not on the wrong track) that the smaller the stripe size, the better it is for small files, but I don’t know whether it will be optimal later, when I format the disks through Windows Server and have to assign a 64K allocation unit (the stripe size can always be migrated online without formatting the disks).

(screenshot: RAID 6 stripe size selection screen)

Then there is also the question of whether to choose NTFS or ReFS, the other file system available on Windows Server, also with the 64K allocation unit.

What is your opinion?

It’s good to know that pieces will be up to 2.21MiB in size. They can’t go beyond that, but they can be smaller. So you definitely don’t want to go with a setting that would have to write a lot more data to store a change of a single piece.

You would have 8 HDDs total, right? That’s 6 for data in RAID 6, which means 512*6 = 3072KiB per full stripe would be too much. So that excludes 512 and 1024. 256 has a similar problem: a max-size piece would need to update 2 full stripes, which has the same overhead.

128 might be good: 128*6 = 768KiB full stripes. Three of those stripes add up to 2.25MiB, which is pretty much perfect for the max piece size. There are also tons of smaller pieces and small db reads/writes though, so 64 is probably also a decent setting. I don’t think you’ll see much of a difference either way.
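
To make that arithmetic concrete, here’s the back-of-the-envelope check as a small Python sketch (the 6 data disks and the ~2.21MiB max piece size come straight from the reasoning above):

```python
# Full-stripe sizes for an 8-disk RAID 6 (6 data + 2 parity) vs the
# ~2.21 MiB max piece size. A larger full stripe means more data written
# (and parity recalculated) to change a single piece.
import math

DATA_DISKS = 6
MAX_PIECE_KIB = 2.21 * 1024  # ~2263 KiB

for unit_kib in (64, 128, 256, 512, 1024):
    full_stripe_kib = unit_kib * DATA_DISKS
    stripes = math.ceil(MAX_PIECE_KIB / full_stripe_kib)
    print(f"{unit_kib:>5} KiB unit -> {full_stripe_kib:>5} KiB full stripe, "
          f"{stripes} stripe(s) ({stripes * full_stripe_kib} KiB) per max piece")
```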

Perhaps someone with more experience on actually using different stripe sizes can confirm whether what I said makes sense.

As for file system, I’m really not familiar enough with the differences to say. But as far as I know it would mostly be a tradeoff between added resilience from ReFS vs higher compatibility for NTFS. I guess you won’t exactly move this array to another system anyway, so that last part may not matter. But I have no idea what the impact on performance would be or which would be better in this specific scenario. Google is your friend I guess. :wink:

I also think that leaving the stripe size at 64 would be optimal, since it would also match the allocation unit size of the disks, which should also be 64K.

And there’s also the issue you mention of compatibility with the different applications that may need to read the ReFS format, which would probably cause problems, so the best option is still NTFS. Thank you.

I think I would do the same on both choices. My current array uses 64 as well.


I think it’s going to be impossible to upload the files to the cloud. I have over 1 million files, and the upload is quite slow because small files take longer to look up, so it runs at about 50 Mbps. With other kinds of files, such as movies, it goes up to 600 Mbps. It could take me a month to do the whole process, without even counting that the node could be disqualified later for the time it was down. That’s impossible. Either I break the RAID or I leave it as it is and we’ll see what the future brings.
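
Just to ballpark the raw transfer time (reading those speeds as megabits per second, and using the ~2.4TB node size that comes up later in the thread; the per-file overhead of a million small files is what pushes the real total toward a month):

```python
# Rough upload-time estimate for ~2.4 TB of node data at the two observed
# rates. Pure transfer time only; per-file overhead of ~1 million small
# files will make the real total much worse.
DATA_TB = 2.4
for label, mbps in (("small files", 50), ("movies", 600)):
    days = DATA_TB * 1e12 * 8 / (mbps * 1e6) / 86400
    print(f"{label}: ~{days:.1f} days at {mbps} Mbps")
```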

You could also zip locally (without compression, it will just slow down the process and the data isn’t compressible), then upload. Though you need at least double the space in order to do that and the node would have to be offline for the entire process.
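
Something like this would do the store-only archiving; the paths are placeholders and the node has to stay stopped while it runs:

```python
# Sketch: pack the node's files into an uncompressed (STORED) zip so the
# upload handles one big file instead of a million small ones.
# SRC and DST are hypothetical paths; DST needs as much free space as SRC.
import os
import zipfile

SRC = "/mnt/storagenode/storage"
DST = "/mnt/scratch/storagenode.zip"

with zipfile.ZipFile(DST, "w", compression=zipfile.ZIP_STORED) as zf:
    for root, _dirs, files in os.walk(SRC):
        for name in files:
            path = os.path.join(root, name)
            zf.write(path, arcname=os.path.relpath(path, SRC))
```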

No, I can’t do that either. The only thing left for me to do is to investigate the option of migrating to a RAID 0, and see if I can move to a RAID 6 from there. But I can’t find anything on the net about it.

The HP help desk seems to indicate that RAID 10 is an option for online migration.

RAID 5 with 10TB disks is extremely risky… losing 40 TB is a lot of pain… but losing an entire RAID 5 to a single read error during a rebuild is even more painful.
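
The usual back-of-the-envelope behind that fear, assuming the common consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read (enterprise drives are often rated 10^15, so check the datasheet):

```python
# Expected unrecoverable read errors while rebuilding an 8x10TB RAID 5:
# the rebuild has to read all 7 surviving disks end to end. The 1e-14
# errors-per-bit figure is the common consumer-drive spec, an assumption.
SURVIVING_DISKS = 7
DISK_TB = 10
URE_PER_BIT = 1e-14

bits_read = SURVIVING_DISKS * DISK_TB * 1e12 * 8
print(f"expected UREs during rebuild: {bits_read * URE_PER_BIT:.1f}")
```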


I don’t have high hopes for RAID0 to RAID6, and furthermore, those are 2 very tricky rebuilds that could cause data loss all by themselves.

At some point I feel like perhaps you should just stay on RAID5 for as long as it survives and, when you lose the node, start a new one on RAID6. It’ll suck to have to start over, but at some point the effort doesn’t really weigh up against getting rid of the RAID5 risk. And while losing your node is bad, it’s not mission-critical data you’re losing.

You should definitely not rely on this array for any important personal data though.

If you have another system or a spare HDD, you could warm up a second node on a separate HDD. That way you can get that one vetted and have it go through the months of held-back amount. If the RAID5 fails, you have a vetted node earning full payouts to migrate to the new RAID6 when it’s ready.

Yes, the best option is to leave it as it is and let time pass. If I find some spare parts and a hard drive, I can set up another node and that’s it. Thanks for your help!


On a Windows server? Yeah, ReFS would be fine.

I doubt that very much, even without a write (BBU) cache, as the files received from the network are tiny, tiny, tiny.


No, in the end I will stay with the veteran NTFS; if I change to the other file system I might have compatibility problems with some applications. That file system was introduced in Windows Server 2008, if I remember correctly, so it’s still very young.

you did catch that it is a 2.4tb node on a 70tb raid5 array right…
he can literally move it to one drive with no hassle if he wanted…

ofc that doesn’t solve the io issues he might run into…
i would still say get it switched to raid6 instead of waiting… its never going to be easier to do than now…

disconnect a drive… format it, copy the node to it… and then one has 7 drives free to set up in a correct way, and he can always expand on top of the extra drive when he is done remaking it…

ofc even an 8 drive raid6 array only has iops like a single drive… so his raw writes are going to be terrible compared to his hardware.


It’s way older. Windows NT 3.1 in 1993 :wink:

Yes, but I also caught that the server isn’t close by. And since raid controllers are basically built to never let an array degrade unless it has to, I doubt it’s going to be possible to remove that drive without physically yanking it. So I’m just trying to suggest an alternative.

That’s NTFS, @Robertomcat was referring to ReFS.


Next Thursday I have to go to Valencia, where I have the server, and I have to spend Thursday and Friday there, but I don’t think that will give me enough time to make the copy to a disk, because it’s going to take a long time, right?

Well, that’s probably something you can do remotely just fine as long as that controller lets you configure a disk as a single disk volume. You can pull it out, plug it back in and configure it as such and start the copy while you’re there. Then remotely remove the RAID5 when it’s done and create a RAID6, copy it back and expand the RAID6 to the last HDD. That should work in theory.

If you want to do it quickly, it’s probably best to just stop the node and start the copy. Since your array will be degraded while you’re copying, it’s probably best not to put additional load on it anyway. And keep in mind that this is similar to the RAID rebuild… you may encounter a URE. Still, if it works, that’s the last time you need to worry about that.
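
As for how long the copy takes, the raw disk-to-disk time is easy to ballpark; assuming the ~2.4TB of node data and typical single-HDD sequential speeds (the million small files will be seek-bound and slower in practice):

```python
# Rough disk-to-disk copy time for ~2.4 TB of node data. The 100-150 MB/s
# sequential throughput is an assumption; a million small files will be
# seek-bound and can take several times longer.
DATA_TB = 2.4
for mb_per_s in (100, 150):
    hours = DATA_TB * 1e12 / (mb_per_s * 1e6) / 3600
    print(f"at {mb_per_s} MB/s: ~{hours:.1f} hours")
```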

What you suggest can’t be done. If I take the disk out and put it back in, it automatically starts rebuilding the disk I took out. The disk cannot be configured in any way to appear as an individual drive.

1.- I would have to take out the disk, connect it through USB and make the copy.

2.- Delete RAID 5, create RAID 6 and move the Storj data from the USB disk back onto the RAID 6. All the other data I have on the server is synchronized in the cloud, and I could download it again later.

3.- Leave the RAID 5 as it is.

I could do everything I mentioned in step 2, but I wouldn’t know how to estimate the time, and I’m also worried about the node being disqualified for downtime. What do you think?
