there it seems like you have to go back to raid 0 and then change from raid 0 into raid6
try that, it should be pretty harmless, assuming you have no major problems with the server and everything is in good condition.
because you will go down to having literally no redundancy for maybe a day or two
but raid 6 is so much safer and better that it would be worth the effort i think
ofc it sort of depends on the value of the data existing on the array.
In a week or less, I will mount this same controller in this same server and add 4 × 4 TB drives, and then I will check whether the option to create a RAID 6 shows up.
I think that guy just stuck to the documentation, and the BIOS is always being updated to add new features (such as being able to share a dedicated graphics card and have the remote iLO console working at the same time).
Yes, it is a good controller, and its battery connects directly to the motherboard, and another cable runs from the motherboard to the cache RAM.
I'll see if I can check it out on another server soon. Thank you for your answers.
if the array is below 10 tb in size you can always pull a drive, migrate onto that from the pool and then go from there… that gives you 7 free drives… mirrors are very easy to work with… raids with high numbers of drives are quite IO limited… ofc the 4gb cache helps with that a lot…
Except for the storj data, I have the rest of the data backed up. I had already thought about the idea you gave me, but I discarded it right away, because I thought that from RAID 0 I would not be able to go to RAID 6, only back to RAID 5.
That thing you're asking me to do is like riding a bike along the ledge of a 50-story building hahaha
Well, seriously. The server is now running properly without any problems, and I could start converting it to a RAID 0 right now.
If I take a hard drive out of the RAID, the node won't be able to do anything until I put the drive back in and everything rebuilds. The first idea, moving to a RAID 0, is better. I should first make sure it is possible to move from RAID 0 to 6.
kinda made the assumption that he didn't have any drives to spare, since else we would have done that… but yeah…
took me way too long to copy my 9 tb node last time… like 5 days… ofc i had a bad drive in the mix and maybe some misaligned drives… last time it managed in about 2 days… which isn't too bad… that was from 2 drives in a span… not sure if a zfs span and a regular span are the same tho… or even close…
i'm sure he'll be fine no matter what he does… it's about getting onto a raid6 before he ends up with a node that takes up most of the drives and locks him into his configuration.
copying from 7 drives onto 1 isn't much strain on the 7… and the 1 drive… well usually hdds start acting up before they go bad… and even if it died he would still have a copy of the node…
and copying it back… well most drives are rated for like 200tb - 550tb of throughput… so unless it's smr drives, there shouldn't be a problem… computers all over the world run like that every day and copy all the time… ofc not at this scale, but it should still be a fairly straightforward operation for only mildly critical data…
at worst he loses a node… and if it works he is safe on that array for a decade, so long as broken drives get replaced in a proper manner…
also minor errors would be correctable or irrelevant, because losing one piece or two or 10 will most likely not DQ his node…
so from my perspective it's barely even risky… ofc if the data was worth $100k or $1m then it would be risky… the reward vs the risk would be too low… because for 1% of the value he could buy a drive to transfer it to… but with it being worth maybe $50 plus some waiting time… then even a 10% risk of failure is acceptable… you could run a 10% chance of failure 30 times in a row and it might still never hit…
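For what it's worth, the odds behind that "30 times in a row" remark are easy to check: the probability that a 10%-per-attempt risk never fires across 30 independent attempts is 0.9^30. A plain awk one-liner, no dependencies:

```shell
# probability that a 10% failure risk never hits in 30 independent tries
awk 'BEGIN { p = 1; for (i = 0; i < 30; i++) p *= 0.9; printf "%.3f\n", p }'
# prints 0.042
```

so dodging it 30 times in a row is possible but unlikely (about 4%); for a single copy operation the relevant number is simply the ~90% per-attempt success.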
don't worry it's fine… nah i'm more worried about your future io when you end up with 60tb of data on an array… because you still only have 1 hdd's worth of iops… even with raid6
is actually a very good suggestion… i think i also ran into that issue at one point.
ofc by now, if you started the other process, it's more important just to push through without changing too much…
Making a backup of the node is somewhat complicated, because in addition to having to be offline for many hours, there is always a risk that some data will be missing and I will be disqualified because of it. Not that my node is very old, but it already holds 2.3 TB of data.
No, I don't have any more drives to spare; that's the whole server.
All the information I have in the raid is replicated in a cloud backup, so I don't have any other disk to move data to. I synchronize the backup from time to time, but only the data that is not from storj; the storj data I don't have backed up anywhere.
Well, currently no more disks can be added, because the server only has two cages, each cage holds four disks, and they are all full.
i don't think john means an actual spare… but reducing the raid5 from 8 to 7 drives and then making the extra drive a spare drive in the array, which then may unlock the raid6 option…
sometimes reconfiguration can be a bit convoluted.
That's impossible. When I remove a disk from the RAID 5, the controller will only let me add a disk back, without giving me any other migration options or anything else. It just wants a disk so it can rebuild as soon as possible.
just pull the redundant drive and use that as storage… it's only a few tb… won't take long at all… and if you don't do it now then you will regret it… my node was like 6tb or 5tb when i started to attempt to migrate it… and i tried once before without much luck… this last time it was 10tb by the time i was done… i barely was able to squeeze it into 2 x 6tb drives in a span… and then i had to copy it all back again afterwards and add the two drives back into the array… ended up being a 14-day-long operation, for various reasons…
Right now I'm not physically in front of the server, I'm 50 miles away and it would still take time to get there. Couldn't I back it up to the cloud?
I'll tell you the steps I'm thinking of taking:
1.- Leave the node running while making the backup in the cloud. Once all the data has been uploaded, turn off the node and copy it again, so that the data that was updated/deleted in the meantime gets synchronized.
2.- From the Windows services, set the node not to start at all until I make the new RAID 6 (building that new array may take a day, and because of that downtime the satellites may disqualify me).
3.- Once the RAID 6 is done, point the node at the same path the storj data folder had before, download everything back, and start the node again.
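The steps above could be sketched as Windows commands, assuming the node runs as a service named "storagenode" and the data lives in D:\storj with the backup staged at Z:\backup (all names here are assumptions, adjust to your setup):

```shell
:: Windows cmd sketch, hypothetical service and path names.
:: 1. First pass while the node is still running (robocopy /MIR mirrors the tree)
robocopy D:\storj Z:\backup\storj /MIR /R:1 /W:1

:: 2. Stop the node, disable auto-start, then sync the final delta
sc stop storagenode
sc config storagenode start= disabled
robocopy D:\storj Z:\backup\storj /MIR /R:1 /W:1

:: ...rebuild the array as RAID 6 here, then:

:: 3. Copy everything back to the same path and bring the node up again
robocopy Z:\backup\storj D:\storj /MIR /R:1 /W:1
sc config storagenode start= auto
sc start storagenode
```

The two-pass robocopy keeps the offline window short: the first pass runs while the node is live, so the second pass only has to move whatever changed.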
Or: convert the whole RAID 5 to 0, and see if a RAID 6 can be made from there. From the link you sent me yesterday, it is not very clear to me what can be done. What do you think about this last option?
sure if you got the bandwidth and the capacity to store it online…
i would do it locally… ofc it would be annoying to go 50 miles to fix the server if it broke remotely…
but i would force 1 drive out of the array… (you should be able to do this remotely) then create a new partition on that and just copy it over to that drive… i assume you don't boot from the raid5
but if you have like 400-600mbit internet… then that's close to hdd write speeds anyway… so not much difference… sure maybe a few times slower… but the distance is also like 1000-100000 times greater than local transmission.
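To put a rough number on that, here is the transfer-time estimate for the 2.3 TB node mentioned earlier, at an assumed 500 Mbit/s (the midpoint of the 400-600 mbit range; sustained throughput in practice will be lower):

```shell
# hours to move 2.3 TB at a sustained 500 Mbit/s (ideal-case estimate)
awk 'BEGIN { tb = 2.3; mbit = 500; printf "%.1f\n", tb * 1e12 * 8 / (mbit * 1e6) / 3600 }'
# prints 10.2
```

so even in the ideal case the upload alone is around half a day, and the same again to download it back after the rebuild.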
you should go with the solution you think makes the best sense… and you are the one best positioned to do so… i don't know if your internet is weird at times… or what the quirks of your raid controller software or server are…
i don't think you should do anything you don't think makes sense… then you might be better off waiting… it's a raid5 after all… can most likely run for years without issue…
besides the fix we are trying to do only fixes the raid5… you might need more iops also… which is a whole new can of worms…
uploading it and downloading it i wouldn't do… then i would most likely just wait until i had a better idea of how i really wanted my array to be… because you might quickly end up with a node that is much larger than is practical to shuffle around.
What you say about removing a disk from the array without physically removing it is impossible; there is no option to do that.
The only thing left is to investigate more about migrating from a RAID 0 to a 6, but I understand it will not be possible, because if it were, the controller should already offer that option from a 5. That's what I think.
I don't have any experience with your controller, but I know that with mdraid the only way to upgrade to RAID6 is by adding a disk. I'm pretty sure it would be the same for you. It's also not possible to remove an HDD from the array usually. It seems converting to RAID0 would only increase the problem, because now you would have to add 2 disks to go to RAID6.
I know that mdraid isn't the same as your RAID controller, but I wouldn't be surprised if the same limitations apply.
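To illustrate that limitation, this is roughly what the migration looks like under Linux mdraid, where RAID5 to RAID6 is done by adding a disk and reshaping. Purely a sketch: /dev/md0 and /dev/sdi1 are hypothetical names, and a hardware Smart Array controller won't expose these commands at all.

```shell
# mdraid sketch only - shows why an extra disk is required for RAID5 -> RAID6.
mdadm /dev/md0 --add /dev/sdi1                    # add the extra disk
mdadm --grow /dev/md0 --level=6 --raid-devices=9  # reshape 8-disk RAID5 into 9-disk RAID6
cat /proc/mdstat                                  # watch the reshape progress
```

There is no equivalent grow path that turns a RAID0 into a RAID6 without adding two disks, which matches the conclusion above.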
Yes, I think the same: it will not be possible to migrate to a RAID 6 without adding another disk.
Currently I cannot add another one, so the only option left would be to destroy the RAID 5 and create a 6, but I would not like to lose the month and a half of age the node has, nor the 2.3 TB of data.
Have you read the options the others and I have put forward?
And surely disqualification is imminent, but I see that I have no other choice.
A month ago, you and I were already talking about the same subject and the problems with my configuration. Let's see if I can work this out once and for all. Thank you all!