RAID6 degraded (ubuntu mdadm), node is down


Today I decided to add a 5th drive to my RAID6 array for storage expansion. The disk appeared in fdisk -l, and I double-checked the state of the RAID after booting with the extra drive: it was clean with 4 drives. I created the needed partitions on the new drive and added it to the mdadm array. At this stage the 5th drive was supposed to stay as a "spare", but I noticed a very strange situation. Instead of 4 healthy drives with the 5th as a spare, I found that 3 drives were healthy, the 4th drive was resyncing after degradation, and the 5th drive (not the one I added) was faulty. That would not be so bad, but... to do this upgrade I took the server out of its place and laid it on the desk. Before leaving the server room, I accidentally moved the case a few centimetres (as I left it on the desk) and the power cord popped out...
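For reference, the sequence I used was roughly the following (this is a sketch; the device names /dev/md1, /dev/sda and /dev/sde are examples, not my exact ones - check yours with lsblk and cat /proc/mdstat):

```shell
# Check the current array state before touching anything
cat /proc/mdstat
mdadm --detail /dev/md1        # /dev/md1 is an assumed array name

# Copy the partition layout of an existing member (/dev/sda, assumed)
# onto the new drive (/dev/sde, assumed)
sfdisk -d /dev/sda | sfdisk /dev/sde

# Add the new partition to the array - it should show up as "spare"
mdadm --add /dev/md1 /dev/sde1
mdadm --detail /dev/md1        # expected: 4 active devices, 1 spare
```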
Now I have "error: ELF header smaller than expected, entering rescue mode".
From the GRUB rescue prompt I tried these steps:

set prefix=(md/1)/boot/grub
set root=(md/1)
insmod normal (after this command I get: "ELF header smaller than expected")

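Since insmod fails, the GRUB core files on disk are likely damaged. One common recovery path (not something I have done yet; a sketch assuming the root filesystem lives on /dev/md1 and the machine boots in legacy BIOS mode from /dev/sda and /dev/sdb) is to reinstall GRUB from a live USB via chroot:

```shell
# From an Ubuntu live session: assemble the existing arrays
sudo mdadm --assemble --scan

# Mount the root array (assumed to be /dev/md1) and bind virtual filesystems
sudo mount /dev/md1 /mnt
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done

# Chroot in and reinstall GRUB onto each member disk (legacy BIOS install),
# then regenerate the config
sudo chroot /mnt
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```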
Then I tried "Boot repair", but it did not help either.

This is the boot-repair info:

Any ideas for this repair are very welcome.

Thank you.

This is not related to the storagenode, it’s Linux-specific.
As suggested there: you should switch the BIOS to Legacy mode with Secure Boot disabled before trying to fix the boot.
Also, the boot-repair utility suggested fixing the filesystem as the next step.
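Checking the filesystem from a live USB might look like this (a sketch; /dev/md1 as the root array is an assumption - run fsck only while the filesystem is unmounted):

```shell
# Assemble the arrays first
sudo mdadm --assemble --scan

# Force-check and repair the (unmounted) filesystem on the root array
sudo fsck -f /dev/md1
```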

I do not have experience with such a case, so I cannot help fix it. I recommend waiting for an answer to your question on askubuntu.

Thank you Alexey, I'm trying to find an answer on askubuntu.