Node disk came to a read-only state, node offline

According to
https://serverfault.com/questions/141122/fsck-file-system-was-modified-after-each-check-with-c-why
the bad-blocks list is updated every time badblocks (or fsck -c) runs.
So it looks like a never-ending cycle of "file system was modified" after each check.
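For what it's worth, the cycle can be avoided by scanning read-only first and feeding the result to e2fsck once, instead of letting fsck -c redo the scan on every run. A minimal sketch on a throwaway image file (all names here are made up, not the real disk):

```shell
# Create a small throwaway ext4 image to scan (NOT a real disk).
truncate -s 16M scan.img
mkfs.ext4 -q -F scan.img

# Read-only badblocks scan; any bad block numbers go to bad.txt.
badblocks -o bad.txt scan.img

# Feed the collected list to e2fsck in a single pass.
e2fsck -f -l bad.txt -y scan.img || true  # exit 1 only means fixes were applied
```

On a real machine you'd run the same two steps against the unmounted partition (e.g. badblocks -o bad.txt on /dev/sdX1, then e2fsck -l on the same device).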

After countless rounds of fsck and then badblocks, the HDD's filesystem finally died with this message:

mount: wrong fs type, bad option, bad superblock on /dev/sde,
       missing codepage or helper program, or other error

Now I can't mount it in Linux to move the data off the disk.
I've tried to recover the superblock with e2fsck -f -b #N -y /dev/sde, but all in vain; the FS is dead.
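For reference, the backup superblock locations don't have to be guessed: dumpe2fs (or mke2fs -n) lists them. A sketch on a throwaway image file, since pointing these commands at the wrong real device is destructive (all names here are hypothetical):

```shell
# Throwaway 64 MiB ext4 image standing in for the real disk.
dd if=/dev/zero of=demo.img bs=1M count=64 status=none
mkfs.ext4 -q -F demo.img

# dumpe2fs lists the primary and backup superblock locations.
dumpe2fs demo.img | grep -i 'superblock at'

# Extract the first backup location (it depends on the fs block size)
# and point e2fsck at it, as one would with e2fsck -f -b N -y /dev/sdX.
BACKUP=$(dumpe2fs demo.img 2>/dev/null \
         | awk '/Backup superblock at/ {gsub(",","",$4); print $4; exit}')
e2fsck -f -b "$BACKUP" -y demo.img || true  # exit 1 only means fixes were applied
```

If none of the listed backups works, the superblocks themselves are gone and the filesystem is generally unrecoverable with e2fsck alone.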

Now all I have is the DBs on the other disk and the auth token in a text file, without the data, identity, etc.
So I'm thinking that's not enough to recover the node.

It looks like I made a major mistake at the beginning: I ran gdisk on /dev/sde, but the partition was somehow never created (or got deleted). Maybe because of:

gdisk /dev/sde
mkfs -t ext4 /dev/sde 
instead of /dev/sde1

So after layering these two commands, I ended up moving the files onto the raw disk, not onto a partition.
Luckily it was a test node with only about 100 GB of files.
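A quick way to spot this situation on the other disks is blkid: a filesystem signature reported on the whole device rather than on a partition means the same mistake. A sketch that simulates it on an image file (hypothetical names, not the real node):

```shell
# Simulate the mistake: ext4 written straight onto the "disk",
# with no partition table at all.
truncate -s 64M wholedisk.img
mkfs.ext4 -q -F wholedisk.img

# blkid reports the ext4 signature on the device itself. On a real
# machine, TYPE="ext4" shown for /dev/sdX rather than /dev/sdX1 means
# the filesystem sits on the raw disk.
blkid wholedisk.img
```

The correct order on a fresh disk is: create the partition in gdisk first, verify it exists (lsblk or gdisk's `p` command), and only then run mkfs on the partition device.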

Knowing this, I'm moving the nodes off the other wrongly created disks to avoid the same mistake in the future.

Unfortunately, this is the root cause…
Sorry for your loss.
You could use LVM instead; there's much less room for mistakes. Or ZFS.

Oh, it's just my own stupid mistake, and I'm trying not to worry now.
I've had too much time for overthinking and doubt, so I've decided to move forward wherever it takes me.
So there's nothing to be sorry about; nothing happens without effort and mistakes :)

There are 2 nodes to sync and 2 nodes to resync, and I don't have much time left until the end of the year.

That's my goal: to finish this process within the year and build a PC for my son from the freed-up parts.

So no LVM or ZFS for now; I just want to keep things fast and avoid mistakes.