What you lost was a bit of uptime score and some time, not the node itself. And if you have a hardware RAID controller, there is a good chance the controller decided to drop the drive over a couple of read errors or something similar.
Actually, in this case the RAID1 that had the issues was the boot volume: the OS itself plus the Docker containers. The Storj data is on a separate, larger RAID1 volume using drives I purchased new, and that volume is currently error-free. So had I lost the OS volume, I would not have lost the node data, but I would have lost the node's configuration and identity as deployed.
Well, if the storagenode data resides on a separate volume, then your node was not really at risk. The only thing not stored in the storagenode folder by default is the identity, but the documentation tells you to back it up, and nothing stops you from storing the identity in a different location.
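For what it's worth, in the standard Docker setup the identity location is just a bind mount, so keeping it on the data volume (where it gets the same redundancy as the blobs) is a one-line change. A sketch of the usual `storjlabs/storagenode` invocation; the paths, wallet, email, and address below are placeholders, not real values:

```shell
# Run the node with the identity bind-mounted from the data volume
# instead of the OS drive. All source paths and -e values are examples.
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp -p 28967:28967/udp -p 127.0.0.1:14002:14002 \
    -e WALLET="0xYourWalletAddress" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.ddns.example:28967" \
    -e STORAGE="4TB" \
    --mount type=bind,source=/mnt/storj-data/identity,destination=/app/identity \
    --mount type=bind,source=/mnt/storj-data/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```

With that layout, losing the OS volume costs you only the container and a `docker run` command, which is trivial to recreate.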
I have moved a storagenode to Ceph/RBD and it seems to be working. I chose a replicated pool rather than an erasure-coded pool.
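In case anyone wants to try the same, the rough shape of that setup looks like the following. This is a sketch of the standard Ceph/RBD workflow, not my exact commands; the pool name `storj`, PG count, image size, and mount point are all placeholders:

```shell
# Create a 3-way replicated pool and tag it for RBD use
ceph osd pool create storj 128 replicated
ceph osd pool set storj size 3
ceph osd pool application enable storj rbd

# Create an RBD image, map it as a block device, and put a filesystem on it
rbd create storj/storagenode --size 4T
rbd map storj/storagenode            # shows up as /dev/rbdN
mkfs.ext4 /dev/rbd/storj/storagenode
mount /dev/rbd/storj/storagenode /mnt/storagenode
```

The replicated pool trades capacity for simpler, lower-latency writes, which matters for a storagenode since it does lots of small random I/O; erasure-coded pools also need extra configuration to allow RBD on top of them.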