HDD Failure on Defrag

I had an almost full 4TB node that was just finishing vetting on multiple satellites. I enjoyed having my node. I checked the dashboard and logs regularly, and I was expecting $5 this month. It was a fun little project. I was planning on expanding by purchasing a 12TB disk (not expecting ROI) and mirroring over the contents.

I made the mistake of trying to defragment the disk last night while the node was running. I was attempting to minimize “database locked” errors. I’m sad to report my drive died this morning. Maybe it would have died soon anyway. It’s fine, I have a new identity and will start over when my new disk arrives. Just wanted to vent a little.

Welcome to the forum @wyrm!

It’s sad that you lost your node. How old was it?

Just a little over a month. The drive itself was a few years old, I used it on the old Storj network way back when, so it had a good run I think.

Make sure you read this before you start using your new drive.

When you are all set, you can post a picture of your setup here if you want.

Thank you for the information. The drive I just lost was SMR, and I’m sure that was a factor in its failure.


Good news! After showing up in the disk partition utility as “disk unknown, not initialized, 0 bytes,” and saying “device does not exist” when trying to initialize it, I had written it off, and was going to open it up for fun to look at the platters. On a whim I plugged it back into the computer just now, and it works fine? Very strange, but I’m happy I can at least recover the movies and games I had stored on it.


I’m new to Storj. I started a node 3 weeks ago, and this weekend the drive got disconnected; I had to reboot the Raspberry Pi to get it available again. Nothing was lost, but a satellite DQed the node.
The drive was SMR, and I’m almost sure it took too long to respond and was disconnected by the kernel (I posted some logs in another thread). The node then couldn’t retrieve data from the drive, so the DQ happened.
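For anyone hitting the same issue, a quick way to confirm the kernel dropped the drive is to search the kernel log for reset/disconnect messages. This is just a sketch; `sda` is an assumed device name, and the exact messages vary by kernel and by whether the disk is on USB or SATA.

```shell
#!/bin/sh
# Search the kernel ring buffer for signs the disk timed out or dropped off
# the bus. "sda" is a placeholder; substitute your node's actual device.
dmesg 2>/dev/null | grep -iE 'sda|usb .*disconnect|i/o error|link reset' \
  || echo "no matching kernel messages"
```

On systemd-based systems, `journalctl -k` shows the same kernel messages and keeps them across reboots, which helps when the disconnect forced a restart.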

I’m not surprised you can access your disk again, but I guess it can’t handle the workload of a node: some internal process, due to it being SMR, makes it “lag” for too long.
