Moved storagenode drive from one system to another

I moved my drive from a Raspberry Pi to a Linux server where I have two other nodes running. I did the migration process as described in the docs, minus the rsync part, since I simply moved the drive and identity files over.
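
For context, the drive-move variant boiled down to something like this on the new server (a sketch with placeholder device, mount point, and identity paths rather than my exact commands):

# mount the migrated drive where the new container will look for it
sudo mkdir -p /mnt/storj2
sudo mount /dev/sdX1 /mnt/storj2          # /dev/sdX1 is a placeholder for the actual partition
# put the node's identity files in place, copied over from the old machine
mkdir -p ~/.local/share/storj/identity/storagenode2
cp /path/to/copied-identity/* ~/.local/share/storj/identity/storagenode2/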

When I start it back up, though, I’m getting this error… help! :slight_smile:

2020-07-21T15:40:19.922Z INFO db.migration Database Version {"version": 39}
Error: Error during preflight check for storagenode databases: storage node preflight database error: reputation: expected schema does not match actual: &dbschema.Schema{
    Tables: []*dbschema.Table{
        &{
            Name: "reputation",
            Columns: []*dbschema.Column{
                ... // 5 identical elements
                &{Name: "audit_unknown_reputation_alpha", Type: "REAL"},
                &{Name: "audit_unknown_reputation_beta", Type: "REAL"},

Also, I just followed the instructions for checking my DBs, and I got “ok” on all of them (the loop I ran is sketched after the list):

./piece_expiration.db: ok
./notifications.db: ok
./satellites.db: ok
./heldamount.db: ok
./pricing.db: ok
./orders.db: ok
./pieceinfo.db: ok
./piece_spaced_used.db: ok
./bandwidth.db: ok
./info.db: ok
./reputation.db: ok
./storage_usage.db: ok
./used_serial.db: ok
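
For reference, the check loop was along these lines (a sketch based on the docs; it assumes sqlite3 is installed on the host and the dbs sit at the top of the storage directory):

cd /mnt/<drive>/storage
for db in ./*.db; do
    printf '%s: ' "$db"                      # print the filename...
    sqlite3 "$db" "PRAGMA integrity_check;"  # ...then "ok" if the db is healthy
done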

Based on this thread, it looks like my reputation database is what’s messed up, but I don’t want to risk trying anything from that thread until I know that’s really the issue… so help would be appreciated, thanks!!!

@Alexey - you’re likely going to be the one that will know what to do, so I’m pinging you. :grin: :wave:

Have you double- (or triple-) checked that you are not reusing/duplicating paths between your migrated node and your existing nodes? Also, is the node software version that was on your Raspberry Pi the same as the node version on the Linux server?
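
If it helps, both can be checked quickly from the host (a sketch; storagenode here is a placeholder for each container’s actual name):

# list the host paths mounted into a container
docker inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' storagenode
# confirm which image each running container was created from
docker ps --format '{{ .Names }}\t{{ .Image }}'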

Definitely checked the paths in the docker run statement; they look right.

Somehow in the process of shutting down my Pi and moving the drive, I messed it up and now I can’t get the Pi to boot properly… so I can’t get the version it was running.

But I was running Watchtower, and I’m pretty sure everything was the latest version. So I wouldn’t think that the new server is running a different version.

Did you take ownership of all the files on the migrated drive? Perhaps it’s a permissions issue.

OH good idea. Checking

I did this on my entire drive’s mounted folder:

sudo chown -R <username> /mnt/<drive>/

Still the same errors after restarting the storagenode.

Not entirely sure, but you may need to change the group as well? What does ls -l show for the db files?

-rwxrwxrwx 1 admin users   2834432 Jul 21 13:52 bandwidth.db
-rw-r--r-- 1 admin root      32768 Jul 21 13:51 heldamount.db
-rwxrwxrwx 1 admin users     16384 Jul 21 13:48 info.db
-rwxrwxrwx 1 admin users     24576 Jul 21 13:51 notifications.db
-rwxrwxrwx 1 admin users 114311168 Jul 21 13:52 orders.db
-rwxrwxrwx 1 admin users     36864 Jul 21 13:52 piece_expiration.db
-rwxrwxrwx 1 admin users     24576 Jul 21 13:52 pieceinfo.db
-rwxrwxrwx 1 admin users     24576 Jul 21 13:52 piece_spaced_used.db
-rw-r--r-- 1 admin root      24576 Jul 21 13:45 pricing.db
-rwxrwxrwx 1 admin users     24576 Jul 21 11:09 reputation.db
-rwxrwxrwx 1 admin users     32768 Jul 21 11:09 satellites.db
-rwxrwxrwx 1 admin users    143360 Jul 21 13:48 storage_usage.db
-rwxrwxrwx 1 admin users  20762624 Jul 21 11:09 used_serial.db

Try setting the owner to admin and the group to root; that is how it appears on my drive. New dbs created by the node as they’ve been added to the software seem to get the group root (as appears to have happened with two of your dbs).
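
In case it helps anyone later, that is a one-liner (the path is a placeholder; the dbs normally sit in the storage subdirectory of the data location):

sudo chown admin:root /mnt/<drive>/storage/*.db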

Done.

Same issues still. It really does seem like maybe there’s a storagenode version problem…

I think that would only be a problem if you migrated a newer-version node onto an older version (i.e. your Linux server is out of date). If the Raspberry Pi node was out of date, the new node should have upgraded the dbs on your Linux server as soon as you started it. Did you do a fresh pull of the container image from Docker before starting the migrated node?

At this point I would also run fsck on the migrated disk to check for errors. That’s about all I can think of for now. Good luck!
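
For the record, that check would look roughly like this (placeholder device and container names; the filesystem has to be unmounted first, so stop the node before running it):

docker stop -t 300 storagenode   # give the node time to shut down cleanly
sudo umount /mnt/<drive>
sudo fsck -f /dev/sdX1           # /dev/sdX1 stands in for the drive's partition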

Looks like I’ve already got the latest:

latest: Pulling from storjlabs/storagenode
Digest: sha256:ac50dfb6f5122fe656cef381a7a9eecbcf45b307ee95043e34a7bbb0de8b9a84
Status: Downloaded newer image for storjlabs/storagenode:latest
docker.io/storjlabs/storagenode:latest

That message suggests that the image you just pulled was newer than what Docker had in its cache. I would try rm’ing and recreating the container for the migrated node now that the newer image has been downloaded.
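
So roughly this sequence (a sketch; the run flags are whatever you normally pass for this node):

docker stop -t 300 storagenode
docker rm storagenode
docker pull storjlabs/storagenode:latest
# then recreate the container with your usual docker run command
docker run -d --name storagenode ... storjlabs/storagenode:latest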

Oh dang, yes, I misread it. You’re right! Sorry.

Trying that now

Oh man, you’re a lifesaver. I’m up and running now!

Phew.

So here’s some feedback on this doc; this is probably something that should be obvious to everyone, but it wasn’t to me.

https://documentation.storj.io/resources/faq/migrate-my-node

Before Step 9, add a step to run a docker pull to make sure you have the latest version of the container image before you do the docker run command on the new machine!!!
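
Concretely, just:

docker pull storjlabs/storagenode:latest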

Glad we got it sorted! The devil is always in the details. I’m curious: do you have Watchtower running on the Linux server? I would have expected that if Watchtower is running, the latest image would already be in the cache. But perhaps it doesn’t work that way with Watchtower.

I do have Watchtower running, so that’s interesting…