Trying to migrate from an old HP server to a Raspberry Pi 4

Hello everyone,

I need your help with a migration issue. I’m trying to migrate my three nodes, currently on 3 disks in an HP server, to a Raspberry Pi 4 equipped with a USB HDD dock to reduce power consumption.

On the HP server, I’m running Debian 12 and my nodes are running in a Docker container.
On the Raspberry Pi, I’m running Raspbian where I already have other Docker containers running without issues.
I have disconnected and reattached my disks to the Raspberry Pi – no problem accessing the files.
I have also copied the identity files to the correct folder.
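
For reference, the steps I followed on the Pi were roughly like this (the device name, mount point and identity path are just examples, not my exact setup):

```bash
# Mount one of the existing node disks on the Raspberry Pi (example device/mount point)
sudo mkdir -p /mnt/storj/node1
sudo mount /dev/sda1 /mnt/storj/node1

# Copy the identity files into the folder the docker run command will point at (example path)
cp -r /backup/identity/storagenode/ /mnt/storj/node1/identity/
```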

However, when I try to launch the Docker container, I get the error below in a loop. Could this be because I'm moving from the x86 architecture of the HP server to the ARM architecture of the Raspberry Pi?

2024-10-13 16:08:00,704 WARN exited: storagenode-updater (exit status 127; not expected)
2024-10-13 16:08:01,705 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-13 16:08:01,709 INFO spawned: 'storagenode' with pid 20
2024-10-13 16:08:01,713 INFO spawned: 'storagenode-updater' with pid 21
supervisor: couldn't exec /app/bin/storagenode: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:01,728 WARN exited: storagenode (exit status 127; not expected)
supervisor: couldn't exec /app/bin/storagenode-updater: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:01,730 WARN exited: storagenode-updater (exit status 127; not expected)
2024-10-13 16:08:03,736 INFO spawned: 'storagenode' with pid 22
2024-10-13 16:08:03,741 INFO spawned: 'storagenode-updater' with pid 23
supervisor: couldn't exec /app/bin/storagenode: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:03,754 WARN exited: storagenode (exit status 127; not expected)
supervisor: couldn't exec /app/bin/storagenode-updater: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:03,759 WARN exited: storagenode-updater (exit status 127; not expected)
2024-10-13 16:08:06,770 INFO spawned: 'storagenode' with pid 24
2024-10-13 16:08:06,778 INFO spawned: 'storagenode-updater' with pid 25
supervisor: couldn't exec /app/bin/storagenode: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:06,803 WARN exited: storagenode (exit status 127; not expected)
supervisor: couldn't exec /app/bin/storagenode-updater: ENOEXEC
supervisor: child process was not spawned
2024-10-13 16:08:06,807 INFO gave up: storagenode entered FATAL state, too many start retries too quickly
2024-10-13 16:08:06,812 WARN exited: storagenode-updater (exit status 127; not expected)
2024-10-13 16:08:07,814 INFO gave up: storagenode-updater entered FATAL state, too many start retries too quickly
2024-10-13 16:08:07,815 WARN received SIGQUIT indicating exit request
2024-10-13 16:08:07,815 INFO waiting for processes-exit-eventlistener to die
2024-10-13 16:08:09,821 WARN stopped: processes-exit-eventlistener (terminated by SIGTERM)

Thanks!

Aurelien

You seem to have mounted the disks with “noexec”.

What’s the output of mount and what’s the content of /etc/fstab?
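
For example, something like this (assuming the disk is mounted somewhere like /mnt/storj; adjust the path to yours):

```bash
# Check which options the node's disk is currently mounted with (path is an example)
mount | grep /mnt/storj

# And the persistent mount configuration
cat /etc/fstab
```

If noexec shows up in the options, remove it from the corresponding fstab line and remount, e.g. `sudo mount -o remount /mnt/storj`.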

Hello @Washu,
Welcome back!

You need to remove the bin subfolder in the storage location of each disk and restart the container; it will re-download the correct binaries for your current platform.
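
Something like this, assuming the storage location is mounted at /mnt/storj/node1 and the container is called storagenode (adjust names and paths to your setup):

```bash
# Stop the node, remove the architecture-specific binaries, then start it again
docker stop -t 300 storagenode
rm -rf /mnt/storj/node1/bin   # the bin subfolder inside the storage location
docker start storagenode      # it will download binaries for the ARM platform on start
```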

1 Like

Hmm yes, storage node disks (mounted under /app/config) used to be architecture-independent until the recent changes.
Perhaps the binaries in /app/config/bin should have an architecture suffix, so that if a disk is migrated to a different architecture the binary for the current one is noted as missing, downloaded, and copied to /app/bin (without a suffix).
I know moving architectures is done rarely, but probably often enough to be worth the extra robustness.

1 Like

I do not think that it's worth it. Removing the folder, or the binaries from it, is a one-time action.

I disagree. Data shall be portable. It already is.

Now that executables were made part of the data, they too shall be portable. Suffixes are as good a way to accomplish this as any.

However, the better solution is not to store executables with the data in the first place. There is no need. It's sufficient to store the last seen minimum version in a text file and have the updater download a storagenode binary no older than that version.

This is a much simpler solution than having to copy files around and tying data to the machine architecture, which is the very thing containers are supposed to abstract away. (This approach was suggested before, but for some reason the inferior design of copying executables was selected.)
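
As a rough sketch of the idea (the file name and helper below are made up for illustration; nothing like this exists today):

```bash
# Hypothetical: the data directory carries only a version marker, not binaries
MIN_VERSION=$(cat /app/config/minimum-version.txt)   # hypothetical file, e.g. "v1.115.5"

# The updater then makes sure the binary for the *current* architecture
# is at least that version, downloading it if needed
ensure_storagenode_at_least "$MIN_VERSION"            # hypothetical helper
```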

Basically, now it’s broken, and needs to be fixed.

Thank you all,

Finally, I've solved my problem by deleting the bin directory and the container.

Everything looks OK and I've reduced my power consumption by 100 W!
Let's now see if the Raspberry Pi is stable over time.

2 Likes

It would probably make sense to have this step mentioned on this page.

2 Likes

Hi, I need help.
After updating QNAP's Container Station, this error came out and my two nodes no longer work. The nodes try to restart but fail.

At first it gave an error about app/bin/ something…
I recreated it, but nothing changed, so I threw away the bin folder.

2024-12-07T15:46:58Z    ERROR   failure during run      {"Process": "storagenode", "error": "Error opening revocation database: revocation database: boltdb: timeout\n\tstorj.io/storj/private/kvstore/boltdb.New:43\n\tstorj.io/storj/private/revocation.openDBBolt:52\n\tstorj.io/storj/private/revocation.OpenDB:35\n\tstorj.io/storj/private/revocation.OpenDBFromCfg:23\n\tmain.cmdRun:76\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271", "errorVerbose": "Error opening revocation database: revocation database: boltdb: timeout\n\tstorj.io/storj/private/kvstore/boltdb.New:43\n\tstorj.io/storj/private/revocation.openDBBolt:52\n\tstorj.io/storj/private/revocation.OpenDB:35\n\tstorj.io/storj/private/revocation.OpenDBFromCfg:23\n\tmain.cmdRun:76\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271\n\tmain.cmdRun:78\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tmain.main:34\n\truntime.main:271"}
Error: Error opening revocation database: revocation database: boltdb: timeout
       storj.io/storj/private/kvstore/boltdb.New:43
       storj.io/storj/private/revocation.openDBBolt:52
       storj.io/storj/private/revocation.OpenDB:35
       storj.io/storj/private/revocation.OpenDBFromCfg:23
       main.cmdRun:76
       main.newRunCmd.func1:33
       storj.io/common/process.cleanup.func1.4:392
       storj.io/common/process.cleanup.func1:410
       github.com/spf13/cobra.(*Command).execute:983
       github.com/spf13/cobra.(*Command).ExecuteC:1115
       github.com/spf13/cobra.(*Command).Execute:1039
       storj.io/common/process.ExecWithCustomOptions:112
       main.main:34
       runtime.main:271
2024-12-07 15:46:58,706 WARN exited: storagenode (exit status 1; not expected)
2024-12-07T16:01:02Z    ERROR   Error retrieving version info.  {"Process": "storagenode-updater", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tmain.loopFunc:20\n\tstorj.io/common/sync2.(*Cycle).Run:102\n\tmain.cmdRun:139\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:983\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1115\n\tgithub.com/spf13/cobra.(*Command).Execute:1039\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tmain.main:22\n\truntime.main:271"}
2024-12-07 16:01:02,961 INFO stopped: storagenode-updater (exit status 0)
2024-12-07 16:01:03,344 WARN stopped: storagenode (terminated by SIGTERM)
2024-12-07 16:01:03,345 WARN stopped: processes-exit-eventlistener (terminated by SIGTERM)
2024-12-07 16:02:58,172 INFO Set uid to user 0 succeeded

I also threw away the filestatecache folder, but nothing changed.

Read the error message; it does not mention the file state cache folder. Why would you think it's a file state cache problem?

Please format logs as logs; it's very hard to read them as copied text.

1 Like

Sorry, but I'm on a Mac and I'm not able to.

Error: Error opening database on storagenode: Cannot acquire directory lock on "config/storage/filestatcache".  Another process is using this Badger database. error: resource temporarily unavailable 

That's why I threw away the filestatecache folder.

P.S. Found a way to format

Is this the very first error? Are you sure no other processes have the database open? The error message says otherwise.
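
You can check what is still holding the files, for example with lsof if it is available on the QNAP (the path below is just an example; use your node's actual data location):

```bash
# Show processes that still have the Badger cache or the databases open (example paths)
sudo lsof +D /share/storagenode/storage/filestatcache
sudo lsof /share/storagenode/revocations.db
```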

1 Like

Please check for "Unrecoverable" and/or "FATAL" errors; all other errors may just be the result of a service stop.
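
For example, if the node logs to the container output (the container name here is just an example):

```bash
# Show only the errors that actually stop the node
docker logs storagenode 2>&1 | grep -E "Unrecoverable|FATAL" | tail -n 20
```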

The first error was something like app/bin/… but I didn't note it down, my fault. I thought it was related to the bin subfolder, so I stopped the node, removed it, trashed the bin folder, and recreated it. At that point the Badger error came out. I solved that problem by also trashing the filestatecache folder, and after a search on the forum I deleted revocations.db and the node started normally.
Same procedure for the other node, and it has started working again.
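
In commands it was roughly this for each node (paths and the container name are examples from my setup):

```bash
docker stop -t 300 storagenode
rm -rf /share/storagenode/bin                       # old binaries from before the update
rm -rf /share/storagenode/storage/filestatcache     # Badger cache holding the stale lock
rm /share/storagenode/revocations.db                # recreated automatically on start
docker start storagenode
```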

2 Likes