I just set up a new node and it keeps restarting, consistently after about 11 seconds.
Running sudo docker exec -it storagenode /app/dashboard.sh shows this error:
Here are the last 20 lines from the logs, via sudo docker logs --tail 20 storagenode:
--- stat config/storage/temp: no such file or directory
--- stat config/storage/garbage: no such file or directory
--- stat config/storage/trash: no such file or directory
2023-11-30 02:04:33,262 INFO exited: storagenode (exit status 1; not expected)
2023-11-30 02:04:36,270 INFO spawned: 'storagenode' with pid 59
2023-11-30T02:04:36Z INFO Anonymized tracing enabled {"process": "storagenode"}
2023-11-30T02:04:36Z INFO Operator email {"process": "storagenode", "Address": "EMAIL"}
2023-11-30T02:04:36Z INFO Operator wallet {"process": "storagenode", "Address": "0xXXXXXXX"}
Error: Error starting master database on storagenode: group:
--- stat config/storage/blobs: no such file or directory
--- stat config/storage/temp: no such file or directory
--- stat config/storage/garbage: no such file or directory
--- stat config/storage/trash: no such file or directory
2023-11-30 02:04:36,367 INFO exited: storagenode (exit status 1; not expected)
2023-11-30 02:04:37,368 INFO gave up: storagenode entered FATAL state, too many start retries too quickly
2023-11-30 02:04:38,371 WARN received SIGQUIT indicating exit request
2023-11-30 02:04:38,372 INFO waiting for processes-exit-eventlistener, storagenode-updater to die
2023-11-30T02:04:38Z INFO Got a signal from the OS: "terminated" {"Process": "storagenode-updater"}
2023-11-30 02:04:38,380 INFO stopped: storagenode-updater (exit status 0)
2023-11-30 02:04:39,385 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
This is what I meant: it's already configured as a static mount. I don't see any Storj-related data at all, so I thought it might be a compatibility issue. Does Storj work on Debian 12 (Bookworm)? The setup guide says to use Debian 9 or 10.
So I fixed it by changing the local IP of the RPi. It seems it may have been conflicting with the old node I had before (which I no longer have). I'm not sure why it would conflict with a node that doesn't exist anymore, though.
This is very unlikely. The error was about missing folders, not about unavailability at the network level. You likely did something else, such as running the SETUP step a second time. You shouldn't run it more than once in the entire life of a node; otherwise you may destroy it.
I would hope that this is a new node which was never run before, and that you otherwise provided a correct path for the data. But even then the question remains: why were these folders missing? This again points to an incorrectly specified data path.
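One quick way to rule out a wrong data path is to check the expected folder layout before starting the container. A minimal sketch as a shell function — the folder names come from the "stat ... no such file or directory" errors above, and /mnt/drive1/data is the path used in this thread:

```shell
# Pre-flight check: does the given data directory contain the folders
# storagenode tries to stat at startup? Returns non-zero if any are missing.
check_storage_dirs() {
  data="$1"
  missing=0
  for d in storage/blobs storage/temp storage/garbage storage/trash; do
    if [ ! -d "$data/$d" ]; then
      echo "missing: $data/$d"
      missing=1
    fi
  done
  return $missing
}

# Example (path from this thread):
# check_storage_dirs /mnt/drive1/data && echo "layout looks OK"
```

If this reports missing folders for the path you pass to docker, the container is being pointed at the wrong directory (or the drive isn't mounted yet when docker starts).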
I'm not entirely sure, because I was careful and did everything correctly: I ran the setup once, provided the correct paths, and double-checked everything I was running/typing.
When I changed the RPi's IP, I also reinstalled the OS and redid the entire process, but kept the verified identity files (from a backup). I did the EXACT same thing throughout this setup process as before. The only differences are that the device IP is different and the NOIP address is also different.
I did NOT execute the SETUP command again. I erased the primary drive, reinstalled Debian on it, and followed the entire step-by-step guide again (Quickstart Node Setup - Storj Docs).
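For anyone following along, the distinction in the Quickstart guide is between two different docker invocations. A sketch from memory, with flags abbreviated and the identity/data paths assumed from this thread — check the current Storj docs for the exact full commands:

```shell
# One-time setup: passes SETUP="true" and creates config.yaml plus the
# storage folder layout inside the data directory. Run exactly once per node:
#
#   docker run --rm -e SETUP="true" \
#     --mount type=bind,source=/home/pi/.local/share/storj/identity/storagenode,destination=/app/identity \
#     --mount type=bind,source=/mnt/drive1/data,destination=/app/config \
#     --name storagenode storjlabs/storagenode:latest
#
# Every normal (re)start after that: the same mounts, but WITHOUT the
# SETUP variable. Re-running setup against an existing node risks
# destroying it, as noted above.
```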
But as you asked, here are the listings:
ls -l /mnt/drive1/data
total 48
-rw------- 1 pi pi 10775 Nov 30 07:10 config.yaml
drwx------ 4 pi pi 4096 Nov 30 07:11 orders
-rw------- 1 pi pi 32768 Nov 30 07:11 revocations.db
drwx------ 6 pi pi 4096 Nov 30 08:01 storage
-rw------- 1 pi pi 933 Nov 30 07:11 trust-cache.json
ls -l /mnt/drive1/data/storage
total 8156
-rw-r--r-- 1 pi pi 856064 Nov 30 08:03 bandwidth.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:04 bandwidth.db-shm
-rw-r--r-- 1 pi pi 4181832 Nov 30 08:04 bandwidth.db-wal
drwx------ 5 pi pi 4096 Nov 30 07:12 blobs
drwx------ 2 pi pi 4096 Nov 30 07:10 garbage
-rw-r--r-- 1 pi pi 32768 Nov 30 07:41 heldamount.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:00 heldamount.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 08:00 heldamount.db-wal
-rw-r--r-- 1 pi pi 16384 Nov 30 07:41 info.db
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 notifications.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:00 notifications.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 08:00 notifications.db-wal
-rw-r--r-- 1 pi pi 32768 Nov 30 07:41 orders.db
-rw-r--r-- 1 pi pi 32768 Nov 30 07:41 orders.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 07:41 orders.db-wal
-rw-r--r-- 1 pi pi 69632 Nov 30 07:41 piece_expiration.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:04 piece_expiration.db-shm
-rw-r--r-- 1 pi pi 2607992 Nov 30 08:04 piece_expiration.db-wal
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 pieceinfo.db
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 piece_spaced_used.db
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 pricing.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:00 pricing.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 08:00 pricing.db-wal
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 reputation.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:00 reputation.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 08:00 reputation.db-wal
-rw-r--r-- 1 pi pi 32768 Nov 30 07:41 satellites.db
-rw-r--r-- 1 pi pi 32768 Nov 30 07:41 satellites.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 07:41 satellites.db-wal
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 secret.db
-rw-r--r-- 1 pi pi 32 Nov 30 07:10 storage-dir-verification
-rw-r--r-- 1 pi pi 24576 Nov 30 07:41 storage_usage.db
-rw-r--r-- 1 pi pi 32768 Nov 30 08:00 storage_usage.db-shm
-rw-r--r-- 1 pi pi 0 Nov 30 08:00 storage_usage.db-wal
drwx------ 2 pi pi 4096 Nov 30 08:04 temp
drwx------ 2 pi pi 4096 Nov 30 07:10 trash
-rw-r--r-- 1 pi pi 20480 Nov 30 07:41 used_serial.db
Yes, I am happy it is up and working, though I am upset that I made a silly mistake and had to start all over.
See, I needed to reconfigure NOIP due to an internet switch, but I was tired, accidentally mistyped, and deleted the root of my system with sudo rm -r ~/*. I lost everything.
What can be backed up and restored in the event of any future failure? Identity files? An entire system image?
rm -r ~/* doesn't delete the root filesystem: the shell expands ~ to the invoking user's home before sudo even runs, so it deletes the contents of that home directory, not /.
As far as I can see, your node shouldn't be harmed, because you mounted its data under /mnt, outside of your home directory.
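The point about ~ can be demonstrated safely: tilde expansion happens in your own shell before sudo is invoked, so root only ever sees the already-expanded path. A minimal sketch that touches no files:

```shell
# ~ is expanded by the invoking shell, not by sudo or by rm.
expanded=$(eval echo "~")   # what the shell substitutes for ~
echo "root would actually receive: rm -r $expanded/*"
# So the command wipes the invoking user's home, never / itself.
```

This is also why node data mounted under /mnt survives such a slip, while identity files kept in the home directory would not — which argues for backing up the identity directory somewhere outside ~.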