Node turns off, QUIC misconfigured after restart

There is no identity in the default path for the user root.
You should either not run it under root, or provide a path to the folder that contains the folder with the identity files, e.g.

identity authorize --identity-dir /mnt/storj/storagenode1 identity youremail@address:yourremainingtoken

Here /mnt/storj is the mount point, and your file tree looks like this:

/mnt/storj
|- storagenode1
|  |- identity
|  |- storage
|  |  |- blobs
|  |  |- trash
|  |  |- ...
|  |- config.yaml
|  |- ...

So, in short:

identity authorize --identity-dir <path to the folder that contains your identity folder> <name of the folder with the identity files> <authorization token, including the email>

Now I got this:

root@raspberrypi:/# identity authorize --identity-dir /home/storagewars/.local/share/storj/identity/ storagenode1 unofficialolym@gmail.com:12PFwDy9CpGSaEKAPcGUvRBE4u9mkqW6YT7Asrn47bXXDqUamyTu4uiAR8t9UJ7RiCLJwYgKHTS7mEDS8oPjnRpNRYqvhh
2024/01/21 10:32:29 proto: duplicate proto type registered: node.SigningRequest
2024/01/21 10:32:29 proto: duplicate proto type registered: node.SigningResponse
2024-01-21T10:32:29Z	INFO	Anonymized tracing enabled
Error: error creating revocation database: revocation database: boltdb: open /root/.local/share/storj/identity/revocations.db: no such file or directory
	storj.io/storj/private/kvstore/boltdb.New:42
	storj.io/storj/private/revocation.openDBBolt:52
	storj.io/storj/private/revocation.OpenDB:35
	storj.io/storj/private/revocation.OpenDBFromCfg:23
	main.cmdAuthorize:192
	storj.io/private/process.cleanup.func1.4:399
	storj.io/private/process.cleanup.func1:417
	github.com/spf13/cobra.(*Command).execute:852
	github.com/spf13/cobra.(*Command).ExecuteC:960
	github.com/spf13/cobra.(*Command).Execute:897
	storj.io/private/process.ExecWithCustomOptions:113
	storj.io/private/process.ExecWithCustomConfigAndLogger:79
	storj.io/private/process.ExecWithCustomConfig:74
	storj.io/private/process.Exec:64
	main.main:84
	runtime.main:250

If you still use sudo, you also need to provide the --config-dir option to specify where this DB will be used/created. The folder must exist.
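A sketch of what that could look like, assuming the paths from your command (point --config-dir at any existing folder, for example the identity folder itself; the token placeholder is yours to fill in):

sudo identity authorize --config-dir /home/storagewars/.local/share/storj/identity --identity-dir /home/storagewars/.local/share/storj/identity storagenode1 <your-auth-token>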

Still got this:

storagewars@raspberrypi:~ $ identity authorize --identity-dir /home/storagewars/Hds/HD1/ID storagenode1 unofficialolym@gmail.com:12PFwDy9CpGSaEKAPcGUvRBE4u9mkqW6YT7Asrn47bXXDqUamyTu4uiAR8t9UJ7RiCLJwYgKHTS7mEDS8oPjnRpNRYqvhh
2024/01/21 14:58:14 proto: duplicate proto type registered: node.SigningRequest
2024/01/21 14:58:14 proto: duplicate proto type registered: node.SigningResponse
2024-01-21T14:58:14Z	INFO	Anonymized tracing enabled
Identity successfully authorized using single use authorization token.
Please back-up "/home/storagewars/Hds/HD1/ID/storagenode1" to a safe location.
storagewars@raspberrypi:~ $ grep -c begin /home/storagewars/Hds/HD1/ID/storagenode1/ca.cert
0
storagewars@raspberrypi:~ $ grep -c begin /home/storagewars/Hds/HD1/ID/storagenode1/identity.cert
0
storagewars@raspberrypi:~ $
storagewars@raspberrypi:~ $ grep -c BEGIN /home/storagewars/Hds/HD1/ID/storagenode1/identity.cert
3
storagewars@raspberrypi:~ $ grep -c BEGIN /home/storagewars/Hds/HD1/ID/storagenode1/ca.cert
2
storagewars@raspberrypi:~ $

Don’t know what to do next!

Not working… Not working… Not working… If this disk blows up… I will eventually leave the other one until it goes! Then the next disk. Thank you and goodbye :sweat_smile:

Error: Error starting master database on storagenode: group:
--- stat config/storage/blobs: no such file or directory
--- stat config/storage/temp: no such file or directory
--- stat config/storage/garbage: no such file or directory
--- stat config/storage/trash: no such file or directory
2024-01-21 16:42:37,917 INFO exited: storagenode (exit status 1; not expected)
2024-01-21 16:42:38,919 INFO gave up: storagenode entered FATAL state, too many start retries too quickly
2024-01-21 16:42:39,922 WARN received SIGQUIT indicating exit request
2024-01-21 16:42:39,923 INFO waiting for processes-exit-eventlistener, storagenode-updater to die
2024-01-21T16:42:39Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2024-01-21 16:42:39,930 INFO stopped: storagenode-updater (exit status 0)
storagewars@raspberrypi:~ $ 

Looks like it’s finished?
Then back it up and start the node. I would recommend moving this identity to the disk with the data, to keep them together.
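For example, a backup could be as simple as this, assuming the paths from the output above (the archive name is arbitrary):

tar -czf ~/storagenode1-identity-backup.tar.gz -C /home/storagewars/Hds/HD1/ID storagenode1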

You need to set up the node; the setup should be done once for this identity.

I’ve deleted the previous node! I’m using this command:
sudo docker run -d --restart always --stop-timeout 300 \
  -p 28968:28967/tcp -p 28968:28967/udp -p 127.0.0.1:14003:14002 \
  -e WALLET="0x20ba0ed29b38f63cfe96193b1e85365821a7058a" \
  -e EMAIL="unofficialolym@gmail.com" \
  -e ADDRESS="storagewarz.ddns.net:28968" \
  -e STORAGE="1.7TB" \
  --memory=800m --log-opt max-size=50m --log-opt max-file=10 \
  --mount type=bind,source=/home/storagewars/Hds/HD1/ID/storagenode1,destination=/app/identity \
  --mount type=bind,source=/home/storagewars/Hds/HD1,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest \
  --operator.wallet-features=zksync
I don’t know, I’m still getting the same error!

Think I found the problem: after running a port checker, I found that the port is closed!

You need to set up this node. To do so, execute the setup step once: Storage Node - Storj Docs

If the node is not running, the port will be closed.
Please execute the setup step once, then run your normal docker run command as you posted above.
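Per the linked docs, the one-time setup run is roughly this sketch, reusing the same mounts as your docker run command above:

sudo docker run --rm -e SETUP="true" \
  --mount type=bind,source=/home/storagewars/Hds/HD1/ID/storagenode1,destination=/app/identity \
  --mount type=bind,source=/home/storagewars/Hds/HD1,destination=/app/config \
  --name storagenode storjlabs/storagenode:latest

This should create the config.yaml and the storage folder structure that the earlier "stat config/storage/blobs" error complained about.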


I think it’s working! It was a missed step in this long walk :slight_smile:! If it’s OK I will say nothing! Thank you anyway!


Is QUIC something we should be worried about? My longest-running node suddenly got this. Tried removing and reinstalling storagenode, same result.
It’s on Ubuntu 16, though.

Not that I am aware of. The issue could be related to your router/VPN, so it doesn’t really matter so far:

If you can fix it, good; if not, it’s not that bad: likely about 10% or less of winning uploads/downloads.
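If you do want to chase it, the Storj docs recommend making sure UDP is forwarded for the node port and raising the kernel's UDP receive buffer on Linux; a sketch (the file name under /etc/sysctl.d is arbitrary):

sudo sysctl -w net.core.rmem_max=2500000
echo 'net.core.rmem_max=2500000' | sudo tee /etc/sysctl.d/udp-buffer.conf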

Just found the culprit. Updated Docker to the latest version:

apt update
apt upgrade

This updated Docker; then I restarted the storagenode. Problem resolved.
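For reference, the restart itself is just the usual Docker command with the container name from the commands above:

sudo docker restart storagenode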
