Node updated to 1.58.2 and suspended on all satellites

I was on my node the other day and it was fine, today I logged in to check it out and see this:

Your node has been suspended on 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB. If you have any questions regarding this please check our Node Operators thread on Storj forum.

Seems a bit extreme to automatically suspend a node that SAYS it’s up.

This is running on a Synology NAS and has been up for over a year now, I’m not sure what happened in the past week outside of a DSM update to 7.1.

Here’s one sample log entry.

2022-07-14T18:18:59.779Z ERROR piecestore download failed {"Process": "storagenode", "Piece ID": "LUFVATPHOKMO72CUXXC5XYSTZWJEVLW7GIJJXFZ3QWWYUYKZTCJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "error": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/lu/fvatphokmo72cuxxc5xystzwjevlw7gijjxfz3qwwyuykztcja.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/lu/fvatphokmo72cuxxc5xystzwjevlw7gijjxfz3qwwyuykztcja.sj1: permission denied", "errorVerbose": "pieces error: filestore error: unable to open \"config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/lu/fvatphokmo72cuxxc5xystzwjevlw7gijjxfz3qwwyuykztcja.sj1\": open config/storage/blobs/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/lu/fvatphokmo72cuxxc5xystzwjevlw7gijjxfz3qwwyuykztcja.sj1: permission denied\n\tstorj.io/storj/storage/filestore.(*Dir).Open:279\n\tstorj.io/storj/storage/filestore.(*blobStore).Open:75\n\tstorj.io/storj/storagenode/pieces.(*Store).Reader:262\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:542\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}

Seems you have permission issues accessing the SN data. Have you changed anything else on your system?

Only updated DSM to v7.1. I'm running a volume scan right now, but if my data is wrecked that would suck; there's 5TB worth sitting on this NAS.

In Synology it was always with “sudo”

sudo docker run -d --restart unless-stopped --stop-timeout 300 -p 28.....

Besides, you have a strange data path; it should be something like this:

/volume1/.............

Here’s what I’m running:

sudo docker run -d --restart unless-stopped --stop-timeout 300 \
-p 28967:28967/tcp \
-p 28967:28967/udp \
-p 14002:14002 \
-e WALLET="WALLETADDRESS" \
-e EMAIL="EMAILADDRESS" \
-e ADDRESS="externalIP:28967" \
-e STORAGE="20TB" \
--user $(id -u):$(id -g) \
--mount type=bind,source="/volume1/storj/Identity/storagenode",destination=/app/identity \
--mount type=bind,source="/volume1/storj/data",destination=/app/config \
--name storagenode storjlabs/storagenode:latest

These paths are inside the docker container, so they will always start with /app/config regardless of the actual location on the host.
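To confirm how the host paths map into the container, you can list the bind mounts with docker inspect (a sketch; `storagenode` is the container name from the run command above):

```shell
# Print each mount as "host path -> container path"
sudo docker inspect \
  -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' \
  storagenode
```

This should show `/volume1/storj/data -> /app/config`, which is why the log paths begin with `config/` even though the data lives under `/volume1` on the NAS.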

It's better to remove the --user line on Synology, especially if you run the container with sudo.
You may then need to change the owner of the data back to root recursively.
If you have used the --user option from the start, it should remain; in that case you will likely still need to fix permissions anyway, but to your own user instead.
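The permission fix above could look something like this (a sketch only; the `/volume1/storj` paths and the `storagenode` container name are taken from the run command earlier in the thread, so adjust them to your setup):

```shell
# Stop the node before touching its data (long timeout so it can exit cleanly)
sudo docker stop -t 300 storagenode

# If the --user line was removed, the container runs as root again,
# so hand the identity and data directories back to root recursively:
sudo chown -R root:root /volume1/storj/Identity /volume1/storj/data

# (If you keep --user instead, chown to your own user:
#   sudo chown -R "$(id -u):$(id -g)" /volume1/storj/Identity /volume1/storj/data)

sudo docker start storagenode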

Ok I removed that line and don’t get errors anymore but my node still says it’s suspended. Will that ever change?

As soon as your node starts to pass audits, the suspension score should recover. Once it is greater than 60%, the node will come out of suspension.

Ok good, the highest is 5%, with most being around 0.5%.

My audits are at 100% if that matters.


Crossing fingers. Keep watching your log for errors and be patient; it will need some time to recover.
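One way to watch the log is to follow the container output and filter for failures (a sketch; `storagenode` is the container name assumed from the run command above):

```shell
# Follow the node log and surface only error-level lines
sudo docker logs --tail 100 -f storagenode 2>&1 | grep -E "ERROR|FATAL"
```

Note that `-f` keeps following new output until you interrupt it with Ctrl-C.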


Thanks! Checked my node and it's unsuspended on all satellites. Thanks for the help @Alexey and everybody else. I somehow got a typo into my docker startup.
