Thanks! Testing on my RPi node. After 180 hours online, I started to get troubles.
You are welcome!
What kind of troubles?
I got "Unauthenticated desc = serial number is already used: infodb: database is locked" after 182 hours up and online.
@naxbc could you post your df output here?
df? Can you please specify?
df is a Linux command that shows your disk usage (df = disk free).
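For example, you can simply run it like this (the -h flag is optional, it just prints the sizes in human-readable units instead of 1K blocks):

df
df -h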
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 30469060 4522580 24375664 16% /
devtmpfs 494908 0 494908 0% /dev
tmpfs 499516 0 499516 0% /dev/shm
tmpfs 499516 19312 480204 4% /run
tmpfs 5120 4 5116 1% /run/lock
tmpfs 499516 0 499516 0% /sys/fs/cgroup
/dev/mmcblk0p6 66528 22818 43710 35% /boot
/dev/sda1 1953513556 513886380 1439627176 27% /media/pi/HDD
tmpfs 99900 0 99900 0% /run/user/1000
/dev/mmcblk0p5 30701 1590 26818 6% /media/pi/SETTINGS2
And is the problem still there? For others it has worked to recreate the Docker container.
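If you want to try that, something like this should work (just a sketch; I'm assuming the container is named storagenode as in the official instructions, and you need to start it again with your own original docker run command and all of its options, which I can't know from here):

docker stop -t 300 storagenode
docker rm storagenode
docker run ...   # your original run command, unchanged

Removing the container does not delete your identity or stored data as long as they live on the mounted volume; only the container itself is recreated.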
We need more information about the infodb: database is locked problem.
Who has seen this problem?
Did it fix itself automatically for you, or what did you do?
What hardware are you using?
36Z ERROR server gRPC stream error response {“error”: “piecestore: rpc error: code = Unauthenticated desc = serial number is already used: usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:41\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:365\n\tstorj.io/storj/pkg/pb._Piecestore_Download_Handler:877\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696”, “errorVerbose”: “piecestore: rpc error: code = Unauthenticated desc = serial number is already used: usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:41\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:365\n\tstorj.io/storj/pkg/pb._Piecestore_Download_Handler:877\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:366\n\tstorj.io/storj/pkg/pb._Piecestore_Download_Handler:877\n\tstorj.io/storj/pkg/server.(*Server).logOnErrorStreamInterceptor:23\n\tgoogle.golang.org/grpc.(*Server).processStreamingRPC:1127\n\tgoogle.golang.org/grpc.(*Server).handleStream:1178\n\tgoogle.golang.org/grpc.(*Server).serveStreams.func1.1:696”} stderr
Same here… It's happening to my SNO also.
Normally you see that when you stop the node too quickly and restart it. It usually fixes itself.
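If it doesn't clear up, you can check whether the database file itself is healthy once the node is stopped. A rough sketch (the path and file name here are assumptions; point it at whatever .db file you actually have in your storage directory, and install the tool with sudo apt install sqlite3 if you don't have it):

sqlite3 /path/to/storage/info.db "PRAGMA integrity_check;"

It should print ok; anything else points at a damaged database rather than a transient lock.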
Hi. I have the same problem. Every time the server restarts, or the storj container stops and starts, it happens. I need to restart the node several times. Can this happen after a node update?
Also, the "last contact" time is very unstable after the last update.
You can just leave it running; it should fix itself.
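If you want to keep an eye on it in the meantime, you can watch the log for the error (again assuming the container is named storagenode):

docker logs --tail 100 -f storagenode 2>&1 | grep "database is locked"

If that stays quiet for a while, the lock was only transient.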