Hello,
I’ve been a node operator since February 2019, and today I noticed that my main node (~16.4TB stored) has been suspended on europe-north-1.tardigrade.io:7777 (12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB).
Looking around, I found this GitHub issue: “GET REPAIR errors showing up on storage nodes” (storj/storj#4687).
The errors I’m seeing are similar to the ones described there:
2022-04-19T17:37:22.102Z INFO piecestore download started {"Piece ID": "HFS3LO35OZYVZRQUCNAWIMLQVEUGQOD4Q45CRBDRIG6ZM3FIHGVA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-04-19T17:37:22.102Z ERROR piecestore download failed {"Piece ID": "HFS3LO35OZYVZRQUCNAWIMLQVEUGQOD4Q45CRBDRIG6ZM3FIHGVA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "used serial already exists in store", "errorVerbose": "used serial already exists in store\n\tstorj.io/storj/storagenode/piecestore/usedserials.insertSerial:263\n\tstorj.io/storj/storagenode/piecestore/usedserials.(*Table).Add:117\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:498\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:228\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:122\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:66\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:112\n\tstorj.io/drpc/drpcctx.(*Tracker).track:52"}
2022-04-19T17:37:27.431Z INFO piecestore download started {"Piece ID": "WWVN5WZNDJTM5KWW67ZSVK2ROSILG37CKY6UVE7XFK2YPH75LR4Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-04-19T17:37:29.324Z INFO piecestore downloaded {"Piece ID": "WWVN5WZNDJTM5KWW67ZSVK2ROSILG37CKY6UVE7XFK2YPH75LR4Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
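In case it helps with triage, here’s a rough script I put together to count how many GET_REPAIR downloads fail with the “used serial already exists” error per satellite. It’s just a sketch assuming the default storagenode log format shown above; adjust the log path for your own setup:

```python
#!/usr/bin/env python3
"""Count GET_REPAIR download failures per satellite in a storagenode log."""
import json
import sys
from collections import Counter

log_path = sys.argv[1] if len(sys.argv) > 1 else "node.log"  # path to your node's log file

used_serial = Counter()   # failures caused by "used serial already exists"
failed_total = Counter()  # all GET_REPAIR "download failed" lines

with open(log_path, encoding="utf-8") as log:
    for line in log:
        if "download failed" not in line or "GET_REPAIR" not in line:
            continue
        # each storagenode log line ends with a JSON blob of structured fields
        start = line.find("{")
        if start == -1:
            continue
        try:
            fields = json.loads(line[start:])
        except json.JSONDecodeError:
            continue
        if fields.get("Action") != "GET_REPAIR":
            continue
        sat = fields.get("Satellite ID", "unknown")
        failed_total[sat] += 1
        if "used serial already exists" in fields.get("error", ""):
            used_serial[sat] += 1

for sat in failed_total:
    print(f"{sat}: {used_serial[sat]} 'used serial' / {failed_total[sat]} failed GET_REPAIR downloads")
```

I run it as `python3 count_repair_errors.py /path/to/node.log` (the filename is just what I called it locally).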
Given the GitHub issue linked above, can someone from Storj confirm that I’m experiencing the same problem? I’m seeing similar errors for the ap1.storj.io satellite (121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6) as well.
Are there any actions I need to take on my end? My uptime is going on 300+ hours, and QUIC shows “OK”. I have 5 nodes in total, all at different locations, and all of them appear to be exhibiting this issue.
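For reference, this is roughly how I’m checking the suspension status on each of my nodes. It’s only a sketch assuming the default dashboard API on port 14002 and the JSON field names I see on my node version (nodeID, satellites, suspended, disqualified), so it may need tweaking on yours:

```python
#!/usr/bin/env python3
"""Print per-satellite suspension status for each node via its dashboard API."""
import json
import urllib.request

# Dashboard addresses of my nodes (default dashboard port is 14002);
# replace these with your own nodes' addresses.
NODES = [
    "http://localhost:14002",
]

for base in NODES:
    with urllib.request.urlopen(f"{base}/api/sno", timeout=10) as resp:
        info = json.load(resp)
    print(f"Node {info.get('nodeID')} ({base})")
    for sat in info.get("satellites", []):
        # "suspended"/"disqualified" hold a timestamp when set, null otherwise
        # (field names as seen on my node version; yours may differ).
        if sat.get("disqualified"):
            status = "DISQUALIFIED"
        elif sat.get("suspended"):
            status = "SUSPENDED"
        else:
            status = "OK"
        print(f"  {sat.get('url') or sat.get('id')}: {status}")
```

That’s what shows me the suspension on europe-north-1 and lets me spot-check the other nodes quickly.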