Your node has been disqualified - 10TB Node

hi, my node has been active since the first alpha and is 9.8/10TB full. The day before yesterday I turned the node off for a moment to change the base of the hard disk, and when I reconnected it I got a "Database is locked" error… then I got suspended on some satellites, so I ran VACUUM on the databases, but today I am getting disqualified on those satellites.
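(For reference, a database check and VACUUM on the storagenode databases can be run roughly like this. This is only a sketch: it assumes the node is stopped first and that the databases live in the storage directory, e.g. /mnt/storagenode/storage; adjust the path to your setup.)

    # stop the node first so nothing else has the databases open
    sudo docker stop -t 300 storagenode
    # check and compact every storagenode database (the path is only an example)
    for db in /mnt/storagenode/storage/*.db; do
        sqlite3 "$db" "PRAGMA integrity_check;"
        sqlite3 "$db" "VACUUM;"
    done
    sudo docker start storagenode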

What can I do?

2020-05-15T09:07:34.184Z ERROR piecestore download failed {"Piece ID": "ZQA6KZTVRR2POHJEOMLCSS26WYWORYENGJQ6QU2IR4HUSFB3YGVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

What were the exact steps you took? Did you remove the container and recreate it with the new path for the storage? If you didn't remove the container, then it started with the old paths when you started it again.
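(If the path did change, the container has to be removed and run again with the new mount, roughly like the sketch below. All the values are placeholders: wallet, email, address, storage size, paths and even the image tag have to match your own setup.)

    docker stop -t 300 storagenode
    docker rm storagenode
    # placeholder values below; use your own wallet, address, size and paths
    docker run -d --restart unless-stopped --stop-timeout 300 \
        -p 28967:28967 -p 127.0.0.1:14002:14002 \
        -e WALLET="0xXXXXXXXX" -e EMAIL="you@example.com" \
        -e ADDRESS="your.dyndns.example.com:28967" -e STORAGE="9TB" \
        --mount type=bind,source=/mnt/storagenode/identity,destination=/app/identity \
        --mount type=bind,source=/mnt/storagenode/storage,destination=/app/config \
        --name storagenode storjlabs/storagenode:latest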

- Stop the container
- Take the hard drive out of the bay
- Put it in an external diskstation (which didn't work because it doesn't support 10TB disks)
- Put the disk back where it was
- Start the container

Then the "database is locked" errors occurred.

thx

Did you mount it to the same directory where the disk was before you started the container? If you forgot to mount it, or mounted it to a different directory, then the container created a new empty node in the mount point.

Also, did you unmount the disk before taking it out of the bay? If not, it could have corrupted the filesystem. Whatever the problem was, I'm afraid the node is gone, at least for the satellites where you got disqualified. Disqualification is not reversible. You might be able to recover from the suspensions.
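(For what it's worth, a quick way to verify both points; only a sketch, the mount point below is an example:)

    # confirm the disk is really mounted where the container expects it
    findmnt /mnt/storagenode
    lsblk -f
    # before pulling the disk out of the bay: stop the node, then unmount
    sudo docker stop -t 300 storagenode
    sudo umount /mnt/storagenode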

But I can’t really help you there. You should wait for a Storjling to reply.

Yes, everything is mounted correctly; the node even reads config.yml fine.

Less than 24 hours passed between the suspension and the disqualification. It does not seem like a good idea that it happens so fast and that you are not allowed to re-qualify your node on the satellites.

Does that mean that, if I can recover the node, it will only work with 1 satellite?

I will lose the Total Held Amount ($303) too, right?

Why don't they allow you to "repair" a node and pay part of the held amount to re-download the damaged pieces? I think it would be fairer to farmers.

Thank you very much for your help! :smiley:


You're not getting disqualified for downtime at the moment, only if you fail audits because you lost data. Can you run docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed"?

I cleared the logs today :grimacing:

 ~$ sudo docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed"
    2020-05-15T09:07:34.184385385Z 2020-05-15T09:07:34.184Z ERROR   piecestore  download failed  {"Piece ID": "ZQA6KZTVRR2POHJEOMLCSS26WYWORYENGJQ6QU2IR4HUSFB3YGVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T09:48:03.868822369Z 2020-05-15T09:48:03.868Z ERROR   piecestore  download failed  {"Piece ID": "XDK6SQ7QQJQ46OSYQY4MPSE34R5H33FAA6EKVRDULWX23RRNFXAQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T09:48:48.315248677Z 2020-05-15T09:48:48.314Z ERROR   piecestore  download failed  {"Piece ID": "3QYA63WKXGKJLZ74BNP37F432IF43ESOKYHYJFP6MAWBY2PSN36A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T09:57:56.724581261Z 2020-05-15T09:57:56.724Z ERROR   piecestore  download failed  {"Piece ID": "UEIEBQWIFBAYRJTDJBU254HS4MYKEHRQ6BBLVAK3QWIQLG7LQRAQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T09:58:47.348970147Z 2020-05-15T09:58:47.348Z ERROR   piecestore  download failed  {"Piece ID": "RD4YW3MFL2OO2AMHNDK5AISPMUSW6KWSQGZQKT6DUO255M3LPOJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T09:59:50.175573202Z 2020-05-15T09:59:50.175Z ERROR   piecestore  download failed  {"Piece ID": "IW4XXRDU6U6HIJZS7P7FB3LEWFMEETBYNJNV6VZ3D7OFBUJV7A4Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T11:00:10.320752669Z 2020-05-15T11:00:10.320Z ERROR   piecestore  download failed  {"Piece ID": "A5KXPCKSSHUC7CNKOP2XGAMNG5TXRLPXRF562RC3SIIIFKK5AFRA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T11:00:53.782494080Z 2020-05-15T11:00:53.782Z ERROR   piecestore  download failed  {"Piece ID": "NHTDZGCU7OW55EUZQ2DGZUCCEGFXGEQBZVVYBBFOSJIK76OD37JQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T11:01:33.922385008Z 2020-05-15T11:01:33.922Z ERROR   piecestore  download failed  {"Piece ID": "NXTQ2V5QIZBC6H2E6XBEG65N4T2MACTDPJP34LUVDBBC26LRPJVQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T11:05:51.638927951Z 2020-05-15T11:05:51.638Z ERROR   piecestore  download failed  {"Piece ID": "KH5SFUM2QL4SHGSTN6622LLABVFBAAH6OGLYES52ACMJ6ERV2EHA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
    2020-05-15T12:04:18.688866410Z 2020-05-15T12:04:18.688Z ERROR   piecestore  download failed  {"Piece ID": "BEITY4FNTSVEZJKYXVQXH4ELX4P2QPJATTKOB2FHLVCVYW64I5XA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "error": "usedserialsdb error: database is locked", "errorVerbose": "usedserialsdb error: database is locked\n\tstorj.io/storj/storagenode/storagenodedb.(*usedSerialsDB).Add:35\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).verifyOrderLimit:76\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doDownload:523\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Download:471\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:995\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:66\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

At the moment you can have db locks for months without ever getting DQed; that DQ path is still disabled. When enabled, it would trigger after 7 days in suspension.

This means you got DQed for failing audits and not for db locks. If you give me your node ID, I can take a look into the satellite logs to double-check that.

me too

Hello,
I only have downloads from my node, and I checked with the command
sudo docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed"
Here is the result.
What do I have to do?

Can you please check my node?
12tiBuMMA19UhBr8w6oQ7oCEED5EgVF72qCx9WCadVCvroBt9nt

I am from Germany btw :slight_smile:


Can you please check my node? I just checked my node and it has errors.
I only have downloads from my node, and I checked with the command
sudo docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed"
Here is the result.
What do I have to do?

I just noticed that this command also includes recoverable failed audits. If I read the successrate script correctly, the command for unrecoverable failed audits should be

docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed" | grep exist
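To compare the two numbers, something like this should work (just a sketch based on the successrate script; "exist" matches the "file does not exist" errors, i.e. pieces that are actually missing on disk):

    # all failed GET_AUDIT downloads (recoverable and unrecoverable)
    docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep -c "download failed"
    # unrecoverable ones, i.e. the piece file does not exist on disk
    docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed" | grep -c exist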

Just checking…

When stopping your docker container, did you use an appropriately long wait time?

The default is 10 seconds; the last time I checked, the recommended wait time for a Storj node is 300 seconds.

sudo docker stop -t 300 storagenode

If the node was shut down right away, without waiting for the databases to finish writing all data to disk, then your node may have lost track of those data pieces…
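(One rough way to confirm the shutdown was actually clean, assuming the container is named storagenode: check the exit status afterwards. Docker reports Exited (0) for a graceful stop, while Exited (137) means the container was killed after the timeout ran out.)

    sudo docker stop -t 300 storagenode
    # "Exited (0) ..." means a clean shutdown, "Exited (137) ..." means it was killed
    sudo docker ps -a --filter name=storagenode --format '{{.Status}}'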

@littleskunk I remember you from the RocketChat :smile:

Node ID: 1Xsznco6jzPv2WCprEGwmvEcjP7gUys68tVFwHcSpwJnXvfb1K

I just got unsuspended from the SaltLake (test satellite?)

thx

error: "context deadline exceeded; piecestore: (Node ID: 1Xsznco6jzPv2WCprEGwmvEcjP7gUys68tVFwHcSpwJnXvfb1K, Piece ID: WK733JZWK3VXLIRMOLNQIG6YCNHLJ6E27YCSSGHKWC6C3V6CXOMA): context deadline exceeded"

The storage node didn't deliver the audit data within 5 minutes. You had this error for several hours.
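If you want to see how often the audits timed out, a rough log check could look like the line below. This is only a sketch: the node-side log may not record the timeout with exactly the same wording as the satellite does.

    docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep "download failed" | grep "deadline exceeded"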


How can I solve it? It appears to me that the node is online :S

I managed to solve the problem, but the node is still suspended on the satellites. Will it ever reactivate, or can I just remove the node? :confused:

From the screenshot, your node has been disqualified on 5 satellites now. That cannot be reversed. The suspension can be reversed, but your node would then only work on that single satellite. In order to get out of suspension you need to pass several audits, so it just takes some time. The satellite needs to verify you actually fixed the problem.
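One way to watch the recovery is to compare successful and failed audits in the log. This is just a sketch based on how the successrate script counts them; it assumes successful audit downloads are logged with Action GET_AUDIT and the message "downloaded":

    # successful audits since the log started
    docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep -c "downloaded"
    # failed audits
    docker logs -t storagenode 2>&1 | grep "GET_AUDIT" | grep -c "download failed"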