Error Codes: What They Mean and Severity Level [READ FIRST]

Hello dear SNOs.

Today my node stopped receiving/uploading data and I can only see some strange errors that I could not find on this page. Should I worry about it?

2020-01-08T19:07:14.321Z ERROR orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 rpc client error when receiveing new order settlements {"error": "order: failed to receive settlement response: only specified storage node can settle order", "errorVerbose": "order: failed to receive settlement response: only specified storage node can settle order\n\tstorj.io/storj/storagenode/orders.(*Service).settle:304\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2020-01-08T19:07:14.598Z ERROR orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 rpc client when sending new orders settlements {"error": "order: sending settlement agreements returned an error: EOF", "errorVerbose": "order: sending settlement agreements returned an error: EOF\n\tstorj.io/storj/storagenode/orders.(*Service).settle.func2:276\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57", "request": {"limit":{"serial_number":"FGJ2OZWXJVAU3PIKJJDGXM6FHA","satellite_id":"121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6","uplink_public_key":{},"storage_node_id":"1TLrkfdpxkGs1XQTsCQpQs1fS1SHH2H5kkYtnEDYoCffYcarbv","piece_id":"ZCBK5JR6VYIQIGY2LJAWXVIGLHRMZ2YNK4JJRRLX2CTG7S7HZQ5A","limit":2317056,"action":1,"piece_expiration":"0001-01-01T00:00:00Z","order_expiration":"2020-01-13T19:21:17.525396808Z","order_creation":"2020-01-06T19:21:17.526994442Z","satellite_signature":"MEYCIQD3dLZObFSxG29SkCJT8KRCNDr2AplO1h/BmXMWA8WXgwIhAPUanOfDK3dt70TxhYz2M0gnnIPTzY0OFSAllfg8ZUAU","satellite_address":{}},"order":{"serial_number":"FGJ2OZWXJVAU3PIKJJDGXM6FHA","amount":2028288,"uplink_signature":"HQwDBIpBXmJs2ecT0Fb98r5IBpCyJSzjxixcI9KY+8xkHpAqUumcIwK2kBFbElQLbrWotaSkar4XeTfoFZg6AA=="}}}
2020-01-08T19:07:14.599Z INFO orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 finished
2020-01-08T19:07:14.599Z ERROR orders.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 failed to settle orders {"error": "order: failed to receive settlement response: only specified storage node can settle order; order: sending settlement agreements returned an error: EOF", "errorVerbose": "group:\n— order: failed to receive settlement response: only specified storage node can settle order\n\tstorj.io/storj/storagenode/orders.(*Service).settle:304\n\tstorj.io/storj/storagenode/orders.(*Service).Settle:195\n\tstorj.io/storj/storagenode/orders.(*Service).sendOrders.func2:174\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57\n— order: sending settlement agreements returned an error: EOF\n\tstorj.io/storj/storagenode/orders.(*Service).settle.func2:276\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}

[Screenshot: Screen Shot 2020-01-08 at 21.10.03]

Thank you.

Check the path to your identity folder. Make sure it has 6 files in it.
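
For example, you can check it from a shell. This is a minimal sketch assuming the default identity location on Linux; adjust the path to wherever you generated your identity:

```bash
# Assumes the default Linux identity path; change it if yours lives elsewhere.
ls ~/.local/share/storj/identity/storagenode/
# Expect 6 files, typically:
# ca.cert  ca.key  ca.<timestamp>.cert  identity.cert  identity.key  identity.<timestamp>.cert

# Sanity-check that the identity was signed with your authorization token:
grep -c BEGIN ~/.local/share/storj/identity/storagenode/ca.cert        # should print 2
grep -c BEGIN ~/.local/share/storj/identity/storagenode/identity.cert  # should print 3
```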

@nerdatwork they are 100% in this folder, as it was working before (see the image).
But I believe I might have messed something up while swapping nodes. I will probably start a new topic about it.

3 posts were split to a new topic: Download: file not found

2 posts were split to a new topic: Error: rpccompat: dial tcp 127.0.0.1:7778: connect: connection refused

10 posts were split to a new topic: Contact: service ping satellite failed

Hi. I’m a beginner and I tried to get my node running, but after several tries it is still offline. Please help me. I want to make this bigger than that and I want some tips.
You can contact me on WhatsApp for faster communication: 0040773345041. Please contact me!

I have this error, but I don’t see anything about it:
ERROR piecestore upload failed
"error": "pieces error: filestore error: chmod config/storage/temp/blob-119950743.partial: no such file or directory"

It seems that a file is missing, but everything else works normally.

Log:
May 21 09:41:37 onestorj1 box[835]: 2020-05-21T09:41:37.940Z
ERROR piecestore upload failed
{"Piece ID": "VJB4MRJWTGFERCZRKMNFT2MMZEA6HFBLFRLBZYRG6NTYTDZQWOMA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "PUT", "error": "pieces error: filestore error: chmod config/storage/temp/blob-119950743.partial: no such file or directory", "errorVerbose": "pieces error: filestore error: chmod config/storage/temp/blob-119950743.partial: no such file or directory\n\tstorj.io/storj/storage/filestore.(*blobWriter).Commit:120\n\tstorj.io/storj/storagenode/pieces.(*Writer).Commit.func1:129\n\tstorj.io/storj/storagenode/pieces.(*Writer).Commit:197\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).doUpload:416\n\tstorj.io/storj/storagenode/piecestore.(*drpcEndpoint).Upload:208\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:997\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:107\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:56\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

So this could mean many things, but a deleted file isn’t one of them. It almost looks like the node failed to create the temporary file and then tried to modify the file it thought it had created, or something similar. That being said, unless you start seeing audits for that Piece ID, I wouldn’t worry about it.
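
If you want to keep an eye on it, something like this works. It is a sketch assuming a Docker container named `storagenode`; if your node runs as a service, grep your journal or log file instead:

```bash
# Look for audit or repair traffic on the piece from the failed upload.
# No output means the satellite has never audited it, so there is nothing to worry about.
docker logs storagenode 2>&1 \
  | grep "VJB4MRJWTGFERCZRKMNFT2MMZEA6HFBLFRLBZYRG6NTYTDZQWOMA" \
  | grep -E "GET_AUDIT|GET_REPAIR"
```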

6 posts were split to a new topic: Usedserialsdb error: disk I/O error: The device is not ready

Post your log between 3 backticks ```

2 posts were split to a new topic: Node offline, messed up with identity

A post was split to a new topic: Pieces error: marshaled piece header too big!

Hello,

I got this error today (it happened only once):

2021-03-11T17:02:11.121Z	ERROR	piecestore	download failed	{"Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:506\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1033\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

The disk shows no errors. The node never hung up, just the regular restarts from Storj container upgrades.

Is there something I need to worry about?

All the best !

You lost the piece; that is why it says “file does not exist”. Did you have a sudden restart or a power outage?

Nope, never. The node is on a UPS.

How old is your node? You can search your log for when the piece was uploaded. Your hardware could have restarted a month back and this only showed up now when the piece was being downloaded.
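
For example, a sketch assuming a Docker node named `storagenode`; adapt it to however you collect your logs:

```bash
# Pull the full piece history (upload, downloads, deletes) for the piece in question.
docker logs storagenode 2>&1 \
  | grep "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ"
```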

Hello,

Thanks for your reply.

The node has been up since early January, but I cleared the log after the upload.

I’ve searched further in the log.

  • A grep on the file returns these interesting entries:
2021-03-09T23:43:34.521Z INFO piecestore download started {"Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2021-03-09T23:43:35.479Z INFO piecestore downloaded {"Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2021-03-10T20:23:47.160Z INFO **collector delete expired** {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ"}
2021-03-10T21:42:54.825Z INFO piecestore download started {"Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET"}
2021-03-10T21:42:54.873Z ERROR piecestore download failed {"Piece ID": "WY64G3BUUNWGA7QD2ZVY33HXSIQ3ZLBWFFD7R2RCOSLTE35FXRCQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:74\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:506\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:1033\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:29\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:111\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:62\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:99\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}

What’s interesting is that this happened right after the “delete expired” collector message yesterday.

  • Then I grepped for the “delete expired” string and got many files from 4 to 11 March 2021, all from the EU West satellite.

So how can a file that is supposed to be deleted be requested for download? There is something I don’t understand.

Hi, this is an expected behavior of the system.

There are a bunch of things that happen asynchronously, hence there are some eventual-consistency behaviors that might not be obvious at first sight.

Let’s take an example scenario that demonstrates it:

  1. At 11:15:00 Alice on computer A sends a request to the satellite to download an object; the satellite responds with the order limits needed for the specific storage nodes.
  2. At 11:15:01 Bob on computer B sends a delete request for that object, which ends up sending a delete request to the storage node. The storage node deletes the piece.
  3. At 11:15:03 Alice on computer A starts using the order limits (a latency spike on the network caused it to take a bit longer)… however, by now none of the storage nodes have that data anymore.

This ends up in the logs as:

  1. object deleted
  2. download started
  3. download failed

With object expiration, the window in which this can happen is larger, because the satellite takes more time to delete expired objects/segments than the storage node does.
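
If you want to confirm the pattern in your own log, a rough check like this works. It is a sketch assuming a Docker container named `storagenode` and the log format shown above; adjust it to your setup:

```bash
# For every piece that failed a download with "file does not exist",
# check whether the collector had already deleted it as expired.
docker logs storagenode > node.log 2>&1

grep 'download failed' node.log | grep 'file does not exist' \
  | grep -o '"Piece ID": "[A-Z0-9]*"' | grep -o '[A-Z0-9]\{20,\}' | sort -u \
  | while read -r piece; do
      if grep 'delete expired' node.log | grep -q "$piece"; then
        echo "$piece: deleted as expired before the download (expected behavior)"
      else
        echo "$piece: no prior delete found (may be worth a closer look)"
      fi
    done
```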


Hello,

Thanks for the explanation. It’s clear now.

I was just worried that something was happening on my node that could potentially cause issues, especially for the GE (graceful exit) process.

All the best !