Audit scores dropping on ap1; Storj is working on a fix, SNOs don't need to do anything.

ERROR piecedeleter could not send delete piece to trash >> Pieces error: v0pieceinfodb: sql: no rows in result set
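For reference, a rough way to count how often a node is hitting this (a sketch, assuming a docker container named storagenode and the default log format; adjust for your setup):

docker logs storagenode 2>&1 | grep -c "could not send delete piece to trash"
docker logs storagenode 2>&1 | grep "could not send delete piece to trash" | grep -oE 'Satellite ID"?: "?[A-Za-z0-9]+' | sort | uniq -c

The second line groups the errors per satellite, assuming the entries carry a Satellite ID field.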

This is unlikely to be related to the issue on AP1. If you are seeing similar errors on EU1, please post your log in a new thread; let's keep this one focused on the issue on AP1. Thanks.


Thanks for pointing this out - the devs are already aware of this, and it will not affect node reputation. They are already working on applying the fix, so please stand by.


12 hours without an audit failure on any satellite… woop woop
Seems like the fix is working.

A bit too close for comfort, of course that's easy to say…

I have this situation as of this morning:

I was looking through the storagenode.log file and found quite a lot of lines like these, 1 or 2 almost every minute:

2021-07-24T12:22:23.248+0200 ERROR piecestore download failed {Piece ID: ZGW2ZEQM44G2PGUU2LINYHVI5HKUIMPSZD4OJNVL3MRVFSFW2YKQ, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Action: GET_REPAIR, error: file does not exist, errorVerbose: file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51}
2021-07-24T12:22:29.947+0200 INFO piecestore upload started {Piece ID: ENIIT5INBZAFCCG5YZ2IK6X3O47JVXZEXKQW32NLDHKPUGOZT46Q, Satellite ID: 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB, Action: PUT_REPAIR, Available Space: 1124694833152}
2021-07-24T12:22:30.584+0200 INFO piecestore uploaded {Piece ID: ENIIT5INBZAFCCG5YZ2IK6X3O47JVXZEXKQW32NLDHKPUGOZT46Q, Satellite ID: 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB, Action: PUT_REPAIR, Size: 2319360}
2021-07-24T12:22:30.978+0200 INFO piecestore download started {Piece ID: AW77ICRZWM5XAWRQ4VQ772BERNBS4XCGIYI4XERBHIY7ROVWHPAQ, Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Action: GET_REPAIR}
2021-07-24T12:22:31.052+0200 INFO piecestore downloaded {Piece ID: AW77ICRZWM5XAWRQ4VQ772BERNBS4XCGIYI4XERBHIY7ROVWHPAQ, Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Action: GET_REPAIR}
2021-07-24T12:22:31.360+0200 INFO piecestore download started {Piece ID: T32U7HMCFBVGDQPZ6MXTJVHUJNFIRSSQFXNE7T7YLOG7OHTGYHFQ, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Action: GET_REPAIR}
2021-07-24T12:22:31.442+0200 ERROR piecestore download failed {Piece ID: T32U7HMCFBVGDQPZ6MXTJVHUJNFIRSSQFXNE7T7YLOG7OHTGYHFQ, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Action: GET_REPAIR, error: file does not exist, errorVerbose: file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51}
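For a rough sense of how widespread these are, the "file does not exist" failures can be counted per satellite straight from the log (a sketch, assuming the storagenode.log file quoted above; field quoting may differ slightly between setups):

grep "download failed" storagenode.log | grep "file does not exist" | grep -oE 'Satellite ID"?: "?[A-Za-z0-9]+' | sort | uniq -c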

Also, I see these in the log:

2021-07-24T12:17:36.300+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:36.422+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:36.778+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:37.080+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:37.292+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:37.763+0200 INFO piecestore uploaded {Piece ID: U4HVWXSMWPQQZXW22NR3KSFBMUGO4QB2HJ4POZQ47DFDAOFR3UGA, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT_REPAIR, Size: 2212096}
2021-07-24T12:17:37.824+0200 WARN contact:service Your node is still considered to be online but encountered an error. {Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Error: contact: failed to dial storage node (ID: 1FYXCFTDX4DN5R5Y5GHTmvBrttN3sBYWawiWzgx6UNdSdQameJ) at address birtok.myddns.me:28967 using QUIC: rpc: quic: timeout: no recent network activity}
2021-07-24T12:17:39.848+0200 INFO piecestore download started {Piece ID: C7BKMP5LMBRTRFMXZDFZAUDHCMQXU6G477L36FCEPUMZFNN5U6DQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: GET_REPAIR}
2021-07-24T12:17:40.658+0200 INFO piecestore download started {Piece ID: SWTKZD53CVH3ZVHOFMESAI6UWWL5SJFOE4NB4B6UZEDET2EYU4TQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET_REPAIR}
2021-07-24T12:17:40.807+0200 INFO piecestore downloaded {Piece ID: SWTKZD53CVH3ZVHOFMESAI6UWWL5SJFOE4NB4B6UZEDET2EYU4TQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET_REPAIR}
2021-07-24T12:17:40.838+0200 INFO piecestore downloaded {Piece ID: C7BKMP5LMBRTRFMXZDFZAUDHCMQXU6G477L36FCEPUMZFNN5U6DQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: GET_REPAIR}
2021-07-24T12:17:41.056+0200 INFO piecestore upload started {Piece ID: TTOYNX4I4O6TLBVYDNNZ3N6BCMLSV44ADLHN7RRILQFF64VUZSIQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, Available Space: 1124741660928}
2021-07-24T12:17:42.360+0200 INFO piecestore uploaded {Piece ID: TTOYNX4I4O6TLBVYDNNZ3N6BCMLSV44ADLHN7RRILQFF64VUZSIQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, Size: 181504}
2021-07-24T12:17:42.543+0200 INFO piecestore upload started {Piece ID: 77ZAS4SMHPJEF5CRABRYOED6EEAIPUSPAWNYDE7NE5F47S3VIC6Q, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, Available Space: 1124741478912}
2021-07-24T12:17:44.838+0200 INFO piecestore download started {Piece ID: XFTBOE5Y7JNCD4FIYFSPHUOWKFSE25Y3D2EO3Y67S2XW2MOP7NEA, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Action: GET_REPAIR}
2021-07-24T12:17:45.143+0200 INFO piecestore downloaded {Piece ID: XFTBOE5Y7JNCD4FIYFSPHUOWKFSE25Y3D2EO3Y67S2XW2MOP7NEA, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, Action: GET_REPAIR}
2021-07-24T12:17:48.128+0200 INFO piecestore upload started {Piece ID: MDI5T6PPSOQUGTKRT65NTYCZB3WDKBS2KXUJVDLEHCI5LLVJOTWQ, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, Available Space: 1124741478912}
2021-07-24T12:17:48.431+0200 INFO piecestore download started {Piece ID: HRBQDR6PBOUKPMYLOMSE3IQH3XQXXD6RSMCRSM77MGYQGTKFMFGQ, Satellite ID: 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB, Action: GET_REPAIR}
2021-07-24T12:17:48.705+0200 INFO piecestore upload started {Piece ID: EJBQ54NCDPB6UFX6F44IZJWMFFJ5AFYJV4LDV64SGXR5ETOIA5TA, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, Action: PUT, Available Space: 1124741478912}
2021-07-24T12:17:48.829+0200 INFO piecestore upload started {Piece ID: K4DK2ZB55XNZFB76U4R26S7EO4UHI267GGVYTTD2IBA7ZDS3A4MQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: PUT, Available Space: 1124741478912}
2021-07-24T12:17:49.146+0200 INFO piecestore uploaded {Piece ID: K4DK2ZB55XNZFB76U4R26S7EO4UHI267GGVYTTD2IBA7ZDS3A4MQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: PUT, Size: 145408}

Is it related to this issue?

Spoke too soon… the problem persists.
During the last 4 hours I got another 2 failures; trying to track them down…

2 posts were merged into an existing topic: Audit weirdness

The failed deletions seem to have stopped and I don't seem to be getting audited on them anymore; however, I'm still failing repair traffic for these deleted pieces (which I guess is sort of correct, as they are deleted). I guess that's the next part of the fixup.

2021-07-24T09:24:03.165+0100 ERROR piecestore download failed {"Piece ID": "ZY6QV3GFEUZBTNSFYF6MWWHNMUXGVV6VDRPU6R2I66DMMEK6XJWQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
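To confirm it really is only repair traffic and no longer audits, a quick breakdown of the failures by action helps (a sketch, assuming a log file called storagenode.log and the standard Action field as in the line above):

grep "download failed" storagenode.log | grep "file does not exist" | grep -oE 'Action"?: "?[A-Z_]+' | sort | uniq -c

If only GET_REPAIR shows up and GET_AUDIT is gone, the audit side of the fix is holding.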

It does seem to be running a lot better… but still seeing issues, even if very rarely… the odd thing is that I cannot find it in my logs… I can only assume either my logs or my dashboard is wrong.

I've been looking everywhere and I just cannot find the failed audit for ap1, even though it just dropped in audit score on the dashboard…

I've even checked my log consistency - entries are recorded every minute… and since it exports on a 10-minute schedule… and docker will save the last 10 MB worth of logs…

But yeah, repairs for today sure don't look to be doing too well;
usually that is 99.5% or better:

========== REPAIR DOWNLOAD ====
Failed:                4030
Fail Rate:             8.720%
Canceled:              0
Cancel Rate:           0.000%
Successful:            42188
Success Rate:          91.280%
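(Those figures are just counts of log lines; roughly the same success rate can be reproduced with something like the following, a sketch that assumes repair downloads are logged as "downloaded" / "download failed" with Action GET_REPAIR, as in the excerpts above:)

ok=$(grep -c 'downloaded.*GET_REPAIR' storagenode.log)
fail=$(grep -c 'download failed.*GET_REPAIR' storagenode.log)
awk -v o="$ok" -v f="$fail" 'BEGIN { printf "Successful: %d  Failed: %d  Success Rate: %.3f%%\n", o, f, (o+f) ? 100*o/(o+f) : 0 }'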

us2 is having a different issue, discussed here: Audit weirdness (us2 satellite). (The team is aware; no additional information is needed.)

But please verify by looking at the piece history. If it’s the same as mentioned there, you can post logs in the appropriate topic. If you’re seeing a different issue, please start a new topic. Let’s try and keep this one focused on the issue on ap1. (Also looking at you @SGC :wink: )
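For example, to pull the full history of one of the failing pieces from the earlier log excerpt (a sketch, assuming a docker container named storagenode; substitute the piece ID you are actually seeing):

docker logs storagenode 2>&1 | grep "ZY6QV3GFEUZBTNSFYF6MWWHNMUXGVV6VDRPU6R2I66DMMEK6XJWQ"

If the history shows the piece being deleted before the failed download, it matches the pattern described there; if there is no deletion record at all, that points to something else.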


Moved out to Audit weirdness


No. See Warning and error logs since updating to latest release

I have the same for the ap1 satellite now


Yep - AP1 hit for me too.

The AP1 failures are related to a different issue.

I seem to be failing audits on ap1.storj.io as it's auditing pieces it has already deleted.
This is a satellite issue, not a storagenode issue.


Seems so, yes… I'm not really too familiar with the Storj network mechanics… but BrightSilence suggested the same thing: it's a satellite issue.

I'm going to shut down my nodes if the satellite audit scores start hitting 70% or so…
to avoid DQ.


I'm not seeing anything hit my dashboard yet, it's still all 100%, but there are errors in my logs. My newest node seems to be hit the hardest.

1 of these:
2021-07-23T00:30:03.080Z ERROR piecestore download failed {"Piece ID": "M4JJX762SFXPUQGEMQWSIDRJHDN7KDCEW3X5ABXFGRY7ZEOJTDKQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "error": "file does not exist"

10 of these, same sat:
2021-07-23T00:30:03.080Z ERROR piecestore download failed {"Piece ID": "M4JJX762SFXPUQGEMQWSIDRJHDN7KDCEW3X5ABXFGRY7ZEOJTDKQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "error": "file does not exist"

80 of these, mostly same sat:
Piece ID": "WXMCZYCAYEWVSOVWUL3L4NK26WL47OSE6UZ6LJFK6OIJJ3R4N4KA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "error": "unexpected EOF"

Seems to be another satellite giving bad audits as well.
Never mind - I found I had checked the wrong log.

2021-07-22T23:42:22.240Z ERROR piecestore download failed {"Piece ID": "GJLVEDSUDLETGCTXQCG2LKLJUEWAN2LOKSRUSMOVZRMUA7PQSWLQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}


@deathlessdd @KernelPanick @SGC @waistcoat @Ted @LinuxNet
Please search your logs for all records with this piece.
The problem here is only with "delete expired" followed by "download failed". If your node lost pieces for other reasons, it's not related to the current problem with the us2 satellite.
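For example (a sketch, assuming a docker node; the container name and how far back your logs go will differ per setup, and the piece ID below is a placeholder for one from your own failed audit):

docker logs storagenode 2>&1 | grep "<PIECE_ID_FROM_FAILED_AUDIT>" | grep -E "delete expired|download failed"

If a "delete expired" record shows up followed later by the "download failed" for the same piece, it is this known problem; if there is no deletion record at all, the piece was likely lost for another reason and would need its own topic.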

Unfortunately I don't have logs going far back because the node recently updated, but if more and more people are seeing the same issues, it's not because my node lost a file.
My node is still failing audits, though.

2021-07-22T23:53:46.965Z        ERROR   piecestore      download failed {"Piece ID": "ORX5MM6SCZOUJ5HPTU7QRDON34EKYLAJOUUUO4HC4WZ2HEKPEX6Q", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
2021-07-23T10:28:23.241Z        ERROR   piecestore      download failed {"Piece ID": "RAYFOF5VE2LYJQBGXJ443VT3L3C7RLE2BDQCEJAXSL2VJJAO5OCA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "error": "file does not exist", "errorVerbose": "file does not exist\n\tstorj.io/common/rpc/rpcstatus.Wrap:73\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download:534\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func2:217\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:58\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:102\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:60\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:95\n\tstorj.io/drpc/drpcctx.(*Tracker).track:51"}
