Graceful exit error: failed to send notification about piece transfer

So, I have started graceful exit, and after about 2 hours the completion percentage on all satellites is still 0.
I checked my logs and I see all these errors.

Is there anything I can do?

2023-07-08T12:02:27.893Z	ERROR	gracefulexit:chore.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:02:35.103Z	ERROR	gracefulexit:chore.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:02:35.422Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:02:37.533Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:02:56.793Z	ERROR	gracefulexit:chore.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:02:57.968Z	ERROR	gracefulexit:chore.12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "error": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:60168->35.228.31.57:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:06.038Z	ERROR	piecetransfer	failed to put piece	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Piece ID": "HJBYGX4ESJZZSIN5YD6MU6NZUHWCDHYSTKMCPL4CXXW56SUYZTTQ", "Storagenode ID": "1a2beRVJZDXs3GSQ8U2inz1kgEgrc3bFRPcyrAG8hNjqXHhqAB", "error": "ecclient: upload failed (node:1a2beRVJZDXs3GSQ8U2inz1kgEgrc3bFRPcyrAG8hNjqXHhqAB, address:86.52.176.37:28967): protocol: expected piece hash; context deadline exceeded; EOF", "errorVerbose": "ecclient: upload failed (node:1a2beRVJZDXs3GSQ8U2inz1kgEgrc3bFRPcyrAG8hNjqXHhqAB, address:86.52.176.37:28967): protocol: expected piece hash; context deadline exceeded; EOF\n\tstorj.io/uplink/private/ecclient.(*ecClient).PutPiece:244\n\tstorj.io/storj/storagenode/piecetransfer.(*service).TransferPiece:148\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:100\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:06.038Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:06.713Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:06.916Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:07.993Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:08.938Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:18.751Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:19.171Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:22.839Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
2023-07-08T12:03:31.662Z	ERROR	gracefulexit:chore.121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@ap1.storj.io:7777	failed to send notification about piece transfer.	{"process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer", "errorVerbose": "write tcp 172.17.0.3:42020->34.80.215.116:7777: write: connection reset by peer\n\tstorj.io/drpc/drpcstream.(*Stream).rawFlushLocked:401\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:462\n\tstorj.io/common/pb.(*drpcSatelliteGracefulExit_ProcessClient).Send:83\n\tstorj.io/storj/storagenode/gracefulexit.(*Worker).Run.func3:101\n\tstorj.io/common/sync2.(*Limiter).Go.func1:49"}
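To see which satellite is producing these errors most often, the log can be summarized per satellite. This is only a sketch: the sample lines below (with shortened node IDs) stand in for the real output of `docker logs storagenode 2>&1`, and the container name is an assumption.

```shell
# Sample log lines standing in for the real node log (IDs shortened).
cat <<'EOF' > /tmp/node.log
2023-07-08T12:02:27.893Z	ERROR	gracefulexit:chore.12rfG3@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.
2023-07-08T12:02:35.422Z	ERROR	gracefulexit:chore.121RTS@ap1.storj.io:7777	failed to send notification about piece transfer.
2023-07-08T12:02:37.533Z	ERROR	gracefulexit:chore.121RTS@ap1.storj.io:7777	failed to send notification about piece transfer.
EOF

# Count the "failed to send notification" errors per satellite address:
# strip everything up to the satellite hostname, then tally.
grep 'failed to send notification' /tmp/node.log \
  | sed 's/.*chore\.[^@]*@//; s/:7777.*//' \
  | sort | uniq -c | sort -rn
```

With the sample input this prints a count of 2 for `ap1.storj.io` and 1 for `europe-north-1.tardigrade.io`, which makes it easy to see whether one satellite dominates.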

Very strange though: at the same time I have logs showing that everything is OK, and for some reason I am still serving download traffic…

2023-07-08T12:10:16.201Z	INFO	piecestore	download started	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 145152, "Remote Address": "216.66.40.82:60876"}
2023-07-08T12:10:17.044Z	INFO	piecestore	downloaded	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 0, "Size": 145152, "Remote Address": "216.66.40.82:60876"}
2023-07-08T12:10:19.118Z	INFO	piecestore	download started	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 507136, "Size": 181504, "Remote Address": "216.66.40.82:24726"}
2023-07-08T12:10:19.749Z	INFO	piecestore	downloaded	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 507136, "Size": 181504, "Remote Address": "216.66.40.82:24726"}
2023-07-08T12:10:20.935Z	INFO	piecestore	download started	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 869632, "Size": 181248, "Remote Address": "216.66.40.82:23520"}
2023-07-08T12:10:21.533Z	INFO	piecestore	downloaded	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 869632, "Size": 181248, "Remote Address": "216.66.40.82:23520"}
2023-07-08T12:10:22.607Z	INFO	piecestore	download started	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 1231872, "Size": 181504, "Remote Address": "216.66.40.82:23520"}
2023-07-08T12:10:23.454Z	INFO	piecestore	downloaded	{"process": "storagenode", "Piece ID": "S7XVS53LMGA5ATOCYNRCFTOFGUWBSBKUIPPOANZF7QNQCAROYFXA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET", "Offset": 1231872, "Size": 181504, "Remote Address": "216.66.40.82:23520"}

And some normal transfers to other nodes as well:

2023-07-08T12:09:57.084Z	INFO	piecetransfer	piece transferred to new storagenode	{"process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Piece ID": "LXI6JC4YFISTROGZOIE54JCCGAK527OVDBRRSML3S34LANHSPN2Q", "Storagenode ID": "1Fgu7ESXYYhkQ15ceCdmAbb7QWh8QM3rcNZw8Dfs84E9Jhai4Y"}

Have you changed the default values for graceful exit?
The “connection reset by peer” errors mean that the network is unstable: perhaps your router cannot keep up with or handle many parallel connections, or you are using WiFi.

This means that the target node returned a wrong hash for the transferred piece (or did not return one at all). It could mean that the piece is corrupted on your side, or that it was corrupted during the transfer (which again raises questions about your network). Your node will try to transfer this piece 4 more times, to 4 different nodes, before the satellite considers it a failed transfer.
So, if the piece is transferred in the end, the target node is to blame.
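A quick way to check whether those retries succeed overall is to compare the `piecetransfer` success and failure lines. This is a sketch, with sample lines (shortened IDs) standing in for the real log:

```shell
# Sample piecetransfer lines standing in for real log output.
cat <<'EOF' > /tmp/ge.log
2023-07-08T12:03:06.038Z	ERROR	piecetransfer	failed to put piece	{"Satellite ID": "121RTS"}
2023-07-08T12:09:57.084Z	INFO	piecetransfer	piece transferred to new storagenode	{"Satellite ID": "12rfG3"}
2023-07-08T12:10:12.000Z	INFO	piecetransfer	piece transferred to new storagenode	{"Satellite ID": "121RTS"}
EOF

# Tally successes vs. failures across all transfer attempts.
ok=$(grep -c 'piece transferred to new storagenode' /tmp/ge.log)
fail=$(grep -c 'failed to put piece' /tmp/ge.log)
echo "succeeded=$ok failed=$fail"
# Integer failure percentage across all attempts.
echo "failure rate: $(( 100 * fail / (ok + fail) ))%"
```

On the sample input this reports `succeeded=2 failed=1` and a 33% failure rate; run against the full log it gives a rough sense of whether the retries are eventually landing.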

Update: it seems the errors related to the satellites are caused by a bug, and our engineers are working to fix it.

This is my current status after 40 hours:

2023-07-09T19:01:22.485Z	INFO	Configuration loaded	{"process": "storagenode", "Location": "/app/config/config.yaml"}
2023-07-09T19:01:22.485Z	INFO	Anonymized tracing enabled	{"process": "storagenode"}
2023-07-09T19:01:22.526Z	INFO	Identity loaded.	{"process": "storagenode", "Node ID": "1d2wvtcmGsUhhGY3Ud8kiFqYNSPx3dPygAzuKu7RwioBZovidr"}

Domain Name                        Node ID                                              Percent Complete  Successful  Completion Receipt
us2.storj.io:7777                  12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo   100.00%           Y           0a4730450221009785af6c20b5dde2cf6c2da36de3a59c20c911861b52ae83a5a90fd48c400a230220345e0495bddb167698996279b742fb30560ae95106dfebbedd74cc74a3b28ef5122004489f5245ded48d2a8ac8fb5f5cd1c6a638f7c6e75efd800ef2d720000000001a2051d191a3fff24419002d88cab95b2478423fc5b619ee7b70223835b000000000220c088daca5a50610a895fef902  
saltlake.tardigrade.io:7777        1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE   0.56%             N           N/A
ap1.storj.io:7777                  121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6  0.06%             N           N/A
us1.storj.io:7777                  12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S  0.01%             N           N/A
eu1.storj.io:7777                  12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs  0.01%             N           N/A
europe-north-1.tardigrade.io:7777  12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB  0.66%             N           N/A  

Doesn’t seem to be making a lot of progress…

My network is pretty fine on a 100 Mbps Ethernet connection. I monitor errors with the script I have posted here as well, and I never saw an error rate above 3% before I started the graceful exit.

The only thing I want to make sure of is that I receive my held amount, which is quite a lot for me right now…

There is a bug where the satellite can time out while your node is processing, and it then forgets what the node was supposed to be transferring. It is currently being worked on. I don’t have an ETA, but I would assume it will be fixed soon. In the meantime you can continue with GE: some of the work will get done, as you’ve noted, but it will be very slow until the bug is addressed.

Well, my node just got disqualified from one of the satellites.
Did I just lose the held amount from that one, or is there any way to get it back?

Is there any way to get support or do something about this?

My node is pretty fine, and I would rather not lose my money over a bug that is not even on my side…

Btw, I know about this, and I have no failed audits.

We note you put in a support ticket on this. Alexey will provide guidance on next steps. But don’t worry, you haven’t lost your held amount.


I would need your logs since the GE start, because there is an incredible amount of failed transfers from your node: more than 60% :warning: on Europe-North-1 and Saltlake, and I do not believe that we have more than 7700 unreliable nodes (each piece will be attempted 5 times, to 5 different nodes, before it is considered failed).
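For sharing logs with support, the graceful-exit-related lines since the GE start can be pulled out with a date filter. This is a sketch: in practice the input would be `docker logs storagenode 2>&1` (the container name is an assumption), and the cutoff date must be adjusted to your actual GE start.

```shell
# Sample lines standing in for the full node log.
cat <<'EOF' > /tmp/full.log
2023-07-06T09:00:00.000Z	INFO	piecestore	download started
2023-07-08T12:02:27.893Z	ERROR	gracefulexit:chore.12rfG3@europe-north-1.tardigrade.io:7777	failed to send notification about piece transfer.
2023-07-08T12:09:57.084Z	INFO	piecetransfer	piece transferred to new storagenode
EOF

# ISO-8601 timestamps sort lexically, so a plain string comparison on the
# first field works as a date cutoff; then keep only GE-related lines.
awk '$1 >= "2023-07-07"' /tmp/full.log \
  | grep -E 'gracefulexit|piecetransfer' \
  > /tmp/ge-since-start.log
wc -l < /tmp/ge-since-start.log
```

With the sample input this keeps the 2 lines dated on or after 2023-07-07; the resulting file is what could be attached to the support ticket.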