Node got suspended on Saltlake

Hello there,

Today I received an e-mail saying that my node was suspended on Saltlake.

[screenshot of the suspension e-mail]

I immediately checked my node dashboard, but there is nothing written about a suspension.

Is the e-mail faster than the dashboard?

Thanks in advance!
Dominik

Is the node ID from the email the same as on the dashboard?

Also try restarting the node; that may force a stat update. In the meantime you can always check the logs for errors. Look for lines with either GET_AUDIT or GET_REPAIR and ERROR.
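If you want to pull those lines out quickly, you could filter an exported copy of the log; here is a minimal sketch in Python (it assumes the log was first written to a file, for example node.log via docker logs <container> > node.log 2>&1, and the container and file names are only placeholders):

# Minimal sketch: print failed audit/repair downloads from an exported storagenode log.
# "node.log" is an assumed file name; export the log first,
# e.g. docker logs <container> > node.log 2>&1.
import sys

log_file = sys.argv[1] if len(sys.argv) > 1 else "node.log"

with open(log_file, encoding="utf-8", errors="replace") as f:
    for line in f:
        # Failed GET_AUDIT / GET_REPAIR transfers are the ones that hurt audit and suspension scores.
        if "ERROR" in line and ("GET_AUDIT" in line or "GET_REPAIR" in line):
            print(line.rstrip())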


Thanks for your quick reply.
Yes, the node ID from the mail and the dashboard are the same.
I also restarted my node, and there is still nothing written about the suspension on Saltlake.

[screenshot of the dashboard]

I have checked the logs from yesterday, and there are quite a few ERRORs with GET_REPAIR around the time the node got suspended. They all involve the Saltlake satellite.

2022-01-26T19:47:50.069Z	INFO	orders.1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE	finished
2022-01-26T19:47:50.578Z	ERROR	piecestore	download failed	{"Piece ID": "HY726K6VNEQ5YLTXXWRUY5ZQJRSRNQDBX4LEHNGEXY3CYXFEOA3A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:33428: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:33428: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:48:11.430Z	ERROR	piecestore	download failed	{"Piece ID": "IR4X6UJGQ3RPXAJK3JT6OIMEI4AG6JBQKWV3M5ZAT453Z6ZPQXBQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.68.50:40212: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.68.50:40212: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:51:59.908Z	ERROR	piecestore	download failed	{"Piece ID": "ZPGC73W7PRIMQFC36C6XS44CZXKQETMT2PRHVTOGJUQYZC2R7AXQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->157.90.126.231:33146: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->157.90.126.231:33146: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:52:01.186Z	ERROR	piecestore	download failed	{"Piece ID": "O67ZC7B42B6Y42OQ6B77PNWM2V7JS5RFCVC4UU43OEXI426WG2AA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->78.47.138.30:35198: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->78.47.138.30:35198: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:53:50.628Z	ERROR	piecestore	download failed	{"Piece ID": "4O5TQWPHMMELAOCT2YMDUQ76DEVN3IHODX5D5TFJE5GGQI6HSWAQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->168.119.232.255:33480: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->168.119.232.255:33480: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:54:46.002Z	ERROR	piecestore	download failed	{"Piece ID": "JGOC7E4NHJY7W4SI5S6WVTDUZBGS25AHZZEPI37EZPGJJ3JHUFJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:40256: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:40256: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:54:53.992Z	ERROR	piecestore	download failed	{"Piece ID": "UYP2AM4ITO6DSZWRY7YLW7WTVXMDDAGMRFKTGHP5AHXPGJWURIIQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->78.47.138.30:41406: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->78.47.138.30:41406: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:54:55.268Z	ERROR	piecestore	download failed	{"Piece ID": "BZIMTH27EYKN4C6XY2PGANWOMHSNSTWQ3PLNG5L6QDCBUGTGGOBQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.68.50:36256: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.68.50:36256: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:54:55.910Z	ERROR	piecestore	download failed	{"Piece ID": "NERCLQ27YR2MJWTBZEEW4LE73GIJYSUJHGYTP4JMHBFZPCCSI7OQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->78.47.138.30:43944: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->78.47.138.30:43944: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:55:46.468Z	ERROR	piecestore	download failed	{"Piece ID": "POUJRFMZ2JXQFU2TXCYFNFY4IIBU5AEZN6KXKL75LBMWEA2V3ZZA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->116.203.192.199:60472: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->116.203.192.199:60472: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:57:13.508Z	ERROR	piecestore	download failed	{"Piece ID": "EOJC7RHXQDCD2FIXAZADQTBF55RSCW7FITCPEOCEBX6GX43VDF2Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.50.71:35176: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.50.71:35176: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:57:15.428Z	ERROR	piecestore	download failed	{"Piece ID": "H3ZDORQ2MW3L6SZVVHTWL7JGGOL426Z2BS3ALOBIHV6OBHX2K7XQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.50.71:37652: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.50.71:37652: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:58:14.485Z	ERROR	piecestore	download failed	{"Piece ID": "F3HVM4K5ZM6RRZFKVJQBAJ7VCZKV53QGKXDIP7CWR5TR3FECRLEA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:41038: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:41038: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T19:59:02.559Z	ERROR	piecestore	download failed	{"Piece ID": "6AZPWQ5SMKBLGW5DKHWOOMKXUEE3HUJMZXZKCSH7W3BKKNUNZI6Q", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:37832: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:37832: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:00:39.473Z	ERROR	piecestore	download failed	{"Piece ID": "HY726K6VNEQ5YLTXXWRUY5ZQJRSRNQDBX4LEHNGEXY3CYXFEOA3A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:34732: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:34732: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:01:25.668Z	ERROR	piecestore	download failed	{"Piece ID": "VOCA4UW2AYL3TBK7AZ2EJ5KUUOUVK4NKTAFHGDYU633FWZL7C5LQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.50.71:45998: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.50.71:45998: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:01:36.362Z	ERROR	piecestore	download failed	{"Piece ID": "4OMZTRTT7UDXXYUYV7DUWKD7DWNEBYAMK4AJH7KH5HTDMKY6E5SA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:36302: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:36302: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:03:17.184Z	ERROR	piecestore	download failed	{"Piece ID": "JGOC7E4NHJY7W4SI5S6WVTDUZBGS25AHZZEPI37EZPGJJ3JHUFJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:34804: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:34804: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:07:09.348Z	ERROR	piecestore	download failed	{"Piece ID": "HDD4GXXR4DX55LMBEP7QRRSGIJEF5YLPMQXO2JLOMXLW76AAKCBA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.52.232:41376: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.52.232:41376: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:11:29.827Z	ERROR	piecestore	download failed	{"Piece ID": "P5NYNFOPK5TMNOSO4C2EETQTP4LA2VNGARYAC4QLB6TXA7QNROAA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->49.12.225.0:58342: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->49.12.225.0:58342: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:16:41.704Z	ERROR	piecestore	download failed	{"Piece ID": "HY726K6VNEQ5YLTXXWRUY5ZQJRSRNQDBX4LEHNGEXY3CYXFEOA3A", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET", "error": "write tcp 172.17.0.3:28967->184.104.224.99:36528: use of closed network connection", "errorVerbose": "write tcp 172.17.0.3:28967->184.104.224.99:36528: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:17:44.868Z	ERROR	piecestore	download failed	{"Piece ID": "EFQBI3LPNQDKIAEP2QIKZVFQA44JLKEOQXMAOSAJCMAMKDOYPIAA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.50.71:47264: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.50.71:47264: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:18:02.147Z	ERROR	piecestore	download failed	{"Piece ID": "2K3KT36DUSQPBO6BIYDSZD5LUMNTVAXA6AZ6UNLADACG2W273IVA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.52.232:41816: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.52.232:41816: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}
2022-01-26T20:24:02.467Z	ERROR	piecestore	download failed	{"Piece ID": "O3PTOVWR562DSH2LQH6JPB6WQTBCI3HMHQTDYPI2OJKJJZYFUUJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR", "error": "write tcp 172.17.0.3:28967->5.161.50.71:46666: write: connection timed out", "errorVerbose": "write tcp 172.17.0.3:28967->5.161.50.71:46666: write: connection timed out\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:317\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:392\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:619\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22"}

I have removed the INFO entries because of the character limit.

Maybe there was a problem with my ISP.

About 30 minutes after I got the suspension mail, the log looks normal again and I am also receiving pieces from Saltlake again.

2022-01-26T20:51:56.764Z	INFO	piecestore	download canceled	{"Piece ID": "MUQOJ2A22SQ6CRTIWP7ILWVVVSDK3DEO5A3672O5UPWZHAXL6BSA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-01-26T20:51:58.840Z	INFO	piecestore	downloaded	{"Piece ID": "T2ZKZWOZSQKU3W72QIKXRBU7L6Z4WVVUUUOPOIBKQEWJRANCQ2AA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-01-26T20:52:14.320Z	INFO	piecestore	download started	{"Piece ID": "K4NRNNHRD2ZOEP6LUT63MUUPU5E3L4ELGCLVGUFJZ4LPO77W3BFA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-01-26T20:52:14.441Z	INFO	piecestore	downloaded	{"Piece ID": "FOZFW3ZRXOCARVXLLOO726RCOV5J4E7NPY4C5HSOV5GZO4UBVYZQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-01-26T20:52:24.127Z	INFO	piecestore	downloaded	{"Piece ID": "K4NRNNHRD2ZOEP6LUT63MUUPU5E3L4ELGCLVGUFJZ4LPO77W3BFA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-01-26T20:53:14.994Z	INFO	piecestore	download started	{"Piece ID": "F3HVM4K5ZM6RRZFKVJQBAJ7VCZKV53QGKXDIP7CWR5TR3FECRLEA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-01-26T20:53:18.659Z	INFO	piecestore	downloaded	{"Piece ID": "F3HVM4K5ZM6RRZFKVJQBAJ7VCZKV53QGKXDIP7CWR5TR3FECRLEA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}
2022-01-26T20:53:30.964Z	INFO	piecestore	download started	{"Piece ID": "7Z6DPRLEQXADEVUM3UIPJ5PNTQ44TWCEZFNWYB5SINC2WDATVXDQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET"}

I will keep an eye on the logs in the coming days; hopefully it was just a temporary issue.

Dominik

After I came home today, I checked the dashboard again, and now there are warnings.

Does the suspension score go back up when the issue is fixed and the node starts operating normally?

Dominik

Yes, it does, but from your previous post it looked like it had mostly recovered already. Now the scores are lower again, which suggests you may still be having issues. It looks like connection problems; maybe your router is having trouble handling the load. Keep an eye on those logs: those errors should never appear on GET_AUDIT or GET_REPAIR requests. They can happen on other transfers, just not on these, since the audit and repair workers use a very long timeout.
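If you want to see whether those failures are still concentrated on one satellite, a small tally over an exported log can help; a minimal sketch (again assuming the log was saved to node.log, which is a placeholder name):

# Minimal sketch: count failed GET_AUDIT / GET_REPAIR downloads per satellite
# in an exported storagenode log ("node.log" is an assumed file name).
import json
from collections import Counter

counts = Counter()

with open("node.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        if "ERROR" not in line or ("GET_AUDIT" not in line and "GET_REPAIR" not in line):
            continue
        # The structured part of each log line is a JSON object at the end of the line.
        try:
            fields = json.loads(line[line.index("{"):])
        except ValueError:  # no JSON object found, or it fails to parse
            continue
        counts[fields.get("Satellite ID", "unknown")] += 1

for satellite, n in counts.most_common():
    print(f"{n:5d}  {satellite}")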

You could try a router restart; that sometimes helps.


I have checked the logs again. The suspension score has gone back to 100% on all nodes, so it seems the issue has been resolved. Thanks a lot for your help @BrightSilence.


Hi,

I am having a variation of this error: I am getting frequent "contact:service ping satellite failed" errors (152 times in the last 3 weeks). This seems to affect only satellite 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE. When this error occurs, my QUIC status shows up as "Misconfigured", though the node seems to keep working normally. The problem goes away by itself after some time. Please find log entries of the last occurrence below:

2023-01-28T19:35:29.537Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:38673->10.103.0.10:53: i/o timeout", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:38673->10.103.0.10:53: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:35:41.050Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 2, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.0.14:45278->10.103.0.10:53: i/o timeout", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.0.14:45278->10.103.0.10:53: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:35:51.596Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 3, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: server misbehaving", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: server misbehaving\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:36:04.139Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 4, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: server misbehaving", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: server misbehaving\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:36:12.659Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 5, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:36:39.168Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 6, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.2.13:56231->10.103.0.10:53: i/o timeout", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.2.13:56231->10.103.0.10:53: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:37:11.638Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 7, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:38:26.130Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 8, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:33922->10.103.0.10:53: i/o timeout", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:33922->10.103.0.10:53: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:40:34.607Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 9, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:44:51.077Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 10, "error": "ping satellite: check-in ratelimit: node rate limited by id", "errorVerbose": "ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-01-28T19:53:33.603Z        ERROR   contact:service ping satellite failed   {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 11, "error": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:33765->10.103.0.10:53: i/o timeout", "errorVerbose": "ping satellite: check-in network: failed to resolve IP from address: marstorage.synology.me:28967, err: lookup marstorage.synology.me on 10.103.0.10:53: read udp 10.101.1.12:33765->10.103.0.10:53: i/o timeout\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:139\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:100\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:160\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

The root cause seems to be my domain not resolving. My question is: where does this problem occur, on my node or on the satellite?

I have verified that, when the problem occurs, I can resolve my domain without any issue.
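To catch those intermittent misses on the resolver the node is actually using (10.103.0.10 in the log above), one option is to poll the lookup from the same host or container the node runs in and log any failures; a minimal sketch using the system resolver (the hostname is taken from the log, the 60-second interval is an arbitrary choice):

# Minimal sketch: periodically resolve the node's external address and log failures.
# Run it on the same host/container as the node so it exercises the same resolver
# (10.103.0.10 in the log above).
import socket
import time
from datetime import datetime, timezone

HOSTNAME = "marstorage.synology.me"  # external address taken from the log

while True:
    timestamp = datetime.now(timezone.utc).isoformat()
    try:
        addresses = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 28967)})
        print(f"{timestamp} OK {addresses}")
    except socket.gaierror as exc:
        print(f"{timestamp} FAILED {exc}")
    time.sleep(60)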

How do I avoid these errors? Should I switch my DDNS provider (currently Synology)? If so, can anybody recommend a good free one?

Brgds, Marc

This is the Saltlake satellite, so this was moved out of the ap1.storj.io issue thread.

On the satellite side: it sometimes cannot resolve the provided domain. But the reason could be a misconfigured DDNS provider.

The proven recommendations for DDNS providers are the same:

Thanks, I will set up an alternative DDNS provider and try with that. Really funny that this only affects one satellite.

Brgds, Marc


Maybe your node stores more data from the customers of this satellite, so audits happened more often for that data.