Hello
One of my big nodes updated to 1.67.1 two hours ago. The filewalker is still running, and my suspension score dropped to 95.71% on Saltlake. It's a 12 TB WD Gold drive. No data lost, the drive works perfectly. The node has been online for almost 2 years and the suspension score was always at 100%, until the latest update. Nothing else changed.
My first node also updated to 1.67.1 and seems to have issues. The node is pretty new and the log shows constant upload/download messages without issues.
Disk space used and the suspension score seem to have dropped to zero since the update.
I am comparing it with a very new node on the same network that has not been upgraded yet.
Just saw Current Month Earnings in Node V1.67.1 - #7 by BrightSilence, which describes a similar issue and also refers to a pull request to fix it.
This is a different issue. Your node has a problem with the online score, not the suspension score, and the reasons are different.
You can check when your node was offline with these scripts:
@daniel1 please check your logs for the reason for the drop in the suspension score:
https://support.storj.io/hc/en-us/articles/360042257912-Suspension-mode
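For example, something like this (a rough sketch, assuming the default docker setup and that logs were not redirected to a file) should surface the audit- and repair-related errors that can affect the suspension score:

```bash
# Suspension score is driven by unknown errors on GET_AUDIT / GET_REPAIR requests,
# so filter the log for those actions and keep only the lines that mention an error.
docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep -iE "error|failed" | tail -n 20
```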
I will recheck as soon as I am back at the system.
The point was more about seeing issues right after the update to 1.67.1. As the networking is the same as for the other nodes I am running and none of the others is showing an issue, I was/am assuming an issue related to the new version.
But as said, I will double-check this evening.
It looks like I also have several problems:
| 2022-11-18T09:32:28.524+0200 | ERROR | piecestore | download failed | {Piece ID: FWCWUZSXHZYH3N2UJ3U542EJJW6VFHMAK2BAZVV6QBYTLSWXF25Q, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:38980: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:38980: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T09:46:18.462+0200 | ERROR | piecestore | download failed | {Piece ID: UPCN5LSGNQI53YGIQV65WKNYJK73QV5WJ4FT6K7VNFK56SE3Y4FQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:36896: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:36896: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T10:48:13.386+0200 | ERROR | piecestore | download failed | {Piece ID: 3MKN5XVW3GVRRHUGXPPCGOSIMVVW2ZFIAANQBEPHEHBOAP2QB7AA, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:57736: wsasend: An existing connection was forcibly closed by the remote host., errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:57736: wsasend: An existing connection was forcibly closed by the remote host.\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T10:51:56.065+0200 | ERROR | piecestore | download failed | {Piece ID: ZCPDAYY3GEIHD7P5UTQ3ZECED56T2M2T4I6L5JWORGJ6KJXZ3L4A, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: use of closed network connection, errorVerbose: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T10:57:14.015+0200 | ERROR | piecestore | download failed | {Piece ID: 6XYMI24UGYPY3SOPPGNW25Z7FAZ67USNGQP7ND5QZX6XYANQL3FQ, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:25450: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:25450: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T11:15:16.308+0200 | ERROR | piecestore | download failed | {Piece ID: NVA7DP4VFGS4NFJKCSD4QMB3VAITLDTLUW22FHTMCA34OKHEBZYA, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:25646: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:25646: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T11:26:18.668+0200 | ERROR | piecestore | download failed | {Piece ID: GMAIGJCQR62HOCUKE42WTRI7L75HVQTJTCHCUFYRD55JIVT7M52Q, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:17860: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:17860: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
| 2022-11-18T12:12:47.564+0200 | ERROR | piecestore | download failed | {Piece ID: WHGZDAK4IWQQDJQH47YDNWOOUP63LQV7EF7IO22RELOBJZPJQGGA, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, Action: GET, error: write tcp 192.168.88.242:56000->82.165.184.180:15306: use of closed network connection, errorVerbose: write tcp 192.168.88.242:56000->82.165.184.180:15306: use of closed network connection\n\tstorj.io/drpc/drpcstream.(*Stream).rawWriteLocked:326\n\tstorj.io/drpc/drpcstream.(*Stream).MsgSend:404\n\tstorj.io/common/pb.(*drpcPiecestore_DownloadStream).Send:317\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Download.func5.1:628\n\tstorj.io/common/rpc/rpctimeout.Run.func1:22} |
I have them on several nodes.
The online status also dropped.
This is the usual "context cancelled" type of error: your nodes were not fast enough and lost the race for those pieces.
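To get a rough idea of how often this happens, a quick tally like this (a sketch, assuming the default docker logging and the standard log wording) compares failed downloads with completed ones:

```bash
# Count downloads that failed (lost races, cancelled transfers) ...
docker logs storagenode 2>&1 | grep -c "download failed"
# ... versus downloads that completed successfully.
docker logs storagenode 2>&1 | grep -c "downloaded"
```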
I ran `docker logs storagenode 2>&1 | grep blobs | tail` right after I saw the suspension %, and it showed nothing. I checked my node right now, and it went back to 100%. Maybe it dropped because the HDD was too busy checking the data after the update? At the moment only the "Current Month Earnings" seems way off.
I also checked with `docker logs storagenode 2>&1 | grep -E "GET_AUDIT|GET_REPAIR" | grep failed`; it shows nothing either.
Since you didn't redirect logs to a file, the previous logs are lost with `docker rm storagenode`, so you likely will not see the earlier errors.
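For reference, one way to keep logs across container re-creations (a sketch only; the paths are placeholders, adjust them to your setup) is to point the node's log output at a file inside the mounted config directory:

```bash
# Add a log.output setting to config.yaml so logs survive "docker rm storagenode".
# /path/to/storagenode is the host directory you mount as the config location.
echo 'log.output: "/app/config/node.log"' >> /path/to/storagenode/config.yaml
docker restart storagenode
```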
The score dropped right after the update, and I haven't run `docker rm` since then, so shouldn't the errors still be there if it happened right after the update? Also, should I worry about it? It went back up to 100%.
The suspension email will only arrive after a successful check-in on the satellite due to this bug: [storagenode] The email about suspension is coming only after check-in to satellite · Issue #4235 · storj/storj · GitHub
So the actual reason for the suspension should be from well before this email. If you have logs from before that date, please search there.
Checked again: disk space used is now back to normal and the online % is slowly recovering.
So it all looks fine again. For me this is closed for now.
I checked the logs again; there are only a few "ERROR piecestore download failed" entries. I didn't receive any email. The suspension score went back to 100%, and I'm also getting ingress/egress. If I hadn't checked on it yesterday, I wouldn't even have noticed the drop. So I don't know what it was, but I don't think I can do more. It is just strange that it happened right after the update, while the disk was busy.
The update just restarted the node and forced it to do a check-in on the satellites; during the check-in it received updated scores from the satellites.
Makes sense. I will keep an eye on it and check the logs if it drops again. But how are we supposed to notice that there is an error if the score recovers so fast?
Off topic… what does "fixed linter" mean? I keep seeing it in the new release notes.
OK, I restarted my node, and it happened again. I checked the log; no errors, but:
2022-11-18T16:40:12.246Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "SB2EIHJJGR3YNZXZKVMIQXJ5GGMC4ORWKNKYGYFVLMUGAV7USAKA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "GET_REPAIR"}
2022-11-18T16:40:12.246Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "FMVJ77FG5EFP4SSJB7DZUM7P3BRF3ET6PP5YDBIAE7XKKCEDUPLA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:40:12.246Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "AK2VJEDF2ERX4OJ5JVP6PUHMNS6VG5ISDBMSMK6QPMCD7HW4QPEQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:40:12.407Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "IVYWUC7QBUGXNCSNLSFTRSNIQ3YR4TRHF66R5UQI6RFY5QNWOW2A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:40:27.063Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "NYZYV5RR4SG32DHNCDR2KZ4ZALIZVO4V3N6WPYSP72GJWKP6NLSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:40:31.433Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "ATZR5PBZITMYG3KSAAPCEIY4HL4M7W7ATJ7ZSQCWG75AXI5EZEVA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:40:34.269Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "C7ITXAKOUUW4R6WRHFXAJWEM64SZVSPF5O7MCFJGWCC27JKQKACQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:40:40.983Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "NOOJVMM7QAWQNNAQARVZIJ4GL5HLQ5AVDDQI6Q37UO3S6NCA27TQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:40:49.444Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "3IWICSZJGYC7H57JWOCSBEVJCKBQSFAFYDS7JHVNZVUZ7QMZKCFQ", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:40:59.680Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "5INTDKDSTCGPTKG7YAL4UII7GRXWZGLBPUWDGPZWIU6INKIG5PDQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:41:09.961Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "SHJLMZORZJRUJ2OEJ5LSSCCWOGXNP25GOXIEUBHPVS4GX7RCC4CQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:41:44.499Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "AQNADLISRZTMP2HHRZG44P2VCCEEJXHX4TY7FR6JC7NEKIRZXTTQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:41:50.174Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "FWIEX6VR6BWLVS4L6XY44PXZCJP3CROPYEEUXUPQNKQRIUTNLZ5A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:41:54.967Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "HV4ES36O2M6Z4FYBMHS23XHVKL2U4FNJZ3KV2HVYLN6XFAQMPSJA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:42:01.547Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "GGCD42K44GV6T6DO3P5R7CQPMQTCYCVXHJJQSUYYEF24MKJXYOOA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:42:08.799Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "4FAQJT3XL5ZAHGXMEJ555ATB7MLCQXRCNWM2JOGXIC4AKIJLIN6A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:13.595Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "NHS5QCQRO2IDA3D2ZD7HGXE4JLQBC7TEJ5WZX3LGNPVI5VUXKSBQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:42:14.117Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "SXQBZHDRJV7GHHHE54PPG47YZ6FQRDHZG4Z5SSZS6PFD5XMHCK7A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:42:30.194Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "SKPSHIYNIERXXIAU766TAYXEESXKQKM4OOJS4ZLF626XXRYK3K6Q", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:31.150Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "LVYC3RMMFACM635HIGOOSKVPMRCACTLKDW32PCMGB42CXXXE6DNA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:31.535Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "TC6TNOJ2IBRUAKFMBMKVEDT3FT4R44XMAIRXMVOY4IWI4ZZBSUNQ", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:42:36.399Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "QOXD2LSC2UEZ3QCSKI4UVPIXFTG5ZVZOJMPXDG5LFIZOBRJSI6HA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_REPAIR"}
2022-11-18T16:42:37.194Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "O5XAWFIED6WSI2WVMWRY7HQBGTIFYEZMI3NZWHWV5I3KCBF344IA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:44.085Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "UZDTQPAUY74DJ2EJKPL7LZUQ5C2ASW2ZO73LPFPZ7IJGWL7HJWHA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:42:51.516Z INFO piecestore download started {"Process": "storagenode", "Piece ID": "TE5TOC5HSA6SHA4BDEKS2OD3WXHJTGVSOVZMXTLC2JIOURAROU5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:42:56.891Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "IVYWUC7QBUGXNCSNLSFTRSNIQ3YR4TRHF66R5UQI6RFY5QNWOW2A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:56.933Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "SXQBZHDRJV7GHHHE54PPG47YZ6FQRDHZG4Z5SSZS6PFD5XMHCK7A", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
2022-11-18T16:42:56.933Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "TE5TOC5HSA6SHA4BDEKS2OD3WXHJTGVSOVZMXTLC2JIOURAROU5Q", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:42:57.077Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "O5XAWFIED6WSI2WVMWRY7HQBGTIFYEZMI3NZWHWV5I3KCBF344IA", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:57.126Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "FWIEX6VR6BWLVS4L6XY44PXZCJP3CROPYEEUXUPQNKQRIUTNLZ5A", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "Action": "GET_REPAIR"}
2022-11-18T16:42:57.431Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "NYZYV5RR4SG32DHNCDR2KZ4ZALIZVO4V3N6WPYSP72GJWKP6NLSQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "GET_REPAIR"}
2022-11-18T16:42:57.530Z INFO piecestore downloaded {"Process": "storagenode", "Piece ID": "NOOJVMM7QAWQNNAQARVZIJ4GL5HLQ5AVDDQI6Q37UO3S6NCA27TQ", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_REPAIR"}
It looks like after the restart it took 44 seconds to complete the GET_REPAIRs. Could this be the cause of the drop?
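For what it's worth, you can check how long an individual repair download took by pulling both log lines for one Piece ID and comparing the timestamps, e.g. (assuming the docker setup, using a piece from the log above):

```bash
# Show the "download started" and "downloaded" lines for one piece;
# the difference between the two timestamps is how long that transfer took.
docker logs storagenode 2>&1 | grep "IVYWUC7QBUGXNCSNLSFTRSNIQ3YR4TRHF66R5UQI6RFY5QNWOW2A"
```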
I get the same drops on my nodes when an update happens, but I just ignore them, and in time the stats are back to 99.xx%. Don't give it too much attention. I have one node with a 93% online score on one test satellite for a month now; data doesn't come in from the test satellites anymore to keep the score up to date.