Node suspended from time to time

Hello guys,

For some reason, one of my nodes is getting suspended periodically:

Sometimes this becomes red:
[screenshot]

I searched the log for GET_AUDIT and failed:

PS C:\Users\Storj D1> docker logs storagenodeD1.1 2>&1 | sls GET_AUDIT | sls failed

```
2023-06-13T10:49:37.905Z ERROR piecestore download failed {"process": "storagenode", "Piece ID": "H26XHWW5Z6XKWI6VRCLKWE36YZK7M3A3KJSEM5YENPU53GQRNBOA", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Action": "GET_AUDIT", "Remote Address": "172.17.0.1:40942", "error": "trust: rpc: tcp connector failed: rpc: dial tcp 34.94.153.46:7777: operation was canceled", "errorVerbose": "trust: rpc: tcp connector failed: rpc: dial tcp 34.94.153.46:7777: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2023-06-13T19:11:16.973Z ERROR piecestore download failed {"process": "storagenode", "Piece ID": "PUUTCBD4D2LZBD2HC4UHPTPPF4JENMZZ3N4WWCXSA56BCXEA5RUA", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Action": "GET_AUDIT", "Remote Address": "172.17.0.1:59024", "error": "trust: rpc: tcp connector failed: rpc: context canceled", "errorVerbose": "trust: rpc: tcp connector failed: rpc: context canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
```

Is this a problem with the node, Docker, or Windows? What should I do?

Regards,
Alexander

These errors are normal, and your audit score is still 100%, so you didn't fail any audits.

The decreased suspension score is probably due to your low online score and to crashes or improperly shut-down nodes. There is already a thread about this here; if you look into why your nodes have a lower online score and manage to keep them up, it will recover quickly.

But the other 6 nodes on the same machine work fine.

Try searching the log file for "FATAL". Either there is some connection issue or the node crashes.
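For example, a quick check could look like this (a sketch, reusing the container name from your earlier command; sls is the PowerShell alias for Select-String):

```powershell
# Search the container log for FATAL entries (sls = Select-String)
docker logs storagenodeD1.1 2>&1 | sls FATAL
```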

Looks like there are no FATALs:
[screenshot]

I'm not sure right now, but I think the Docker log files are also gone after every crash. How far back in time do your logs go? Did you restart your node, or was there an issue, given the 0m uptime?

Yep, that's what's strange. I haven't restarted the node so far.

Here are the last 50 lines of logs; it looks like there are some problems:

```
PS C:\Users\Storj D1> docker logs --tail 50 storagenodeD1.1
2023-06-13T20:12:53.053Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.053Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.054Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.054Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.054Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.054Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.054Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.054Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.056Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.056Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.056Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.056Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.056Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.056Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.057Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.057Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.057Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.057Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.058Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.058Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.058Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.058Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.058Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.059Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.059Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.059Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.060Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.060Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.060Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.060Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.060Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.060Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.061Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.061Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.061Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.061Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.061Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.061Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.062Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.062Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KPMR54YFXBJ3UOSULEJXYUNPBTUGRXQ2WXCWZEA52R63DN4QVVJA”}
2023-06-13T20:12:53.062Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.062Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “7RVTRP4DOCALNQRWCOUURFCCJWHHMPN7B2UN6PJUPEXP7475OS4A”}
2023-06-13T20:12:53.063Z WARN collector file does not exist {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.063Z INFO collector deleted expired piece info from DB {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Piece ID”: “KBVTXPGNF6MB6FRHSJT7CWXLUN47ZL4UVHOVL2PYSUXGFKHKSR4A”}
2023-06-13T20:12:53.170Z INFO piecedeleter delete piece sent to trash {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Piece ID”: “PMYK5HESTVCQ2PM6B6BG7XA7CGT4B7YV5IWZNGMAY5LVPKHDH6KQ”}
2023-06-13T20:12:53.277Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “attempts”: 1, “error”: “ping satellite: check-in ratelimit: node rate limited by id”, “errorVerbose”: “ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-06-13T20:12:53.308Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite: check-in ratelimit: node rate limited by id”, “errorVerbose”: “ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-06-13T20:12:53.427Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 1, “error”: “ping satellite: check-in ratelimit: node rate limited by id”, “errorVerbose”: “ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2023-06-13T20:12:53.440Z INFO piecedeleter delete piece sent to trash {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Piece ID”: “JASVX6GZYPGG46QWSFZV7IW4W6O2DEJE2S7JGFUC4AQPG2YWFCFQ”}
2023-06-13T20:12:53.582Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”}
```

Yes, there is something wrong, but that is not the cause, only the consequence. Something must have gone wrong before that. Ideally, do you have the last lines from before the restart (before "INFO Configuration loaded")?

Do you mean I should restart and immediately check the log, or what?

No, you said you did not restart, but your screenshot shows that there was a restart. So if it says 30 minutes online, for example, look at the log file at exactly that timeframe, probably around 2023-06-13T19:50:00.000Z.

Hmm, where is that file located?

I think you should just make sure your node is running, nothing more than that. It looks like a freshly restarted node cleaning up the mess left behind after some downtime. So you should be looking for the reason the node is sometimes offline: are you taking it down on purpose, or is the process getting killed, or something else?

It was offline for a few days during the last 2 weeks, but for about a week now it has been online all the time.

Try:

```
docker logs --since 50m --until 45m storagenodeD1.1
```

or whatever online time you have now, plus 1 or 2 minutes.

Your screenshot says otherwise, as it shows 0 min running.

So for now it's probably best to leave the node alone, and only act if there are still real errors (not "context canceled") in the log in an hour or so.
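For example, something along these lines would surface only the relevant errors (a sketch, reusing your container name; the usually harmless "context canceled" lines are filtered out):

```powershell
# Show ERROR lines but drop the usually harmless "context canceled" ones
docker logs storagenodeD1.1 2>&1 | sls ERROR | sls -NotMatch "context canceled"
```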

But the fact that it has been offline for so long is just the reason for the suspension.

In my experience, the most disruptive factor for a well-running node is often its operator. Besides, better is the enemy of good enough.

I rebooted the system just now, and during boot it showed this message:
[screenshot]

This is the drive of the node that gives the errors, and it says checking is not required.

But a standard disk check said there are no errors.

The suspension score falls because of an unknown (unexpected) error on an audit request, not because the online score is falling, @revyte.
They are different issues. The online score falls if there is no response to the audit request at all, but the suspension score falls when the node does respond, but with an unexpected result, like the errors quoted above.

This means the node was unable to finish transferring the small amount of data requested for the audit, and the cancellation happened on the node's side. So either the network is unstable on your end, or you have issues with the disk. I would recommend checking the disk and fixing any errors while the node is stopped.
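For example, something along these lines (a sketch, assuming the node's data is on drive D: on the Windows host; adjust the drive letter and container name to your setup, and run it from an elevated prompt):

```powershell
# Stop the node gracefully, allowing up to 300 seconds for a clean shutdown
docker stop -t 300 storagenodeD1.1

# Check the data drive and fix filesystem errors (the drive must not be in use)
chkdsk D: /f

# Start the node again once the check comes back clean
docker start storagenodeD1.1
```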

Also try upgrading WSL and Docker Desktop; maybe that would improve things. Or maybe there are pending updates waiting for a reboot; in that case, apply them and reboot.
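For WSL, something like this from an elevated PowerShell prompt should be enough (a sketch, assuming a Windows build where the wsl --update command is available; Docker Desktop itself updates through its own UI or installer):

```powershell
# Update the WSL kernel, then restart the WSL VM so the update takes effect
wsl --update
wsl --shutdown
```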

If you got that message, then the filesystem was not in a clean state after the reboot. Let it finish the check, and after that check the logs for errors again.

I didn't mean the offline time per se, but that it can be an indication of crashes, where transfers get interrupted and an unexpected response is the result. As he said, he didn't restart the node, so it was probably an unexpected shutdown.


Thanks, the problem has been fixed by running the disk check.

Regards,
Alexander
