Graceful exit status

How is it possible to check the graceful exit status of a node?

I already ran the exit-status command, but I can't tell whether the graceful exit has actually started or how long it will take to complete. This is the status after about 2 days:

After the new procedure for Graceful Exit was implemented, your node will not transfer pieces to other nodes anymore; it just needs to stay online for the next 30 days and keep the online score above 80%.

So now the status is either 0%, 100%, or disqualified (if the online score falls below 80% during GE).

Just check the status after 30 days.

It's been several days, and the graceful exit for my node still shows an exit-status progress of zero percent.

I'm seeing heavy disk activity on E: in the system monitor, but there are no verbose logs showing what the graceful exit routine is actually doing, so I'm assuming it's broken.

If Graceful Exit is too inconvenient and eats too much electricity, with electric rates soaring past 14 cents per kWh in my area, then I would rather not wait any longer.

Is several days of no progress normal? Should I keep waiting, or should I just format E: and call it a day?

Progress of 0% is normal for the entire month; it only turns to 100% at the very end. There are no percentage updates during that time.

I'm sure your electrical cost for that HDD would be less than 85 cents for the entire period.
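Back-of-the-envelope, assuming the drive draws roughly 8 W (a typical figure for a single 3.5-inch HDD): 8 W × 24 h × 30 days ≈ 5.8 kWh, and 5.8 kWh × $0.14/kWh ≈ $0.81 for the whole exit period.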

My 2 cents,
Julio

GL


You just need to keep your node online for the 30 days after you call the GE. Then it will be either 100% (success) or disqualified (if the node's online score dropped below 80% during the process), see

I noticed that the saltlake satellite simply disappeared from the exit-status output after a couple of weeks, with no status shown at all. Is this expected?

docker exec -it storagenode3 /app/config/bin/storagenode exit-status --config-dir /app/config --identity-dir /app/identity
2024-10-28T13:18:44Z INFO Configuration loaded {“Process”: “storagenode”, “Location”: “/app/config/config.yaml”}
2024-10-28T13:18:44Z INFO Anonymized tracing enabled {“Process”: “storagenode”}
2024-10-28T13:18:44Z INFO Identity loaded. {“Process”: “storagenode”, “Node ID”: “12v3RnX1NNFq59voqEgQ5G9DhgDpoCEgmkLLL8aGZWWwVhnqY9E”}

Domain Name Node ID Percent Complete Successful Completion Receipt
saltlake.tardigrade.io:7777 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE 0.00% N N/A
ap1.storj.io:7777 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 0.00% N N/A
us1.storj.io:7777 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S 0.00% N N/A
eu1.storj.io:7777 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs 0.00% N N/A

Now:

docker exec -it storagenode3 /app/config/bin/storagenode exit-status --config-dir /app/config --identity-dir /app/identity
2024-11-11T20:46:07Z INFO Configuration loaded {“Process”: “storagenode”, “Location”: “/app/config/config.yaml”}
2024-11-11T20:46:07Z INFO Anonymized tracing enabled {“Process”: “storagenode”}
2024-11-11T20:46:07Z INFO Identity loaded. {“Process”: “storagenode”, “Node ID”: “12v3RnX1NNFq59voqEgQ5G9DhgDpoCEgmkLLL8aGZWWwVhnqY9E”}

Domain Name Node ID Percent Complete Successful Completion Receipt
ap1.storj.io:7777 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 0.00% N N/A
us1.storj.io:7777 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S 0.00% N N/A
eu1.storj.io:7777 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs 0.00% N N/A

It could be, if you deleted the databases.
The fix is simple - run it again.

I did not delete anything. The same node has been running since I began the Graceful Exit several weeks ago. I thought that maybe, because the status percentage is not in use, it just drops the satellite from the response.

Ok, I cannot reproduce it (because I do not want to GE my prod nodes), but I shared your observation with the team.

By the way, did you try to run it again? Did it help?

Not sure what you mean by ‘run it again’.
I started the process against one node with the four satellites.
I assume the other three satellites are still in the process of GE; I don't want to interrupt them if it means starting all over. I am not sure how I can see whether the process is active, or what the progress is for each satellite on this one node.

If the node is missing the GE status for a particular satellite in its databases, you can try to call the GE one more time for that satellite (they are independent of each other). This would force a status update from that satellite.
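For reference, 'calling the GE' again just means re-running the exit-satellite command that started it; a sketch, assuming the same container name and binary path as the exit-status calls above (the command asks interactively which satellite domain names to exit):

docker exec -it storagenode3 /app/config/bin/storagenode exit-satellite --config-dir /app/config --identity-dir /app/identity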


It seems that my node did complete a successful GE, but the logs look worrying. Is this a problem? Current version: v1.116.7.

docker exec -it storagenode3 /app/config/bin/storagenode exit-status --config-dir /app/config --identity-dir /app/identity
2024-11-23T14:12:45Z INFO Configuration loaded {“Process”: “storagenode”, “Location”: “/app/config/config.yaml”}
2024-11-23T14:12:45Z INFO Anonymized tracing enabled {“Process”: “storagenode”}
2024-11-23T14:12:45Z INFO Identity loaded. {“Process”: “storagenode”, “Node ID”: “12v3RnX1NNFq59voqEgQ5G9DhgDpoCEgmkLLL8aGZWWwVhnqY9E”}

Domain Name Node ID Percent Complete Successful Completion Receipt
ap1.storj.io:7777 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6 100.00% Y 0a473045022100d25cbe8fac7423dd819402832ecf28b595db644b90c1071935074bc002dc4af30220740c9b5a2ef997e92a80faa3e611e8b76da600fcebb0e8319775e5e4629e5ced122084a74c2cd43c5ba76535e1f42f5df7c287ed68d33522782f4afabfdb400000001a20fc2254bc036b436ae889859ae3219352aad45f04b2c2ed450d09e08000000000220c0890bc83ba0610d79deea803
us1.storj.io:7777 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S 100.00% Y 0a4830460221008187e4825e861e1634bde319323708954259e48482480bcc9e343ef6a5ed1c0e0221008470e26e3ef33a70b96a2e20313d76e0b70475f7e678e6759a0c93d200b668811220a28b4f04e10bae85d67f4c6cb82bf8d4c0f0f47a8ea72627524deb6ec00000001a20fc2254bc036b436ae889859ae3219352aad45f04b2c2ed450d09e08000000000220c0890bc83ba0610f294bdec01
eu1.storj.io:7777 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs 100.00% Y 0a46304402200973764f06730ef3f3cdef7befe3d89e3c1082d7ba92b79ef94239546ee85d8502202ac28d6030db7e0f618415a8109aaf0c3a205303671879e8eae56d80610612851220af2c42003efc826ab4361f73f9d890942146fe0ebe806786f8e71908000000001a20fc2254bc036b436ae889859ae3219352aad45f04b2c2ed450d09e08000000000220c0890bc83ba0610ed9394b002
saltlake.tardigrade.io:7777 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE 100.00% Y 0a47304502207dbab2066befa81e398992f0238be96b8795d5ad521e9f546e069cf79f86fcaa022100eb555b4934f992049d76694dabfdb7d249f4066584e47e9deb376c7c6bb15d7912207b2de9d72c2e935f1918c058caaf8ed00f0581639008707317ff1bd0000000001a20fc2254bc036b436ae889859ae3219352aad45f04b2c2ed450d09e08000000000220c0890bc83ba0610e98cc4a202

docker logs --tail 25 storagenode3

2024-11-23T14:12:01Z INFO lazyfilewalker.used-space-filewalker subprocess exited with status {“Process”: “storagenode”, “satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “status”: 1, “error”: “exit status 1”}

2024-11-23T14:12:01Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Lazy File Walker”: true, “error”: “lazyfilewalker: exit status 1”, “errorVerbose”: “lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:742\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:05Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “Lazy File Walker”: false, “error”: “filewalker: open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/bt: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/bt: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:05Z ERROR piecestore:cache encountered error while computing space used by satellite {“Process”: “storagenode”, “error”: “filewalker: open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/bt: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa/bt: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “SatelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}

2024-11-23T14:12:05Z INFO pieces used-space-filewalker started {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”}

2024-11-23T14:12:05Z INFO lazyfilewalker.used-space-filewalker starting subprocess {“Process”: “storagenode”, “satelliteID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”}

2024-11-23T14:12:05Z INFO lazyfilewalker.used-space-filewalker subprocess started {“Process”: “storagenode”, “satelliteID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”}

2024-11-23T14:12:05Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {“Process”: “storagenode”, “satelliteID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Process”: “storagenode”}

2024-11-23T14:12:10Z INFO lazyfilewalker.used-space-filewalker subprocess exited with status {“Process”: “storagenode”, “satelliteID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “status”: 1, “error”: “exit status 1”}

2024-11-23T14:12:10Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Lazy File Walker”: true, “error”: “lazyfilewalker: exit status 1”, “errorVerbose”: “lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:742\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:14Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “Lazy File Walker”: false, “error”: “filewalker: open config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/yz: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/yz: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:14Z ERROR piecestore:cache encountered error while computing space used by satellite {“Process”: “storagenode”, “error”: “filewalker: open config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/yz: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/6r2fgwqz3manwt4aogq343bfkh2n5vvg4ohqqgggrrunaaaaaaaa/yz: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “SatelliteID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”}

2024-11-23T14:12:14Z INFO pieces used-space-filewalker started {“Process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}

2024-11-23T14:12:14Z INFO lazyfilewalker.used-space-filewalker starting subprocess {“Process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}

2024-11-23T14:12:14Z INFO lazyfilewalker.used-space-filewalker subprocess started {“Process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}

2024-11-23T14:12:14Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {“Process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Process”: “storagenode”}

2024-11-23T14:12:17Z INFO lazyfilewalker.used-space-filewalker subprocess exited with status {“Process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “status”: 1, “error”: “exit status 1”}

2024-11-23T14:12:17Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Lazy File Walker”: true, “error”: “lazyfilewalker: exit status 1”, “errorVerbose”: “lazyfilewalker: exit status 1\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:134\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:742\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:21Z ERROR pieces used-space-filewalker failed {“Process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “Lazy File Walker”: false, “error”: “filewalker: open config/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/xy: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/xy: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

2024-11-23T14:12:21Z ERROR piecestore:cache encountered error while computing space used by satellite {“Process”: “storagenode”, “error”: “filewalker: open config/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/xy: input/output error”, “errorVerbose”: “filewalker: open config/storage/blobs/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/xy: input/output error\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:78\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatelliteWithWalkFunc:129\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:83\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkAndComputeSpaceUsedBySatellite:751\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “SatelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}

2024-11-23T14:12:31Z ERROR db failed to stat blob in trash {“Process”: “storagenode”, “namespace”: “ey3p1ywuk18ZGMBYyq+O0A8FgWOQCHBzF/8b0AAAAAA=”, “key”: “VVoZysw2WV40oxysxVFO6+12kegYp2i50fvtnZz+6OI=”, “error”: “unrecoverable error accessing data on the storage file system (path=config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/kv/nbtswmgzmv4nfddswmkuko5pwxnepidctwroor7pwz3hh65dra.sj1; error=lstat config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/kv/nbtswmgzmv4nfddswmkuko5pwxnepidctwroor7pwz3hh65dra.sj1: input/output error). This is most likely due to disk bad sectors or a corrupted file system. Check your disk for bad sectors and integrity”}

2024-11-23T14:12:57Z ERROR db failed to stat blob in trash {“Process”: “storagenode”, “namespace”: “ey3p1ywuk18ZGMBYyq+O0A8FgWOQCHBzF/8b0AAAAAA=”, “key”: “WCl+Thc48/n/dCZXbUoZl/kciABLO+Ult1JxD7OjNdc=”, “error”: “unrecoverable error accessing data on the storage file system (path=config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/la/ux4tqxhdz7t73uezlw2sqzs74rzcaajm56kjnxkjyq7m5dgxlq.sj1; error=lstat config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/la/ux4tqxhdz7t73uezlw2sqzs74rzcaajm56kjnxkjyq7m5dgxlq.sj1: input/output error). This is most likely due to disk bad sectors or a corrupted file system. Check your disk for bad sectors and integrity”}

2024-11-23T14:13:05Z WARN console:service unable to get Satellite URL {“Process”: “storagenode”, “Satellite ID”: “12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB”, “error”: “console: trust: satellite is untrusted”, “errorVerbose”: “console: trust: satellite is untrusted\n\tstorj.io/storj/storagenode/trust.init:29\n\truntime.doInit1:7176\n\truntime.doInit:7143\n\truntime.main:253”}

2024-11-23T14:13:05Z WARN console:service unable to get Satellite URL {“Process”: “storagenode”, “Satellite ID”: “12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo”, “error”: “console: trust: satellite is untrusted”, “errorVerbose”: “console: trust: satellite is untrusted\n\tstorj.io/storj/storagenode/trust.init:29\n\truntime.doInit1:7176\n\truntime.doInit:7143\n\truntime.main:253”}

2024-11-23T14:13:18Z ERROR db failed to stat blob in trash {“Process”: “storagenode”, “namespace”: “ey3p1ywuk18ZGMBYyq+O0A8FgWOQCHBzF/8b0AAAAAA=”, “key”: “WJGd9YIFR0KZE+CUpQ0oDJ3cSSBFUWEVUYL4BEri/aQ=”, “error”: "unrecoverable error accessing data on the storage file system (path=config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/lc/iz35mcavdufgit4ckkkdjibso5ysjaiviwcfkrql4aisxc7wsa.sj1; error=lstat config/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-08-29/lc/iz35mcavdufgit4ckkkdjibso5ysjaiviwcfkrql4aisxc7wsa.sj1: input/output error). This is most likely due to disk bad sectors or a corrupted file system. C

The dashboard shows online, but the last contact time does not refresh to the current time. Is this due to how GE behaves?

This shouldn't be related to GE; it's a usual problem when the disk cannot keep up and the lazy filewalker fails to get even a small amount of IOPS, so it likely hit a timeout somewhere earlier.
The solution for that is known: disable the lazy mode and enable the badger cache, save the config, and restart the node.
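A sketch of what that usually means in config.yaml (option names as used in recent storagenode releases; please double-check them against your own config before restarting):

pieces.enable-lazy-filewalker: false
pieces.file-stat-cache: badger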

However, since you exited from all satellites, it's not needed in your case. You can delete this identity and its data from the disk. I would only suggest keeping your identity for a while, in case you decide to switch to zkSync Era to receive a final payout, should it fall below the Minimum Payout Threshold on L1 (Ethereum) next month.
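If you did decide to switch, the opt-in is normally done through the wallet-features option; the exact value shown here is an assumption, so check the current payout documentation before relying on it:

--operator.wallet-features=zksync-era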

Yes, then the online score will slowly drop to zero.
