Disk usage discrepancy?

Thanks. There are no errors in the log. I tried searching for anything related to filewalker or retain, but there is nothing.

I think there is also a bug with the logging: the filewalker log lines are not displayed. A lot of this should be fixed in the next updates.

Hello @Miklas,
Welcome to the forum!

If you have errors related to any filewalkers and/or databases, it’s likely a problem on your side. You need to fix them.
See

They are displayed, but seemingly not for all modes (lazy=on, lazy=off).

Thank you. Does it mean it is only a local problem affecting the dashboard and the satellites have the correct data for payouts?

Yes. But it’s still a problem.
Please check your logs and try to fix all possible issues.

Please fix YOUR errors first.

In that case it should match, unless you have more pieces than the current Bloom filter size supports.
This should be fixed in upcoming releases.
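For rough intuition (this is generic Bloom filter math with an assumed false-positive target, not Storj’s exact settings): a filter covering $n$ pieces at false-positive rate $p$ needs about

$$ m \approx \frac{-n \ln p}{(\ln 2)^2} \ \text{bits} \qquad (\approx 4.8 \text{ bits per piece at } p = 0.1), $$

so a node holding more pieces than a size-capped filter was dimensioned for sees a higher effective false-positive rate, and garbage collection leaves correspondingly more deleted pieces behind.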

What?! I never saw this, could you please quote from your logs?

Ok. If you want to add my node to the first deployment targets, here’s the nodeID with the issue: 12pGGTdu93C8hsFvw9cBrwhuHYNrNwEmQUUXbcEZSVZukmNhRWD
Thanks.

This is unhelpful, unfortunately, because this information is absent on the satellite side.
You need to check your logs and confirm that ALL filewalkers have finished successfully for every trusted satellite and that you do not have any FATAL errors in your logs (on versions before 1.101.x).
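For example, reusing the same Select-String pattern already used in this thread (adjust the log path to your install; the exact log wording can differ between versions):

# any FATAL entries? there should be none
sls "FATAL" "C:\Program Files\Storj\Storage Node\storagenode.log" | select -Last 10
# any filewalker errors or failures?
sls "filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "error|failed" | select -Last 10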


They have the data they use for payment. We don’t know if it’s correct or not… :wink:

Running sls "gc-filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "started|finished", it seems that over the last few days this has started and completed for 4 different satellite IDs. Also, according to the logs this was previously last done in October of last year, which lines up fairly closely with when my disk usage discrepancies started accruing.

I am seeing a lot of errors when I run sls "retain" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "error|failed" | select -Last 10

C:\Program Files\Storj\Storage Node\storagenode.log:7387014:2024-04-14T11:06:13-05:00 WARN retain failed to delete piece {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Piece ID": "WBFN7S45UUHQVLL4GYR6FMSEP2JYZWSWZCVHKB4YEUYREWR7O7YA", "error": "pieces error: filestore error: file does not exist", "errorVerbose": "pieces error: filestore error: file does not exist\n\tstorj.io/storj/storagenode/blobstore/filestore.(*blobStore).Stat:114\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).pieceSizes:248\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).Trash:293\n\tstorj.io/storj/storagenode/pieces.(*Store).Trash:405\n\tstorj.io/storj/storagenode/retain.(*Service).trash:373\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:341\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:221\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}


I’m getting this as well now.

It’s a warning; you can safely ignore it.


It’s provided by the nodes (they send orders signed by both the client and the node), so if some orders are not submitted, they will not be paid. So yes, the satellites have the data for payouts, and its correctness is cryptographically verified.

Now that it appears the lazy filewalker has successfully run, is there a typical time frame I should expect to have to wait before seeing the difference in disk usage come back into closer alignment? The last lazy run that completed was 3 days ago, and I am still at about a 4 TB difference.

It depends on which one of the five.
See

All five must finish their work for every trusted satellite without errors. In that case your databases should be updated, and the dashboard should show the actual state as the node sees it at the moment of the database update.
Unfortunately, if the filewalkers take days, this info could be outdated too.
Honestly, I think it will always be outdated because of how long they run on most nodes in this thread…
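To get a feel for how long a single run takes on a given node, one option is the same started/finished search quoted earlier in the thread, then comparing the timestamps of the matching pairs per satellite (the exact wording of those lines can vary between versions):

sls "gc-filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | sls "started|finished" | select -Last 8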

Storage comes and goes, but overall I’ve spent the last couple of months getting a few extra TB.

I’m sure this will all sort out when the bloom filters finish. It’s quite easy to see that my older (and thus larger) nodes are affected heavily by this issue, and younger nodes less so.

All data is in TB

I did OS updates yesterday and rebooted. Checking some logs today, I see these errors, which appear to be from around the time the server rebooted and Storj would have been starting back up.


C:\Program Files\Storj\Storage Node\storagenode.log:8233457:2024-04-15T12:32:52-05:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:718\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233464:2024-04-15T12:33:08-05:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "error": "context canceled"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233465:2024-04-15T12:33:08-05:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:718\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233467:2024-04-15T12:33:08-05:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "error": "context canceled"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233469:2024-04-15T12:33:08-05:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:718\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233471:2024-04-15T12:33:08-05:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "error": "context canceled"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233473:2024-04-15T12:33:08-05:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:718\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233479:2024-04-15T12:33:08-05:00 ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {"satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "error": "context canceled"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233481:2024-04-15T12:33:08-05:00 ERROR pieces failed to lazywalk space used by satellite {"error": "lazyfilewalker: context canceled", "errorVerbose": "lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:71\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:105\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:718\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
C:\Program Files\Storj\Storage Node\storagenode.log:8233482:2024-04-15T12:33:08-05:00 ERROR piecestore:cache error getting current used space: {"error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:69\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:74\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:727\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
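Assuming the node simply restarted the walkers after it came back up, listing the most recent used-space-filewalker lines (same Select-String approach as earlier; the exact completion wording may vary by version) should show whether they eventually ran to completion after the 12:33 cancellations:

sls "used-space-filewalker" "C:\Program Files\Storj\Storage Node\storagenode.log" | select -Last 20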