Disk usage discrepancy?

I can confirm that I am still seeing a big difference between the average used space and the actual used space.
However, I need to wait until all issues with the deleting and used-space filewalkers have been sorted out, so that I have actual and reliable figures on the dashboard again.
But going by the current figures, about 30% of the stored data would still have to be trashed for the used space to land in the area of the average used space.
So I am not convinced yet that this game is already over.

No. I did think the issue was the filewalker not completing, but I've watched it carefully twice now and it completed fine. I started watching it a couple of months ago.

I run 3 nodes of similar size, and the other two show only a few hundred GB of difference, which I can live with. One was out by over a TB, but it fixed itself up a few weeks back. This one is going the other way.

Just check if the dbs are OK and not corrupted.
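If it helps, SQLite's built-in integrity check will tell you; a minimal sketch, assuming the node is stopped first and the databases live in the usual storage directory (adjust the path to your setup):

# Run the integrity check on each database file; each should print "ok".
for db in /mnt/storj/storagenode/storage/*.db; do
  echo "$db:"
  sqlite3 "$db" "PRAGMA integrity_check;"
done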

I don’t have any DB errors or any other errors in the logs, and the databases run on a separate SSD which doesn’t show any disk errors. But OK, I'll run an integrity check.

Please also check that the garbage collector filewalker and the retain process have finished their work for each trusted satellite:
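For example, something like this (a sketch; the log path is an assumption, and with Docker you would pipe docker logs storagenode 2>&1 instead):

# Latest start/finish lines of the GC filewalker and the retain process.
grep -E "gc-filewalker|retain" /mnt/storj/node.log | grep -E "started|finished" | tail -n 20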

Please also check that you have deleted the data of the shut-down satellites:
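A sketch of how that can be done with the built-in forget-satellite command (available in recent versions; the directories here are assumptions, so point them at your own config and identity):

# Remove data of all untrusted (shut down) satellites.
storagenode forget-satellite --all-untrusted \
  --config-dir /mnt/storj/storagenode \
  --identity-dir /mnt/storj/identity/storagenode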

OK, I see this problem has been outstanding for over a year, so it's a known issue with no definite fix.

The fix is known: all filewalkers need to complete successfully. If they do not, you need to fix your setup, unfortunately.

So it has just been a filewalker issue all this time?

I do not know; you haven't provided any logs so far.

Fair enough. I've pasted the logs below; the filewalker process is still running, burning disk and CPU. No other errors.

2024-05-23T10:11:19Z ERROR pieces failed to lazywalk space used by satellite {“Process”: “storagenode”, “error”: “lazyfilewalker: signal: killed”, “errorVerbose”: “lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2024-05-23T10:11:19Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“Process”: “storagenode”, “satelliteID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “error”: “context canceled”}
2024-05-23T10:11:19Z ERROR pieces failed to lazywalk space used by satellite {“Process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2024-05-23T10:11:19Z ERROR lazyfilewalker.used-space-filewalker failed to start subprocess {“Process”: “storagenode”, “satelliteID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “error”: “context canceled”}
2024-05-23T10:11:19Z ERROR pieces failed to lazywalk space used by satellite {“Process”: “storagenode”, “error”: “lazyfilewalker: context canceled”, “errorVerbose”: “lazyfilewalker: context canceled\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:73\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkAndComputeSpaceUsedBySatellite:130\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:704\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2024-05-23T10:11:19Z ERROR piecestore:cache error getting current used space: {“Process”: “storagenode”, “error”: “filewalker: context canceled; filewalker: context canceled; filewalker: context canceled”, “errorVerbose”: “group:\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n— filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:713\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78”}

You have slow disks. Have you checked whether they are SMR or CMR?
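One way to find out, as a sketch (the device name is an assumption, and SMART does not report SMR directly, so you still have to look the model number up in the manufacturer's spec sheet):

# Print the drive model, then check the vendor's datasheet for SMR vs. CMR.
smartctl -i /dev/sda | grep -i "model"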

All of these errors appear AFTER your node was killed. Please search the log before this killing message to find the culprit.
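For example (log path assumed):

# Show the 20 lines preceding each "signal: killed" message.
grep -B 20 "signal: killed" /mnt/storj/node.log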

Apart from the normal ERROR piecestore upload or download entries, there are no other error messages.

There are a few WARN messages about the old satellite ID “118UWpMCHzs6CvSgWd9BfFVjw5K9pZbJjkfZJexMtSkmKxvvAW”, which I’ve never been able to get rid of.

One WARN about the node score.

And the other filewalker messages which you have seen.

That satellite was shut down several years ago; you may ignore it.

I haven’t. Please provide the latest “finished” lines from your logs for each filewalker and each trusted satellite.
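Something along these lines should do it (a sketch; the log path is an assumption, and the names cover the usual lazy filewalkers):

# Latest "finished" lines per filewalker; each line carries the satellite ID.
grep -E "used-space-filewalker|gc-filewalker|trash-cleanup-filewalker" /mnt/storj/node.log | grep "finished" | tail -n 30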

The free space on the hard drive is lower than what the dashboard shows. Do I have to worry?

See also: The free space of the hard drive is lower than that of the dashboard.

I’ve looked at a few things:

The file system is NTFS, the cluster size is 4096, the Blobs folder has four folders, and the hard drive is 1% fragmented. I still have to check the log; I will do that this afternoon or tomorrow.

I have an i9-10850K processor, 64 GB of RAM, a 300 Mb/s symmetrical internet connection, and Windows 11 Professional. The computer is used for a Storj node and other things.

The hard drive has been at 100% usage for 74 hours and still is. 74 hours ago I stopped the node and changed its allocated size; my other nodes had filled up and I wanted this node to start filling up.

In the log I found this:

2024-06-14T00:38:47+02:00 WARN piecestore:monitor Disk space is less than requested. Allocated space is {“bytes”: 4643038065707}

I moved the databases to an SSD.

You should add the following line to your config.yaml:

storage2.piece-scan-on-startup: true

then restart the node, let it run for about 3-4 days, and then check the dashboard again.
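After the restart you can confirm that the startup scan actually finished for every satellite; a minimal sketch assuming the log goes to a file (on a Windows GUI install it is typically storagenode.log in the installation folder, where PowerShell's Select-String can stand in for grep):

# Confirm the startup used-space scan started and finished per satellite.
grep "used-space-filewalker" /mnt/storj/node.log | grep -E "started|finished" | tail -n 20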

Hi, I have a problem with Storj. My hard disk is full. Storj uses 17 TB there, but the node dashboard shows that it uses only 9 TB, 4 TB are free, and 68 GB are in the trash. There seems to be data on the disk which is no longer accounted for by the node but still uses disk space. Is there a way to clean it up correctly?
In the past I had this problem: https://forum.storj.io/t/error-migrating-tables-for-database-on-storagenode-migrate-v57/26205. I have deleted bandwidth.db, including the file(s) with .db-shm and .db-wal extensions.