Filewalker problems

2024-05-01T16:52:02Z ERROR piecestore:cache error getting current used space: {"Process": "storagenode", "error": "filewalker: context canceled; filewalker: context canceled; filewalker: context canceled", "errorVerbose": "group:\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:747\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:747\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n--- filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkAndComputeSpaceUsedBySatellite:79\n\tstorj.io/storj/storagenode/pieces.(*Store).SpaceUsedTotalAndBySatellite:747\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:58\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

This is the log. I don't know what the problem is.

Is this the lazy filewalker?


Yes, in htop I can see the filewalker running at about 25 Mbit/s.

The "context canceled" error means that your disk is too slow to respond. I would also suggest searching for FATAL errors in your logs, because it could be a consequence of a killed node.
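
If the node runs in Docker, a quick way to check is to grep the container logs. This is only a sketch: the container name "storagenode" and the "lazyfilewalker" log prefix are assumptions based on a default setup, so adjust them to your own (or grep your log file instead):

    docker logs storagenode 2>&1 | grep FATAL
    # if lazy mode is enabled, its runs are normally logged with a lazyfilewalker prefix:
    docker logs storagenode 2>&1 | grep lazyfilewalker | tail -n 20

If the second command returns entries, it is the lazy filewalker doing the scan.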

I don't have any FATAL errors; this is the only one. I should mention that I had to delete the old databases; they were damaged on the old HDD.

You likely do not need to do that. But maybe this could help, if you use NTFS:

However, it will not help if you use NTFS under Linux (you would need to migrate to ext4 ASAP).
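
If you want to double-check which filesystem a data drive actually uses, something like this works on most Linux systems (the mount point /mnt/storagenode is only an example path, replace it with your data directory):

    lsblk -f
    df -T /mnt/storagenode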

My HDDs are all formatted as ext4.

That's interesting. How many nodes and how much RAM? Do you use one disk for several nodes, or any kind of RAID under the hood? How is the disk connected to the host running the storagenode?

I have 2 HDDs for 2 nodes with 8 GB RAM. The nodes use 1.2 GB of RAM together.

I have a Pi 5 with 8 GB RAM, 2 nodes and 2 HDDs; each node uses one 10 TB HDD. The system uses 1.1 GB of RAM and has a load average of about 2.

So, both drives are connected via USB 2.0/USB 3.0?
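
You can also check how the enclosure is driven on the host; lsusb comes with the usbutils package, and the exact tree output varies by kernel:

    lsusb -t
    # Driver=uas means UASP is active for that bridge,
    # Driver=usb-storage means the slower BOT fallback is in use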

USB 3.0, a 4-bay ICY BOX enclosure with UASP.

You may try to disable the lazy mode.
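
A minimal sketch of how to do that, assuming a recent storagenode version where the option is called pieces.enable-lazy-filewalker (verify the exact name against your own config.yaml):

    # add this line to config.yaml and restart the node:
    pieces.enable-lazy-filewalker: false

    # or, for Docker, append it as an extra argument at the end of your docker run command:
    --pieces.enable-lazy-filewalker=false

With lazy mode off, the filewalker runs at normal IO priority, which usually finishes faster but puts more load on the disk while it runs.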