Can I see the filewalker process in htop?

Hi, can I see the filewalker process in atop, htop, or bpytop? I can see my HDD activity, but in the *top tools I only see the node process.

Just run:

ps aux | grep used-

Here is the output:
pi@raspberrypinode:~ $ ps aux | grep used-
pi 1661848 0.0 0.0 6608 2224 pts/2 S+ 21:25 0:00 grep --color=auto used-

That means no filewalker is running right now.

Ah, OK. Do you have a special config for your node that you could post? Or a special config for your filewalker? I have 2x 10 TB HDDs in a USB 3 case.

Nothing special is needed unless you have errors in the logs related to walk and retain.

No, I don’t have those problems. I have 4 nodes; only one of them has a very high read/write load, and I don’t know why.

Perhaps it has a different disk type/filesystem or vendor, or simply has more usage :slight_smile:

The filesystem is ext4 and the HDDs are the same; the 16 TB one has 4 TB left free.

The vendor/type? At least, is it CMR or SMR?

All HDDs are CMR, and the PC is a Raspberry Pi 5.

Sorry, I asked about the vendor of the HDD: is it the same as the other three? And is the 16 TB drive the same model as the others?

The vendor is Seagate, and the HDDs are 10, 14, 14, and 16 TB.

So only the bigger drive has this issue?
Then I would assume that it has a greater latency/seek time than the three others. It also has free space, so it likely receives more data, and all of this causes the lazy filewalker to run longer.
You may compare the bandwidth usage for each node to confirm the assumption that it has a greater load from the network.
You may also search for the gc-filewalker process.
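The process search suggested above can be sketched as a one-liner. The `[f]` character class is a common trick to keep the grep command itself out of the results; the exact process-name substring is an assumption based on the filewalker names mentioned in this thread:

```shell
# Look for any running filewalker subprocess (gc- or used-space-).
# The [f] in the pattern stops grep from matching its own command line.
ps aux | grep "[f]ilewalker"
```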

You’re right, that’s exactly how it is: only the larger one has the problem, and it also has half of its space free. How can I find this process? In the log?

By using the same command, but for any filewalker, not only used-space-filewalker (we have at least three: used-space-filewalker, gc-filewalker, and retain):

I would even suggest to use

ps aux | grep -E "walker|retain"

Though I’m not sure about retain; it might not be a separate process.
It would be better to check your logs instead (info level is mandatory):

docker logs storagenode 2>&1 | grep -E "walker|retain" | tail
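To see which filewalker shows up most often, the log filter above can be extended with a per-name count. This is just a sketch; it assumes the names used-space-filewalker, gc-filewalker, and retain appear literally in the log lines:

```shell
# Count matching log lines per filewalker name.
docker logs storagenode 2>&1 \
  | grep -oE "used-space-filewalker|gc-filewalker|retain" \
  | sort | uniq -c
```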