Trash does not go away in 7 days

I think the trash not being updated is related to the new lazy filewalker for trash emptying. I reverted a node back to non-lazy, and after emptying the trash, the trash size went down.

1 Like

@Ambifacient: I’m assuming you restarted your node to make this setting take effect? Do you have piece-scan-on-startup enabled or disabled?

Yes, I just changed the config and restarted. I have piece-scan-on-startup disabled.

1 Like

Thanks, I’m now using the following two settings:
pieces.enable-lazy-filewalker: false
storage2.piece-scan-on-startup: false

Now I just need to wait about a week for the node to delete some trash and see if it correctly auto-updates the space used for trash.
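
For anyone running in docker: as far as I know, the same options can also be passed as flags after the image name instead of editing config.yaml (the other run options are omitted here as a placeholder):

docker run -d --name storagenode <your other options> storjlabs/storagenode:latest \
  --pieces.enable-lazy-filewalker=false \
  --storage2.piece-scan-on-startup=false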

Noob question… what’s the linux command for checking trash dir size, to compare it with the dashboard?
My path is: /volume1/Storj/storage/trash

du --si -s /volume1/Storj/storage/trash
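
If you also want the per-satellite breakdown inside the trash folder, this should work (assuming GNU du, which supports limiting the depth):

du --si -d 1 /volume1/Storj/storage/trash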
3 Likes

Yes! That is the plan, anyway. If you have a failure mode other than “nested date directories”, “2-character directories without a date directory”, or “date directories aren’t being cleaned up”, then maybe we don’t know about it and it might not be corrected automatically. Those are the ones that come to mind immediately, anyway.

I can see why it would seem that way. Be assured that the branding work did not slow down storagenode progress at all. These are separate teams.

We’re working on quite a few long-needed storagenode performance improvements right now, and I think y’all will be happy with them. (Well, some of you. I’m sure there will also be new things to complain about for those who prefer that.)

6 Likes

I haven’t downgraded any of my nodes.

Unfortunately, it can happen automatically. If you stop and remove the container and then run it again while we have paused a rollout for some reason, your docker node will get the minimum required version, so a downgrade may happen.
We have stopped the rollout several times in the last few weeks.
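
If you want to see what the version server currently advertises, you can query it directly. A quick sketch, with the caveat that the exact JSON layout (a processes.storagenode entry holding the minimum and suggested versions) is from memory and may differ slightly; jq is only for readability and is optional:

curl -s https://version.storj.io | jq .processes.storagenode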

As I have already said: I’m running bare-metal Linux. No storagenode-updater, no docker. The nodes only restart if I restart them, and they never get downgraded.

Yes, I remember that, this is mostly for other readers. Sorry for the noise.

Does it normally take hours to complete for 1.7TB?
It shows no response after 4 hours.
I have 18GB RAM and an Exos drive on SATA, but 2 nodes are running on the machine. I know that both use the entire RAM for cache and buffers, but still… the retain took about 30h and scanned the entire 12TB of data, plus moving the pieces.

How can I run the command in the background and export the result to a file in the Storj dir?
I access that node through WireGuard and PuTTY, but I want to disconnect and use my PC for other stuff…
Thanks!

It depends. But it’s an indication of how long it would take for the storagenode too.
So, if it’s slow for a system call like du, it will be slow for the storagenode as well.

For what? The storagenode cannot consume it.

To get the result in a file, so I don’t have to stay connected to it with PuTTY and WireGuard. It’s not for the storagenode, it’s for checking whether the size displayed on the dashboard is the same as what du reports.

Oh, for comparison reasons, I see.
You can always use screen, you know?

Use the command

screen

https://www.google.pl/search?q=how+to+use+screen+command+in+linux&sca_esv=f73e6ca5e1446b75&sca_upv=1&sxsrf=ADLYWIIeHPzMbMuSebA_6sWvpZqO8nPnoA%3A1714913419441&source=hp&ei=i4A3ZtvCGP2ywPAPk9-dyAU&iflsig=AL9hbdgAAAAAZjeOm7md7OYwaCb7CywYglvFKD3BSFXu&udm=&oq=how+use+screen+c&gs_lp=Egdnd3Mtd2l6IhBob3cgdXNlIHNjcmVlbiBjKgIIADIGEAAYFhgeMgYQABgWGB4yBhAAGBYYHjIGEAAYFhgeMgYQABgWGB4yBhAAGBYYHjIGEAAYFhgeMgYQABgWGB4yBhAAGBYYHjIGEAAYFhgeSJZoUABY-klwAHgAkAEAmAHrCKABsCGqAQ8yLjkuMS4yLjAuMS4wLjG4AQHIAQD4AQGYAhCgAr0nwgIMECMYgAQYExgnGIoFwgIKECMYgAQYJxiKBcICDhAuGIAEGMcBGI4FGK8BwgIREC4YgAQYsQMY0QMYgwEYxwHCAg4QLhiABBixAxjRAxjHAcICCBAAGIAEGLEDwgIFEAAYgATCAgQQIxgnwgILEAAYgAQYsQMYgwHCAgsQLhiABBixAxiDAcICDhAAGIAEGLEDGIMBGIoFwgIREC4YgAQYsQMYgwEY1AIYigXCAgQQABgDwgIOEC4YgAQYsQMYgwEYigXCAhEQLhiABBixAxiDARjHARivAcICBRAuGIAEwgIIEAAYgAQYywHCAgcQABiABBgKwgIIEAAYFhgeGA-YAwCSBw8wLjYuNi4xLjAuMi4wLjGgB5eEAQ&sclient=gws-wiz

and

du --si -s /volume1/Storj/storage/trash > duresult.txt
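
A minimal screen workflow for this (the session name is just an example):

screen -S trashcheck                                      # start a named session
du --si -s /volume1/Storj/storage/trash > duresult.txt    # run the check inside it
# detach with Ctrl-a d, then it is safe to close PuTTY
screen -r trashcheck                                      # reattach later to see the result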
2 Likes

Not tested but something like this should work:

nohup du --si -s /volume1/Storj/storage/trash > path/to/output_file.txt 2>&1 &
2 Likes

It does not. The filewalker has just removed trash.

 lazyfilewalker.trash-cleanup-filewalker.subprocess      trash-filewalker completed      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "bytesDeleted": 85278329148, "numKeysDeleted": 572557}

du gives me 21 GB for trash.
But the node is telling me 433 GB of trash.

So the node is displaying garbage again.
And the best thing is, it is blocking new ingress because the node believes it is full.

Version is 1.102.3.

Cleaned up your url from tracking and removed localization: https://www.google.com/search?q=how+to+use+screen+command+in+linux

I would suggest the more user-friendly tmux as an alternative.
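
For completeness, the tmux equivalent of the screen workflow above looks roughly like this (the session name is again just an example):

tmux new -s trashcheck                                    # start a named session
du --si -s /volume1/Storj/storage/trash > duresult.txt    # run the check inside it
# detach with Ctrl-b d and disconnect
tmux attach -t trashcheck                                 # reattach later to see the result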

3 Likes

When I saw that huge link, I didn’t click on it. Yours looks more attractive. :smiling_face_with_three_hearts:
This is what I have been looking for for a long time, because all my machines are accessed remotely.
I will take a look at tmux too. Thanks!

1 Like