I think the trash not being updated is related to the new lazy filewalker for trash emptying. I reverted a node back to non-lazy and after emptying trash the trash size went down.
@Ambifacient: I'm assuming you restarted your node to make this setting take effect? Do you have piece-scan-on-startup enabled or disabled?
Yes, I just changed the config and restarted. I have piece-scan-on-startup disabled.
Thanks, I'm now using the following two settings:
pieces.enable-lazy-filewalker: false
storage2.piece-scan-on-startup: false
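Side note for docker nodes: the same two settings can also be passed as run flags after the image name (a sketch, assuming the flag names mirror the config keys as they usually do):
--pieces.enable-lazy-filewalker=false --storage2.piece-scan-on-startup=false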
Now I just need to wait about a week for the node to delete some trash and see if it correctly auto updates the space used for trash.
Noob question… what's the Linux command for checking the trash dir size, to compare it with the dashboard?
My path is: /volume1/Storj/storage/trash
du --si -s /volume1/Storj/storage/trash
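If you want a per-satellite breakdown rather than just the total (assuming the usual layout of one subdirectory per satellite under trash), the same command works with a glob:
du --si -s /volume1/Storj/storage/trash/*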
Yes! That is the plan, anyway. If you have a failure mode other than "nested date directories", "2-character directories without a date directory", or "date directories aren't being cleaned up", then maybe we don't know about it and it might not be corrected automatically. Those are the ones that come to mind immediately, anyway.
I can see why it would seem that way. Be assured that the branding work did not slow down storagenode progress at all. These are separate teams.
We're working on quite a few long-needed storagenode performance improvements right now, and I think y'all will be happy with them. (Well, some of you. I'm sure there will also be new things to complain about for those who prefer that.)
Unfortunately, it could happen automatically: if you stop and remove the container and then run it again while we have paused a rollout for some reason, your docker node will get the minimum required version, and thus a downgrade may happen.
We have stopped the rollout several times in the last few weeks.
As I have already said: I'm running bare-metal Linux. No storagenode-updater, no docker. The nodes only restart if I restart them, and they never get downgraded.
Yes, I remember that; this is mostly for other readers. Sorry for the noise.
Does it normally take hours to complete for 1.7 TB? It shows no response after 4 hours.
I have 18 GB RAM and an Exos drive on SATA, but 2 nodes are running on the machine. I know that both use the entire RAM for cache and buffers, but still… the retain took like 30 hours and scanned the entire 12 TB of data, plus moving the pieces.
How can I run the command in the background and export the result to a file in the Storj dir? I access that node through WireGuard and PuTTY, but I want to disconnect and use my PC for other stuff…
Thanks!
Depends. But it's an indication of how long it would take for the storagenode too. So if it's slow for a system call like du, it will be slow for the storagenode as well.
For what? The storagenode cannot consume it.
To get the result in a file, so I don't have to stay connected to it with PuTTY and WireGuard. It's not for the storagenode; it's for checking whether the size displayed on the dashboard is the same as the one found with du.
Oh, for comparison reasons, I see.
You can always use screen, you know?
Not tested but something like this should work:
nohup du --si -s /volume1/Storj/storage/trash > path/to/output_file.txt 2>&1 &
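Then you can disconnect; when you reconnect later, just read the file and, if you want, check whether du is still running:
cat path/to/output_file.txt
pgrep -a du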
It does not. The filewalker has just removed trash.
lazyfilewalker.trash-cleanup-filewalker.subprocess trash-filewalker completed {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "bytesDeleted": 85278329148, "numKeysDeleted": 572557}
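For anyone who wants to find that line in their own logs, something like this should work (assuming a docker node with the default container name; adjust accordingly if you log to a file):
docker logs storagenode 2>&1 | grep 'trash-filewalker completed'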
du gives me 21 GB for trash, but the node is telling me 433 GB of trash. So the node is displaying garbage again.
And the best thing is, it is blocking new ingress because the node believes it is full.
Version is 1.102.3.
Cleaned up your url from tracking and removed localization: https://www.google.com/search?q=how+to+use+screen+command+in+linux
I would suggest the more user-friendly tmux as an alternative.
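For the du command above, the screen workflow would be roughly:
screen -S trash-du                        # start a named session
du --si -s /volume1/Storj/storage/trash   # run the command inside it
# detach with Ctrl-A then D, disconnect, and later reattach with:
screen -r trash-du
(trash-du is just an arbitrary session name; tmux works the same way with tmux new -s and tmux attach.)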
When I saw that huge link, I didn't click on it. Yours looks more attractive.
This is what I was looking for for a long time, because all my machines are accessed remotely.
I will take a look at tmux too. Thanks!