Something is happening. I hope the collected trash will be deleted in the upcoming 2 weeks.
OMG! I no longer see more than 500GB of trash on any of my nodes. How did that happen?
The trash filewalker could never finish because of a file system error; fsck fixed it.
I’m now having a new “issue”: the node seems to be deleting data, and my drive is now 81% full instead of 100%, but my dashboard/node shows that it has stored more than the drive could even hold. That results in no ingress, even though my drive could accept data again now that the node has freed up some space. Is there a way to fix this instead of waiting? I might miss out on almost a month of ingress.

The correct answer is probably to let the used space file walker run and sort it out, but on a large node that could take a long time, if it even works correctly. You might have to apply a workaround to sidestep a bug in one of the databases.
It’s not recommended, but sometimes I use a command to tell my node that it doesn’t have any trash, which allows more ingress.
from inside the database folder:
sudo sqlite3 piece_spaced_used.db "update piece_space_used set total = '0' where satellite_id = 'trashtotal'";
restart the node after.
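If you want to see what the node currently has recorded before zeroing it, a read-only check works too (a sketch, reusing the same piece_spaced_used.db schema as the update above):
sudo sqlite3 piece_spaced_used.db "select satellite_id, total from piece_space_used where satellite_id = 'trashtotal';"
It only reads the value, so it is safe to run while deciding whether to apply the update.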
If I want to run the file walker for real, I first do:
sudo sqlite3 used_space_per_prefix.db "DELETE FROM used_space_per_prefix";
And then restart the node.
Otherwise the file walker will retrieve outdated and incorrect numbers instead of actually walking the files (until they fix the bug).
Requires that sqlite3 be installed
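On Debian/Ubuntu that is just:
sudo apt install sqlite3
If you want to confirm the cache table is actually empty before restarting, a quick read-only check (reusing the table name from the command above) is:
sudo sqlite3 used_space_per_prefix.db "select count(*) from used_space_per_prefix;"
It should print 0 right after the DELETE.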
In your case it looks like the used space is wrong.
Here is a link to the work around instructions to be able to run the file walker if it’s reporting the incorrect number: High Trash usage - 27 TB - no more uploads - #29 by Alexey
Those instructions basically do the same thing as the second sqlite3 command I posted. It’s just a different way of getting rid of the problematic data in the used_space_per_prefix.db file.
Thank you! Looks like I’m getting ingress again. Hopefully the remaining trash will still be collected and deleted, because it still shows 10TB usage, even though it’s less (the rest is trash).
I think the trash is already collected and now you just have discrepancies in your used space database.
You need a file walker to complete successfully to fix those discrepancies.
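One thing worth double-checking: if the startup scan was ever disabled to speed up restarts, the used space file walker will not run at all. As far as I know that is controlled by this option in config.yaml (shown with the value you want here):
# run the used-space scan when the node starts
storage2.piece-scan-on-startup: true
Restart the node after changing it.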
Do you know if your file walker is running right now?
I cannot tell, unfortunately. In my logs I just see uploads and downloads, with no information on how far the filewalker has gotten through its scan. At the beginning it usually says something like “filewalker started” or similar. At least I got around 19% freed up within 9 days. I don’t know why it wasn’t deleting correctly in the first place.
Well I just remembered that a couple days ago the trash was restored: No bloom filters from US1 - #59 by alpharabbit
Hopefully your 10GB of trash did not go back into your blobs folder after you have been trying to get rid of it for over a month.
You can check the size of your trash folder to know if the trash is actually in there or not.
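On Linux, something like this gives the total (the path is just an example; point it at your node’s storage location):
du -sh /mnt/storagenode/storage/trash
Keep in mind that du has to walk every file in trash itself, so on a large node it can take a while and add I/O load.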
I don’t know how your logs are set up, but here’s an example of my walker logs when I grep for walker:
tail -n 1000000 node.log | grep walker
If the duration says it completed in milliseconds (underlined in red in the pic), it was a false run and you need to apply the file walker workaround to make it do a real walk. A real walk will probably take minutes or hours depending on the size and speed of your node.
If it says started and there’s no completed, then it’s probably still running and just needs more time to complete.
Edit: There’s only one satellite on my node, but on yours it will need to say completed for each of the satellites you use before the walker is fully done.
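To narrow the output down to just the start/finish lines per satellite, the same grep approach can be extended like this (the exact log wording can differ between versions, so treat the patterns as a sketch):
tail -n 1000000 node.log | grep walker | grep -iE "started|completed|failed"
Then compare the completed lines against the list of satellites your node talks to.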
My logs are set to “WARN” only, so I think I have to change that. My node logs filled my system SSD so much that I either capped the size via the docker compose stack or set the level to “WARN” only.
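For reference, the size cap in a compose file looks roughly like this (a sketch using Docker’s standard json-file logging options; the service name and numbers are just examples):
services:
  storagenode:
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"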
FWIW you might want to use selective logging. I keep it at INFO but exclude some types that generate the bulk of the noise I don’t (usually) need.
# custom level overrides for specific loggers in the format NAME1=ERROR,NAME2=WARN,... Only level increment is supported, and only for selected loggers!
# log.custom-level: ""
log.custom-level: piecestore=FATAL,collector=FATAL
# the minimum log level to log
log.level: info
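Those lines go into the node’s config.yaml, and the node has to be restarted for the new levels to take effect, e.g. with a compose setup (the service name is just an example):
docker compose restart storagenode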
You can also use the iostat and fatrace commands (if installed) to get an idea of whether something is going on and, if so, what.
In this pic iostat shows that the drive is 97% utilized (a filewalker is running in this case).
fatrace shows that the file walker just finished scanning the BB folder and started scanning the BC folder on the EU1 satellite. If a bloom filter is being processed, it will show files being moved to the trash folder. Seven days later it will show files in the trash folder being deleted.
I installed these tools with:
sudo apt install sysstat
sudo apt install fatrace
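Typical invocations would be something like this (the grep filter is optional; fatrace needs root):
# extended per-device utilization, refreshed every 5 seconds
iostat -x 5
# live file access events; filter for trash activity if that is what you are after
sudo fatrace | grep -i trash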
You may also use the debug port to get the currently running processes via /mon/ps.
On the debug port:
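(A sketch of the call, assuming the debug endpoint is enabled and reachable locally; the port is just an example, use whatever your debug.addr setting points to.)
curl http://localhost:11111/mon/ps
The output lists the node’s currently running internal tasks, so you can check whether a filewalker or garbage collection run is active.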