Hi!
After digging for several hours through all the existing trash-cleanup threads, it seems to come down to slow hardware (QNAP NAS, 4-bay). The storagenode can't keep up with getting rid of the trash.
One: the storagenode's native system for cleaning up the trash folder eats too many IOPS, and your QNAP cannot keep up.
Two: because of the effect of one, you are deleting the trash folder with rm -rf, which is also going quite slowly, and the RAID system is fully read-utilized.
Can you confirm that I understood this correctly?
Could you say what your RAID type is (e.g. RAID 5)?
beli, it's OK, many of us have been in your position. It's a hangover from the massive amounts of test data from the summer.
Others on the forums have found that the fastest way to manually delete a lot of files is to feed them from the find command instead of using rm, because even rm is slow:
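For example, something like this (just a sketch; the trash path is a placeholder, substitute your node's actual storage location):

```
# Placeholder path - point this at your node's trash folder.
TRASH=/share/storagenode/storage/trash

# -delete removes entries depth-first as find encounters them,
# avoiding rm's overhead of expanding huge argument lists.
find "$TRASH" -mindepth 1 -delete

# Alternative: stream the file names to rm in batches via xargs.
find "$TRASH" -type f -print0 | xargs -0 rm -f
```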
But in general, performance-wise, the disk array is suffering under the weight of millions of tiny files. General things that can boost performance are:

- set up an SSD caching layer for metadata (ZFS, LVM, and Synology's btrfs can do this)
- don't use an array; run one node per disk (after your node shrinks from the deletes, you might be able to juggle things around)
- more RAM for the system might help
- there are some minor filesystem tweaks for ext4, NTFS, and ZFS that can be made (see the sketch after this list)
- use the badger cache feature for Storj (this helps with the used-space filewalker on startup, whose reads are otherwise quite slow; also sketched below)
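A rough sketch of those last two points (the mount point and container name are assumptions for your setup, and to my knowledge the badger cache is the pieces.file-stat-cache option on recent storagenode versions, so double-check against the docs for your release):

```
# ext4 tweak: remount with noatime so every read doesn't also trigger
# a metadata write (mount point is a placeholder).
mount -o remount,noatime /share/storagenode

# badger cache: add this line to the storagenode's config.yaml ...
#   pieces.file-stat-cache: badger
# ... or pass it as a flag at the end of your docker run command:
#   docker run ... storjlabs/storagenode:latest --pieces.file-stat-cache=badger
# (a plain "docker restart" won't pick up a new flag; recreate the container)
```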
Another note: the maximum node size is officially 24 TB. Larger nodes can't efficiently find garbage during the garbage collection process.
Ah, I think I'm slowly starting to understand how this works.
Alright, the trash gets emptied once, and then everything should run as usual.
I don't intend to turn the NAS server into a moon rocket.
At the moment, my hobby time is still going into the smart home.
Thanks to everyone!
It most likely is deleting them, just very slowly, because deletion is an expensive operation on your setup.
The databases should be updated when you restart the node (after the deletion has finished).
If you don't have any errors related to the filewalker and/or the databases, it should update the usage in the databases and on your dashboard.
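In practice, something like this (a sketch, assuming a docker-based node whose container is named storagenode):

```
# Restart the node once the trash deletion has finished ...
docker restart storagenode

# ... then watch for filewalker/database errors while the used-space
# filewalker recalculates usage:
docker logs --tail 100 -f storagenode 2>&1 | grep -iE "error|walk"
```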