beli, it’s ok, many of us have been in your position. It’s a hangover from the massive amount of test data from the summer.
Others on the forums have found that the fastest way to manually delete a lot of files is to drive the deletes from the find command instead of a plain rm, because even rm gets slow at that scale:
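Something like this, as a minimal sketch; the path here is just a placeholder, so point it at the directory you actually want to empty (and dry-run with -print first):

```sh
# Placeholder path - replace with the directory you want to empty,
# and run with -print (no -delete) first to check what it matches.

# Option 1: let find unlink the files itself, no rm processes spawned
find /mnt/storagenode/trash/old-data -type f -delete

# Option 2: batch the paths through xargs so rm gets many arguments
# per invocation instead of being called once per file
find /mnt/storagenode/trash/old-data -type f -print0 | xargs -0 rm -f
```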
But in general, performance-wise, the disk array is suffering under the weight of millions of tiny files. General things that can boost performance are:
- set up an SSD caching layer for metadata (ZFS, LVM, and Synology’s btrfs can do this); see the sketch after this list
- don’t use an array; run one node per disk (after your node shrinks from the deletes, you might be able to juggle things around)
- more RAM for the system might help
- there are some minor filesystem tweaks that can be made to ext4, NTFS, and ZFS
- use the badger cache feature for storj (this helps with the used-space filewalker on startup, since those reads are kind of slow)
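Roughly what a couple of these look like in practice. This is only a sketch: the pool name `tank`, the dataset name, and the device paths are placeholders for your own setup, and the badger config key is from memory, so double-check it against the storagenode docs for your version before relying on it:

```sh
# ZFS: put metadata on a mirrored SSD "special" vdev.
# The pool depends on this vdev once added, so always mirror it.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Skip access-time writes on every read
zfs set atime=off tank

# Optional: send small blocks to the SSDs as well (per-dataset)
zfs set special_small_blocks=16K tank/storagenode

# ext4: drop access-time updates via the mount options in /etc/fstab, e.g.
# /dev/sdb1  /mnt/storagenode  ext4  noatime,defaults  0 2

# storagenode badger cache: set in config.yaml and restart the node.
# Key name may differ between versions - verify before using:
# pieces.file-stat-cache: badger
```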
Another note: the maximum node size is officially 24TB. Larger nodes can’t efficiently find garbage during the garbage collection process.