You need to move the full structure of the hashtables. This option allows you to specify the path to that structure, laid out just as it is on the HDD with data: "path to store tables in. Can be same as LogsPath, as subdirectories are used (by default, it's relative to the storage directory)" default:"hashstore"
You can always use the help hints integrated into storagenode: & "C:\Program Files\Storj\Storage Node\storagenode.exe" setup --help | sls hashstore
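For illustration, here is a hedged sketch of what this could look like in `config.yaml` — the paths are made-up examples, and the option names are my assumption based on the `--hashstore.*` flags the help command above prints (the help text itself mentions LogsPath):

```yaml
# Illustrative paths only - adjust to your own layout.
# Where the hashstore log files (piece data) live, on the data HDD:
hashstore.logs-path: D:\storagenode\storage\hashstore
# Where the hashtables live; can point at a different disk (e.g. an SSD):
hashstore.table-path: E:\storagenode-tables\hashstore
```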
Yes, you need to lay out the paths just as on the data HDD; the `hashstore` part can be omitted, because you specify the path explicitly. Otherwise it needs to be specified as the final directory in that option.
Of course, you do not need to move the log files, only the hashtables.
I found that I have this problem on about 5 nodes, and also a lot of nodes that are already over the specified amount, so it is a big problem. I reduced the amount of allocated space and hope this will help. This will be a big problem for all users if nodes fill the HDD to the last byte.
I’m thinking of a simple hack.
You could lower the allocated space to stop the ingress and delete less than 4% of the hashstore log files. It will lower your score but won’t get you DQed. I don’t know if this is enough, though, to unlock the compaction.
I DON’T ADVISE YOU TO DO IT!
It’s just a hypothetical suggestion.
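The “lower the space” half of that idea would just use the standard allocation setting in `config.yaml` — a hedged sketch, with a purely illustrative value (pick one below your current usage):

```yaml
# Setting the allocation below what is currently stored stops ingress.
# 2.00 TB is only an example value, not a recommendation:
storage.allocated-disk-space: 2.00 TB
```

Restart the node after changing it so the new allocation takes effect.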
I am looking into possibilities for moving it to a bigger HDD. It would be good to move a 4tb node to a bigger drive, and then some 3tb nodes to 4tb. Then I can receive more ingress, as I see that about 10+ HDDs are full.
Move one of the trash folders with the closest date, if you have any left.
I also have a \temp folder with some files from 1–2 years ago with different file names, like “blob-1097384875.partial”.
Every hashstore install should have a 2GB file named “delete-me-in-case-of-emergency”… somewhere on disk… so you’ll always be able to start a compaction even if something went wrong and the HDD filled.
Remember how young and carefree we all were… long ago when piecestore could just delete stuff?
Increasing “hashstore.compaction.alive-fraction” will free more space.
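As a hedged example of what raising that setting could look like in `config.yaml` — the value is illustrative only, and I’m not asserting what the default is:

```yaml
# A log file becomes eligible for compaction once less than this
# fraction of its data is still alive; raising it makes compaction
# more aggressive and frees more space. Illustrative value:
hashstore.compaction.alive-fraction: 0.75
```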
Compaction only seems to be triggered when a delete (“move to trash”) request is received, and then only for that satellite.
It seems the node is unaware of the extra “wasted” space contained within the hashtables, so maybe the node needs to increase its compaction aggressiveness as it gets closer to capacity.
I have a node with 2TB stored data with close to 450GB of stored trash within the hashtables, on a fast disk system.
I added this to the config file.
The compaction of us1 s0 came shortly after restart. It hammered the array for 55 minutes and reduced excess trash by 170GB. 24 hours later most other hashstores had been compacted except us1/s1, and another 40GB of excess trash had been removed.
So, I increased to 0.90 and 100. When us1/s1 got its turn, it hammered the array for 59 minutes, touched almost every file in the us1/s1 hashstore, and reduced the excess space to a few GB. It used upwards of 200GB of additional space whilst compacting.