Single node and multinode dashboards show different values after migration to hashstore

You need to move the full directory structure of the hashtables. This option allows you to specify the path to that structure, just as if it were on the HDD with the data.
"path to store tables in. Can be same as LogsPath, as subdirectories are used (by default, it's relative to the storage directory)" default:"hashstore"

You may always use the help hints integrated into storagenode: `& "C:\Program Files\Storj\Storage Node\storagenode.exe" setup --help | sls hashstore`
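For reference, in `config.yaml` this might look something like the following. Note this is a hedged sketch: the key name is an assumption based on the help text quoted above (verify it against the `--help` output on your own node), and the path is a hypothetical example.

```yaml
# Assumption: option name inferred from the `--help | sls hashstore` output.
# D:\hashtables is a hypothetical path on a disk separate from the data HDD.
hashstore.table-path: D:\hashtables
```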

What files do I need to move?
Or do you mean I need to put all the folders there, like on the data HDD?

Yes, you need to replicate the same paths as on the data HDD. The trailing `hashstore` directory can be omitted, because you specify the path explicitly; otherwise it needs to be specified as the final directory in that option.
Of course, you do not need to move log files, only hashtables.

Thank you, I will think about it. It is a lot of manual work.

Actually, not so much, but you need to use the robocopy.exe CLI; it will create all needed paths automatically.
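For example, a hedged sketch of such a move (the paths are hypothetical placeholders, and per the advice above you would move only the hashtables, not the log files; check `robocopy /?` before running anything):

```shell
# Move a directory tree to another disk, recreating all subdirectories
# automatically. /E copies subdirectories (including empty ones),
# /MOVE deletes the source after a successful copy.
# Both paths are hypothetical examples - adjust to your layout.
robocopy "D:\storagenode\hashstore" "E:\hashtables" /E /MOVE
```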

I found that I have this problem on about 5 nodes, and also a lot of nodes that are already over the specified amount, so it is a big problem. I reduced the amount of allocated space and hope this will help. This is a big problem for all users if nodes fill the HDD to the last byte.

If the compaction can’t start because of a lack of space, how could stopping ingress help? It can’t remove deleted pieces. Or am I mistaken?

I lowered the allocated amount where the disk is almost full, like 40–120 GB left until the disk is full.

I’m thinking of a simple hack.
You could lower the allocated space to stop the ingress and delete less than 4% of the hashstore log files. It will lower your score but won’t get you DQed. I don’t know if this is enough, though, to unlock the compaction.
I DON’T ADVISE YOU TO DO IT!
It’s just a hypothetical suggestion.

I am looking into possibilities to move it to a bigger HDD. It would be good to move a 4 TB disk to a bigger one, and then some 3 TB disks to 4 TB. Then I can receive more ingress, as I see that about 10+ HDDs are full.

I guess moving some log files to another place and bringing them back after compaction would also be OK.

You could just move the files from one folder to another HDD and mount that HDD into the now-empty folder in Windows.
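A hedged PowerShell sketch of such a folder mount (the disk/partition numbers and the path are hypothetical; list your actual partitions with `Get-Partition` first):

```shell
# Mount a partition of another HDD into an empty NTFS folder instead of
# (or in addition to) a drive letter. Numbers and path are placeholders.
Get-Partition -DiskNumber 2 -PartitionNumber 1 |
    Add-PartitionAccessPath -AccessPath "D:\storagenode\hashstore"
```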


This will be an issue for everyone who uses the dedicated disk feature/option. :frowning:

Move one of the trash folders with the closest date, if you have any left.
I also have a \temp folder with some files from 1–2 years ago with different file names, like “blob-1097384875.partial”.

If your node has finished the conversion to hashstore, you can safely delete all files/folders inside the blobs folder (but not the blobs folder itself).
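A hedged PowerShell sketch of that cleanup (the path is a hypothetical example; double-check that the node has fully migrated to hashstore before running anything like this):

```shell
# Remove everything inside the blobs folder, but keep the folder itself.
# The path below is a placeholder - adjust to your storage directory.
Get-ChildItem -Path "D:\storagenode\storage\blobs" -Force |
    Remove-Item -Recurse -Force
```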

Every hashstore install should have a 2GB file named “delete-me-in-case-of-emergency”… somewhere on disk… so you’ll always be able to start a compaction even if something went wrong and the HDD filled.

Remember how young and carefree we all were… long ago when piecestore could just delete stuff? :wink:


I am trying to make a junction for one satellite folder to another HDD. I am copying the files there now; it is only 74 GB, so not really a big deal.

Note: If you use the dedicated-disk-based space reporting, 300 GB is always reserved (by default; configurable with hidden variables).

OK, I made a junction in Windows to another HDD for one of the satellite folders. The node started; I hope it works.
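For anyone wanting to try the same, a hedged sketch (real satellite folders are named after node IDs; "satellite1" below is a placeholder, as are the drive paths):

```shell
# 1. Move one satellite folder to another HDD, recreating subdirectories.
robocopy "D:\storagenode\hashstore\satellite1" "E:\hashstore\satellite1" /E /MOVE

# 2. Create a junction at the old location pointing to the new one, so the
#    node keeps using its configured path.
New-Item -ItemType Junction -Path "D:\storagenode\hashstore\satellite1" `
    -Target "E:\hashstore\satellite1"
```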

Increasing `hashstore.compaction.alive-fraction` will free more space.
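In `config.yaml` that could look something like this (0.75 is an arbitrary example value, not a recommendation; my understanding, which you should verify, is that a higher value makes more log files eligible for rewriting during compaction, freeing more space at the cost of extra I/O):

```yaml
# Example value only - a higher threshold means more log files qualify
# for compaction, trading extra disk I/O for reclaimed space.
hashstore.compaction.alive-fraction: 0.75
```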

Compaction only seems to be triggered when a delete (“move to trash”) request is received, and then only for that satellite.

It seems the node is unaware of the extra “wasted” space contained within the hashtables, so maybe the node needs to increase its compaction aggressiveness as it gets closer to capacity.

I have a node with 2TB stored data with close to 450GB of stored trash within the hashtables, on a fast disk system.

I added this to the config file.

The compaction of us1/s0 came shortly after restart. It hammered the array for 55 minutes and reduced excess trash by 170 GB. 24 hours later most of the other hashstores had been compacted except us1/s1; another 40 GB of excess trash was removed.

So, I increased it to 0.90 and 100. When us1/s1 got its turn, it hammered the array for 59 minutes, touched almost every file in the us1/s1 hashstore, and reduced the excess space to a few GB. It used upwards of 200 GB of additional space while compacting.