I tested it on storj-up and it's working.
Could you please check that the dashboard port is not already in use?
Where can I find information on when/why/how I should enable this?
Is there a particular circumstance where it should be enabled vs. not?
If you have low RAM and the file walkers struggle, i.e. they take too much time to finish, like hours…
It also helps to win more races… in theory.
https://forum.storj.io/t/badger-cache-filewalker-test-results/27334?u=snorkel
There is also the official post, but I can’t find it.
To enable it:
--pieces.enable-lazy-filewalker=false \
--storage2.piece-scan-on-startup=true \
--pieces.file-stat-cache=badger
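In a docker run command these options go after the image name, as runtime arguments to the node. A rough sketch, with placeholder paths and with environment variables and ports omitted for brevity (not taken from the original post):
docker run -d --restart unless-stopped \
--mount type=bind,source="/path/to/identity/",destination=/app/identity \
--mount type=bind,source="/path/to/storage/",destination=/app/config \
--name storagenode storjlabs/storagenode:latest \
--pieces.enable-lazy-filewalker=false \
--storage2.piece-scan-on-startup=true \
--pieces.file-stat-cache=badger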
There is also a post about moving the badger cache dir to SSD, if you want.
I run all the storagenode files (data, databases, badger cache) on the same HDD as the storage, with 8GB RAM and 2 nodes, and I don't have any problems.
But you need a UPS, because a sudden power loss can corrupt the badger cache; then you must manually delete the cache directory, restart the node, and run the startup filewalker as if there were no cache (time-wise) to rebuild it.
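For reference, that recovery is roughly the following, assuming a docker node named storagenode and the default badger location under the storage directory (adjust the path to your setup):
docker stop -t 300 storagenode
rm -rf /path/to/storage/filestatcache
docker start storagenode
The node then rebuilds the cache during the next startup scan.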
Is it possible, like with the DBs, to have these new files come from the SSD drive?
You can point the badger directory, like the DBs, wherever you want. I will look for a command sample if you wait a little.
--mount type=bind,source="/volume1/Storj/Identity/storagenode/",destination=/app/identity \
--mount type=bind,source="/volume1/Storj/",destination=/app/config \
--mount type=bind,source="/volumeUSB1/usbshare/storjdbs1/filestatcache/",destination=/app/config/storage/filestatcache \
--mount type=bind,source="/volumeUSB1/usbshare/storjdbs1/",destination=/app/dbs \
--name storagenode storjlabs/storagenode:latest \
--storage2.database-dir=dbs \
If you're using docker compose, it's something like this (the source is the SSD path you want):
volumes:
  - type: bind
    source: /home/ubuntu/storj4b/badger
    target: /app/config/storage/filestatcache
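For completeness, that entry sits inside the node's service definition, roughly like this (the image is just the usual default; other mounts and settings omitted):
services:
  storagenode:
    image: storjlabs/storagenode:latest
    volumes:
      - type: bind
        source: /home/ubuntu/storj4b/badger
        target: /app/config/storage/filestatcache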
Do you happen to know if hashtables have anything to do with badger?
They are different things. The hashtable is mandatory, the badger is optional.
The hashtable, which I don’t fully understand, isn’t fully implemented yet. Eventually the badger cache might be made obsolete though?
Hashtables are not mandatory. It is still opt-in.
True but just so everyone is on the same page.
It's still under consideration. It's used primarily for Storj Select, because it's fast and wastes less space; GC and the trash collector also work faster without affecting throughput. It also uses fewer resources than the default piecestore + badger, but the resulting speed is much better and it can handle a much higher load.
I believe it's too early to make it mandatory. More testing is necessary. Just these days it gave someone errors on an NFS mount.
I've fully migrated to the hash table structure. The space optimization is substantial. Before the switch, I was using approximately 2TB of storage, with a wasteful 400GB of overhead due to the 32KB clusters needed to accommodate my large hard drive. Additionally, the file count was astronomical at 16 million. Following a two-day migration (with a 2TB SSD defer-write cache), the file count has been drastically reduced to fewer than 2,500. Consequently, the file system is now noticeably faster and much more efficient.
Even if it's not supported, I believe it's possible to configure it for locks too. I was able to do so for SMB, which is not supported either.
where are the instructions for how to try the hash migration?
My disks use ZFS compression, so inefficiency from cluster size is not a problem… but reducing the file count by millions would probably be a huge performance boon for housekeeping operations.
I even have a tiny 500GB node I could sacrifice to the test…
In the hashstore thread…
https://forum.storj.io/t/tech-preview-hashstore-backend-for-storage-nodes/28724?u=snorkel
Take note that it’s a one way ticket. You can’t revert back to piecestore.
Not with this attitude
The format is not so complex that writing a Python script, maybe even a bash script, to chop log files back into old piece files would be impossible.
I meant for regular mortals, like the rest of us.