I reported this problem to the team; I didn't see such an error for audits.
Where is this cache located?
I agree. So, for future readers: please use the "Reply" button here and switch it to reply as a separate (linked) topic.
Meaning, you already had a low online score. That probably has nothing to do with the badger cache at all.
Exactly, and you still forgot the micro-SD card or any other storage. Some time ago I bought an N100 with 16 GB RAM and a 512 GB SSD for about $120. That's much more performance for only a little more money, so to say. I would recommend AliExpress for it.
RPis were really great for many projects in the beginning, but in the meantime I have switched them all to mini-PCs for various reasons.
Maybe you can find some tips here:
https://forum.storj.io/t/new-storagenode-build-intel-n100/24861?u=snorkel
The N100 with 32GB RAM is overkill for storagenodes. You can find NAS type cases from many brands.
If you are going to build something new for Storj, note that you need as much RAM as you can get, plus SATA ports. These two are the main requirements; the third would be power draw.
The default location is in the storage folder on a spinning disk. This drive has had bad sectors reallocated in the past. Maybe the cache is corrupted? Perhaps I'll move the cache to another disk.
If you can, we should first rule that out as the cause.
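One quick way to rule it out (a sketch, assuming the docker setup shown later in this thread; replace /mnt/storj/storage with whatever host path you map to /app/config/storage): stop the node, move the cache directory aside, and start it again. The cache only speeds up lookups, so the node simply repopulates it from the file system afterwards.
# docker stop -t 300 storagenode
# mv /mnt/storj/storage/filestatcache /mnt/storj/storage/filestatcache.bak  # host path is an example; in-container default is /app/config/storage/filestatcache
# docker start storagenode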
Thank you for all the digging and reporting.
If this cache is enabled, will the disk load decrease when the garbage collector is running?
Hello @baliqci,
Welcome back!
It depends on your setup, but it might, except for the first scan.
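If you want to see the effect on your own setup, watching per-disk utilization while garbage collection runs is the easiest check (a rough sketch; assumes the sysstat package is installed and that sda is the disk holding the storage folder):
# iostat -dx sda 5  # extended stats for sda every 5 seconds; compare %util with the cache on and off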
I noticed increased disk load when using the used-space filewalker. Garbage collection… maybe higher, but not dramatically higher?
I took a closer look at my logs and see that the retain process takes 7 h each time it runs, for a 4 TB node on a 32 GB RAM Ubuntu machine; it's like the startup filewalker runs each time retain runs… which makes sense, because in order to move the pieces to trash, it needs to check all the pieces against the bloom filter.
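(In case anyone wants to check their own numbers: the start and end of each retain run show up in the node log, so something like the line below gives a quick overview; the exact message wording differs between versions.)
# docker logs storagenode 2>&1 | grep -i retain | tail -n 20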
I was thinking of enabling the badger, too.
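For what it's worth, my understanding is that enabling it is just one extra flag at the end of the docker run command. A sketch only: I believe the value is badger, but double-check against the comments in your config.yaml.
# docker run -d ... <your usual parameters> ... storjlabs/storagenode:latest --pieces.file-stat-cache=badger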
Do I understand correctly that the badger cache will be dropped once the new hashstore is fully implemented?
So, we should not bother with activating the badger anymore?
It wouldn't be dropped, but the improvement is not as significant for hashstore as for the default piecestore. There is still a benefit even for the hashstore backend, but it's within measurement error.
And there are no plans to make hashstore the default backend; you must enable it separately, see:
What's happening with this cache? Is it finally safe to enable?
You have your answer just above your post…
Hi, lately I get this error when I try to find the API key for the Multinode Dashboard:
# docker exec -it storagenode /app/bin/storagenode issue-apikey --config-dir /app/config --identity-dir /app/identity
2024-12-26T16:40:26Z INFO Configuration loaded {"Process": "storagenode", "Location": "/app/config/config.yaml"}
2024-12-26T16:40:27Z INFO Anonymized tracing enabled {"Process": "storagenode"}
2024-12-26T16:40:27Z INFO Identity loaded. {"Process": "storagenode", "Node ID": "1Kf6y9Mxxxxxxxxxxxxxxxxxxxxxxxxx"}
Error: Error starting master database on storage node: Cannot acquire directory lock on "/app/config/storage/filestatcache". Another process is using this Badger database. error: resource temporarily unavailable
It seems like a Badger cache error. What do you recommend?
You need to disable the badger cache when you invoke the second instance:
# docker exec -it storagenode /app/bin/storagenode issue-apikey --config-dir /app/config --identity-dir /app/identity --pieces.file-stat-cache=""
See
In theory, if I have the badger cache enabled and decide to switch it off by removing the parameter from the run command, also turn the startup piece scan off, but don't delete badger's directory, what happens when I enable badger again after some time?
Would it see old and wrong data, so the badger dir must be deleted first, and then badger re-enabled?
I could be mistaken, but I believe the old badger cache would be incomplete but not wrong. When the node needs data about a stored file, it checks the cache first. If the data is not in the cache, the node will get it from the file system instead.
For testing purposes, I enabled the badger cache and the whole-disk feature on my node, and now the dashboard shows it's offline: UPTIME 0m, VERSION v, and a blank node ID. Is this normal? Besides that it seems to operate normally, I'm getting traffic, and the logs only contain a few "ERROR piecestore upload failed EOF" entries.
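(To narrow it down, I can also query the node API that the web dashboard reads from, to see whether the node answers at all; this assumes the default dashboard port 14002 and the /api/sno endpoint:)
# curl -s http://localhost:14002/api/sno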
1.120 released and nodes are updating; this is the END of the badger cache.
Why? Will it no longer be necessary for those who use piecestore?
While the work on hashstore is impressive, it doesn't seem as robust as piecestore.
I activated the badger cache on all nodes and I will stay away from hashstore until it matures a little bit more. I estimate 6 months would be enough to make it optimal. Until then, piecestore is reliable and, with the help of the badger cache, it's fast enough.