How many files are left in the blobs folder?
The memtable usage of one node is now 2 GB RAM, up from 1.3 GB. Interestingly, it keeps climbing, probably after each compaction.
Just check the files themselves. They are often broken 0 KB files. I wouldn't bother too much.
find /storj/3/storage/storage/blobs -type f -size 0 -delete
No.
❯ du -hs /storj/3/storage/storage/blobs/
9.8M /storj/3/storage/storage/blobs/
❯ find /storj/3/storage/storage/blobs/ -type f | wc -l
32
Ok, deleted the size-0 files. Let's see how it goes.
Do you have any files remaining? Based on the puny size, it seems it's just empty directories. It's probably done migrating. You can turn off migrate_chore and/or delete the empty directories to avoid metadata fetch spikes every ten minutes.
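Cleaning out the leftover empty prefix directories could look like this (the blobs path is the example from this thread; point it at your own node, and preview before deleting):

```shell
# Example path from this thread; adjust BLOBS to your own node's layout.
BLOBS="${BLOBS:-/storj/3/storage/storage/blobs}"

# Preview which directories would be removed (prints nothing if none):
find "$BLOBS" -mindepth 1 -type d -empty 2>/dev/null

# Remove them. -delete implies depth-first, so nested empty
# directories are removed from the inside out.
find "$BLOBS" -mindepth 1 -type d -empty -delete 2>/dev/null
```

`-mindepth 1` keeps the blobs folder itself from being deleted even if it ends up completely empty.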
This is a well-known bug. Stop the node, delete used_space_per_prefix.db, start the node again.
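For a Docker node, the sequence could look like the sketch below. The container name "storagenode" and the storage path are assumptions taken from the paths in this thread; substitute your own:

```shell
# Assumed container name and storage path -- adjust both to your setup.
docker stop -t 300 storagenode
rm -f /storj/3/storage/used_space_per_prefix.db
docker start storagenode
```

The database is recreated on startup, which is why simply deleting it is safe here.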
This fixed the used-space figure in the dashboard.
Also disabled the migrate_chore (in addition to deleting the 0-size files and empty directories in blobs) and I'm not getting any more errors.
Thanks everyone!
Hold your horses! Not so fast! Where's @alpharabbit's beer?
… and ours…
If the blobs folder is just filled with empty directories - are we okay to delete those after migration finished?
Deleting everything inside blobs is ok. But don't delete the blobs folder itself.
Yes. It's in the wiki above.
Thanks. In my defence, reading is hard.
If you are on Synology, running nodes after sudo su in Docker, you have activated the migration, and you want to restart the node, you may get this error:
Unable to find image '12--stop-timeout:latest' locally
docker: Error response from daemon: pull access denied for 12--stop-timeout, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Just exit and sudo su again. You were in the wrong directory.
(I realised it after pulling the image 3 times.)
I would still recommend stopping and removing the container before applying any changes to files on the bind-mounted filesystem, because Docker may revert your changes. And it will revert with ~90% probability if you stop the container but do not remove it before making the changes, see
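The safe order of operations could be sketched like this; "storagenode" and the run flags are placeholders for your own container name and your usual docker run command:

```shell
# Stop AND remove the container before touching files on the bind mount,
# so nothing can write the old contents back:
docker stop -t 300 storagenode
docker rm storagenode

# ... edit files under the bind-mounted storage directory here ...

# Then recreate the container with your usual run command, e.g.:
# docker run -d --name storagenode <your usual flags> storjlabs/storagenode:latest
```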
Phew, 1 month ago I started converting my nodes, and this morning everything was finished. ~100 nodes with ~100TB of active data have completed, and I'm now fully on Hash Store for all things StorJ. Very cool.
You have 1TB/node?
I have 18 nodes with 110TB. You're doing it wrong.
Or you say you have 100 nodes with 100TB on each?
He has 100TB of disk used.