Thanks, I was approaching it a bit too complicated.
Then I set all values to true and remove the container, then recreate it. I set the log level from warn to info and wait.
The reverse. Stop and remove the container, then set all to true and re-create.
For the docker compose:
docker compose down
docker compose up -d
If you swap these two steps, there is a non-zero probability that Docker will revert your changes (because it controls an overlay filesystem and it's not expecting external changes to it).
Thanks! I started it on my 10TB node, whose hashtable folder size was already 2.3TB.
Active migration has been running for days, but progress is only about 100GB/day.
I tried reducing the free space to zero so that there is no new incoming data, to free up I/O, but it didn't help.
Is there any trick to speed it up? At this rate it will take a month.
Exos X16, ext4
The speed is limited by your disk's seek latency. There is absolutely nothing you can do now to improve it that you would not already have done otherwise, such as ensuring the metadata fits in memory and is cached.
On the other hand — what’s the hurry? What difference does it make whether it takes a month or a year?
The disk was at 100% load for a week. I thought that if there were any way to optimize it, the drive wouldn't risk crashing from the sustained high load.
Enterprise disks are designed for relentless sustained load for 5+ years. This load is normal for your disk.
As alluded to above, you can speed things up by ensuring you have enough free RAM to fit all the metadata.
Make sure you have adequate cooling to keep reported temperature at or below 60°C.
No worries. I migrated 10 nodes, each taking a month or more. Just make sure you have a UPS that can safely shut down the system. That's the only cause for concern.
I have the same drives as you, Exos X16, X18, X22, X24, all on ext4. You have the most reliable setup. For 10TB you will be looking at 40 days or so of migration with a reasonable amount of RAM (>8GB).
I also migrated IronWolf drives on 8GB of RAM. They worked at 100% even on 1GB of RAM for a long time. That's not a problem for HDDs.
Hi guys, how can I tell whether my node is already using hashstore or is still in the migration process?
When your piecestore runs out of pieces.
What??
I see a “hashstore” inside my storage dir. Is that a good sign?
Yes, I think everyone has this directory today. Did you perform a full migration?
If you actively migrated all satellites, then the blobs directory should be 0 bytes, or near it; then the migration is over.
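To see where things stand, a quick check along these lines should work (the mount path below is an assumption; substitute your node's actual storage directory):

```shell
# Assumed storage path -- replace with your node's data directory.
# As pieces are migrated, blobs should shrink toward zero while
# hashstore grows to hold the migrated data.
du -sh /mnt/storagenode/storage/blobs
du -sh /mnt/storagenode/storage/hashstore
```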
It's not entirely public how far along the hashstore conversion is.
If you’ve not done anything yourself, there have been three stages:
To my knowledge, all nodes are on passive migration on all satellites at the moment, but this is not confirmed.
If you want to check, just measure the size of your blobs folder
You can compare the Prometheus metrics used_space and hashstore{field=LenLogs}.
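For example, assuming the node's debug/metrics endpoint is exposed (the address and port below are placeholders; use whatever you configured for the debug address), a scrape plus grep is enough:

```shell
# Placeholder endpoint -- point this at your node's configured debug address.
# When the hashstore LenLogs total approaches used_space, the migration is
# effectively complete.
curl -s http://localhost:5999/metrics | grep -E 'used_space|hashstore\{.*LenLogs'
```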
It would be nice and simple if such a detail were shown on the dashboard.
No, it would cause much more confusion than it would clear up. Users who are interested in the innards of the magical machine will open it up and look inside to get a better understanding - having a window only raises more questions.
Totally agree. Not many people know how to mess with the terminal.
Conversion to hashstore is almost done. So adding any progress indicator now would be just a waste of resources.