Hashstore Migration Guide

It may mean that you have not enabled all of the options.
They should all be true. Please also note that if you use Docker, you need to not only stop the container but also remove it; otherwise some of your changes may be reverted by Docker.


What do you mean by remove it? Would docker compose stop then up not be enough?

Yes, you need to call docker compose down.
Docker uses an overlay filesystem that is designed to keep layers, so it may revert your changes while the container is merely stopped, because the current layer does not know about changes made while the container was stopped.
So it is highly recommended to also remove the container before making changes in the bind-mounted filesystem, to avoid surprises.
In the case of Docker Compose, that is docker compose down.
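A minimal sketch of the safe update sequence, assuming the node runs via Docker Compose and its config lives on a bind mount (the path and editor are examples, not your actual setup):

```shell
# Stop AND remove the container, so no overlay layer can shadow your edits
docker compose down

# Edit the bind-mounted config while no container exists
# (example path; use the location of your own bind mount)
nano /mnt/storj/config.yaml

# Recreate the container from scratch; it will pick up the new config
docker compose up -d
```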


Does hashstore data occupy less space than piecestore data?
I’m thinking yes, because there are just log entries, not individual files, but maybe I’m mistaken.


I have a Raspberry Pi with an HDD as a storage node. Does it make sense for me to switch? I use Docker as the shell. Is there a guide for this?

Most likely yes. Or you can wait until we switch your node over in due course.
If you would like to migrate your data in advance, you can use this guide:

Thanks. I noticed that I already have the hashstore folder everywhere, so I think it has already been done automatically.

I’m curious on the new Memtable.

Should I use the following in the config-file?:

hashstore.table-default-kind: memtbl

or is it MemTbl? For some reason it doesn't pick up memtbl. At least the higher RAM usage is not visible in the processes or in htop.

How long have you been using it? It converts to memtbl (or back) only on the next compaction; it is not an instant process.


So “hashstore.table-default-kind: memtbl” is correct and I would just need to wait some hours?

Yes, it should be this one.
Also note that you need 1.3+ GB of RAM per TB of used space.
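To put that rule of thumb into numbers (the 1.3 GB/TB figure comes from the post above; the helper function is just an illustration, not part of the storagenode software):

```python
def memtbl_ram_estimate_gb(used_space_tb: float, gb_per_tb: float = 1.3) -> float:
    """Rough memtbl RAM estimate: about 1.3 GB of RAM per TB of used space."""
    return used_space_tb * gb_per_tb

# A node with 5.5 TB of used space would need roughly 7.15 GB of RAM for memtbl
print(round(memtbl_ram_estimate_gb(5.5), 2))  # → 7.15
```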


Does the entire hashtable (under meta) need to be rewritten if I switch from memtbl to hashtbl?

What happens if the node is currently performing a compaction and is terminated (e.g., a host system reboot or an update via storj-updater)? Does the compaction restart or continue where it left off?

Or days. It converts on the next compaction.

It should continue with the remaining logs that were selected for compaction, though I do not think they will be the same ones as at the start.

Yeah it is active now on one node. But it takes only 1.4 GB of RAM, not 5.5 * 1.3 = 7.15 GB. Hashtable nodes take 0.2 - 0.3 GB of RAM.

I have a few questions:

  1. memtbl vs. the other kind: is it possible to switch between them, or do I need to choose one and commit to it from the beginning?
  2. Avoiding compaction as much as possible: do I need to set hashstore.compaction.alive-fraction to some small number like 0.1? Would having more “partial” files mess up the memtbl or something (that is, something other than just having lots of files in the filesystem)?
  3. Is there an advantage or disadvantage to having larger or smaller files (hashstore.compaction.max-log-size)?

My setup is a bit complicated: “allocate the file and then slowly fill it up by appending” will result in a fragmented file, and compaction may or may not make it contiguous. I also want to avoid moving data around (that is, copying from one file to another, then deleting the original).

You can, but the switch will happen only on compaction.
You can reduce the frequency if you have a lot of free space and do not mind keeping garbage around longer.

It’s not expected.
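For reference, the knobs discussed above would sit in the config file alongside the table-kind setting; the values here are purely illustrative, not recommendations:

```
# convert tables to memtbl on the next compaction
hashstore.table-default-kind: memtbl

# a log becomes a compaction candidate when the fraction of still-alive
# data in it drops below this value (lower = less frequent rewriting,
# but more garbage is kept around)
hashstore.compaction.alive-fraction: 0.1

# cap on the size of an individual hashstore log file
# (illustrative value: 1 GiB)
hashstore.compaction.max-log-size: 1073741824
```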

I remember reading that when the customer deletes a piece, the relevant portion of the storage file gets hole punched. Did that change or does this happen only during compaction?

No hole punching in the current implementation. And if there was one, it would happen at garbage collection/bloom filter time, or at expired piece collection time, not right after deleting a piece.


I migrated one of my nodes. Almost everything was migrated successfully, but I have two problems:

  • dashboard shows double the used space (1.21T, actual size on disk 665G)
  • I get several errors like this in logs:
2025-10-02T19:34:20Z    INFO    piecemigrate:chore      couldn't migrate   {"Process": "storagenode", "error": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)", "errorVerbose": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).migrateOne:335\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).processQueue:277\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).Run.func2:184\n\tstorj.io/common/errs2.(*Group).Go.func1:23", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "id": "7HCFYRYQ7ZZR2KA7VU3XLMMMARY5DZ25ZKV3UE4L7K32UX4676TQ"}

My hashstore/meta contents:

[22:35] user@server /storj/3/storage/storage/hashstore/meta
❯ cat *.migrate
{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}
{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}
{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}
{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}
[22:35] user@server /storj/3/storage/storage/hashstore/meta
❯ cat *.migrate_chore
truetruetruetrue⏎  
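Those flags can be sanity-checked programmatically. A small sketch, using the JSON format shown in the output above (the checker function itself is hypothetical, not part of the node software):

```python
import json

def all_migrate_flags_enabled(contents: str) -> bool:
    """Return True only if every flag in a .migrate JSON document is true."""
    flags = json.loads(contents)
    return bool(flags) and all(flags.values())

# Same shape as the *.migrate files shown above
sample = '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}'
print(all_migrate_flags_enabled(sample))  # → True
```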

Should I be concerned?