Hashstore Migration Guide

Then I’ll wait a week or two and see if the deviation decreases. Thank you.

I already have this in Docker Compose, so am I also safe with “terminated”?

services:
    storj-node:
        image: storjlabs/storagenode:latest
        container_name: "storj-node${NODE_ID}"
        restart: unless-stopped
        stop_grace_period: 300s
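
For reference, if you run the node with a plain docker run command instead of Compose, the equivalent of stop_grace_period is the --stop-timeout flag. The snippet below is only a sketch of that one flag; the image name is the one used above, and the usual mounts, identity and port flags are omitted:

# Sketch: give the container 300 seconds to shut down gracefully, like stop_grace_period above.
docker run -d --stop-timeout 300 \
    --restart unless-stopped \
    --name storj-node \
    storjlabs/storagenode:latest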

+1 for the answer from Alexey, just adding some more details:

One of the benefits of hashstore is eliminating all the walkers (the other one is speed, as it can work on ext4, even with a high number of pieces).

As we have a dedicated “metadata database” with all piece IDs and sizes, it’s enough to check it for the used size.

Also: as the records are spread across the database randomly, technically the code just checks the beginning of the database and estimates the full used space from that sample.

But this is not the “used space”, this is the “useful space”: the sum of the pieces that we store.

We have another metric: the size of the log files on the disks.

The overhead on Select is usually 3-5% (we used STORJ_HASHSTORE_COMPACTION_ALIVE_FRACTION=0.6 and recently bumped it to 0.7, but usually it’s too high). But the structure of data on Select is different (e.g. TTL vs non-TTL ratio).
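
If you run your node in Docker and want to experiment with that compaction setting, here is a minimal sketch. The flag name is the one quoted above; the 0.7 value is only the figure mentioned there, not a recommendation, and all the usual mounts, identity and port flags are omitted:

# Sketch: pass the compaction alive-fraction to a Docker-based node.
# Keep your existing mounts/identity/ports; only the environment variable is shown here.
docker run -d \
    -e STORJ_HASHSTORE_COMPACTION_ALIVE_FRACTION=0.7 \
    --name storagenode \
    storjlabs/storagenode:latest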

In case you use Grafana / Prometheus, this is how we monitor the overhead:

1-(sum by(environment_name, server_group) (hashstore{environment_name="${environment_name}", field="LenSet", db!="s0", db!="s1"}) / sum by(environment_name, server_group) (hashstore{environment_name="${environment_name}", field="LenLogs", db!="s0", db!="s1"}))
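
To read the result: assuming LenSet is the total size of the alive pieces in the hash tables and LenLogs the total size of the log files on disk, the query is 1 minus the alive ratio. As a purely illustrative example, 700 GB of alive pieces sitting in 1 TB of log files would give 1 - 0.7 = 0.3, i.e. about 30% of the space on disk is dead bytes waiting for the next compaction.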

But back to your question:

TL;DR:

  1. No, we don’t need the walkers for hashstore
  2. But we can fix the UI. I agree with you, it can be confusing as it doesn’t show the details of the dead bytes which can be deleted by the next compaction. I think this category should also be added.

Just created a backlog item: Display dead bytes of Hashstore on Storagenode console · Issue #7682 · storj/storj · GitHub


May I ask when the feature will be enabled by default, or do I have to do the migration manually like this?

It has happened, but only as a passive option - to store new pieces in hashstore by default. The rollout was suspended due to a memory overusage problem on Windows. That has since been fixed and it helped. I do not have any information on whether the rollout will continue or not.
But you may do it yourself any time. It will happen eventually.

We will continue the rollout after the Windows fix is rolled out. The last bump happened yesterday.

Current values (for WriteToNew):

SLC: 100 %
AP1: 100 %
EU: 25 %
US: 5 %

Full 100% here would mean that it’s the default for all new nodes.

Active migration has not yet been started. (And I would prefer to implement a few usability improvements first.)

Personally I hope that we can fully switch to hashstore by the end of 2026 Q1 or Q2. But nothing has been decided, yet.


The Windows fix was rolled out in 1.139.6; according to version.storj.io it’s at 100% already.

So do I need to do something like updating the config manually, or can I just leave it?

No, it’s not required. Or you may do it - it’s up to you.

I have 3 files remaining after migration:

 ⚡  /m/s/d/s/blobs  find . -type f -size +0
./qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/nc/zkauvmzzv5zljdckhxpsdec4675slckxexdlt6iyxoxamdpisq.sj1
./qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa/l7/slnnmbpuxhk6trimna2gyhh32bpexoqs3emu7phgz5mdhx6i5a.sj1
./ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa/3l/hc4f2pwzvpoj6m5wzj32tj5sqfxybdpat7fr7lbpcvz5d44dia.sj1

Are these just corruption in my node? Shall I delete them? What’s the worst that can happen?

Yes. It is safe to delete everything in the blobs folder, but not the folder itself.

Most likely those files are not even registered with the satellite. And even if they are registered, they are damaged and will fail audits. So there is no reason to keep those files.


Can you run both active & passive migration at the same time? I’ve got both enabled and whenever a download/upload is hit it always logs “piecestore download started”.

They are both running. I believe that log entry is common to both backends.


Thank you for clearing that up!

Is there an easy way to know once migration has completed on my node?

Yes. You won’t see log entries with “migrated xxx pieces” anymore.
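
If you prefer to check from the command line, here is a rough sketch. It assumes a Docker container named storagenode and counts recent lines containing the word “migrated”, as in the log entries mentioned above; a persistent zero suggests the migration has finished:

# Placeholder container name; counts migration-progress lines from the last 24 hours.
docker logs --since 24h storagenode 2>&1 | grep -c "migrated"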


Hmm, it looks like it has finished, but I’m still getting errors like this:

2025-11-19T14:46:21Z    INFO    piecemigrate:chore      enqueued for migration  {"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-11-19T14:47:33Z    INFO    piecemigrate:chore      couldn't migrate        {"Process": "storagenode", "error": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)", "errorVerbose": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).migrateOne:335\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).processQueue:277\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).Run.func2:184\n\tstorj.io/common/errs2.(*Group).Go.func1:23", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "id": "DARLS3PXUQBLHVO2BGAH446E2ZVKHBCMJCIIYHYWLTPLVZ2OGJVA"}
2025-11-19T14:48:57Z    INFO    piecemigrate:chore      couldn't migrate        {"Process": "storagenode", "error": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)", "errorVerbose": "opening the old reader: pieces error: invalid piece file for storage format version 1: too small for header (0 < 512)\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).migrateOne:335\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).processQueue:277\n\tstorj.io/storj/storagenode/piecemigrate.(*Chore).Run.func2:184\n\tstorj.io/common/errs2.(*Group).Go.func1:23", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "id": "GSBLIGLO6DYPKMEMJCTGZAVUYZVVK6YM6M32BRAUWP4MR6AXHGPA"}
2025-11-19T14:58:22Z    INFO    piecemigrate:chore      enqueued for migration  {"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-11-19T14:59:30Z    INFO    piecemigrate:chore      enqueued for migration  {"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}

What should I do? Is it safe to delete the files inside of the blobs folder?

It looks like you have a few corrupt / empty (0-size) files left.
Generally you are safe to delete the piecestore folders and 0-size files after the migration has completed, but not the top-level satellite folders.

Only the empty blobs folder must be kept. Anything inside can be deleted.
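
If you want to locate those leftovers from the shell, a minimal sketch follows. The /app/config/storage path is only a placeholder for wherever your storage directory actually lives, and as noted above, only do this after the migration has finished and leave the blobs folder itself in place:

# Placeholder path - adjust to your node's storage directory.
# Lists empty leftover piece files under blobs; append -delete once you are happy with the list.
find /app/config/storage/blobs -type f -empty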


Hello!
I am running three nodes under Synology Docker (10TB, 10TB, 6TB).
Each node has its own ext4 disk, and the DBs and badger cache are on a RAID1 SSD array.
I don’t want to mess it up, so please help me switch to the hashstore system.
My main goal is to stop using the SSDs without losing performance, so that I can use those drives for something else.

There is 32GB of memory in the Synology and I also use it for other purposes, so I can only allow higher memory usage for one node.

Please help me with what I need to add to my current Docker Compose and plain docker command-line configurations.
Is my plan possible without the SSDs and without reducing my income?

Thank you

You stress too much about performance. I used to run 2 nodes of 7TB each on a Synology with 1GB RAM. All files were on the storage drives, no need for an SSD.
You can activate the hashstore following the official guide and, once migrated, deactivate the badger cache and move the DBs back to the storage drives.

…or just unwind the db+badger now… and do nothing. Storj will initiate the hashstore migration from their side when they feel the time is right.