[Tech Preview] Hashstore backend for storage nodes

I think he’s just talking about the setting in the .migrate_chore file for active migration, and not the other .migrate file that has the options for passive migration, WriteToNew, TTL, etc.

Also, is anybody seeing their hashstore node still showing trash even after a trash restore?

2 Likes

I only deleted the .migrate_chore files and restarted the node; the rest of the settings were left enabled.
But as you are saying, don’t touch it after it has been enabled :slight_smile:.

1 Like

Rolled back from v1.123.4 to v1.122.10. The disks were busy with compactions almost 24/7. I had moved to hashstore to avoid this.

Also, are such collisions normal?

2025-02-12T19:23:37+03:00       ERROR   piecestore      upload failed   {"Process": "storagenode", "Piece ID": "2NZDZSG53KNML5OXXVTEC73QNUVCFCFJ333YRIOHRC6O72UBHBGQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT_REPAIR", "Remote Address": "XXXXXX:XXXX", "Size": 35584, "error": "hashstore: put:{key:d3723cc8ddda9ac5f5d7bd66417f706d2a2288a9def788a1c788bcefea81384d offset:122041600 log:973 length:36096 created:20131 (2025-02-12) expires:0 (1970-01-01) trash:false} != exist:{key:d3723cc8ddda9ac5f5d7bd66417f706d2a2288a9def788a1c788bcefea81384d offset:207766080 log:61 length:36096 created:20122 (2025-02-03) expires:20135 (2025-02-16) trash:true}: collision detected", "errorVerbose": "hashstore: put:{key:d3723cc8ddda9ac5f5d7bd66417f706d2a2288a9def788a1c788bcefea81384d offset:122041600 log:973 length:36096 created:20131 (2025-02-12) expires:0 (1970-01-01) trash:false} != exist:{key:d3723cc8ddda9ac5f5d7bd66417f706d2a2288a9def788a1c788bcefea81384d offset:207766080 log:61 length:36096 created:20122 (2025-02-03) expires:20135 (2025-02-16) trash:true}: collision detected\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).Insert:466\n\tstorj.io/storj/storagenode/hashstore.(*Store).addRecord:380\n\tstorj.io/storj/storagenode/hashstore.(*Writer).Close:309\n\tstorj.io/storj/storagenode/piecestore.(*hashStoreWriter).Commit:341\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func6:435\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:507\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}
2025-03-03T12:12:08+03:00       ERROR   piecestore      upload failed   {"Process": "storagenode", "Piece ID": "44R7WHUWY34OUL7QFHFIZKA5J3GO6K5ZXDYJQUYI356WBH3IJKKA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT_REPAIR", "Remote Address": "XXXXXX:XXXX", "Size": 1280, "error": "hashstore: put:{key:e723fb1e96c6f8ea2ff029ca8ca81d4eccef2bb9b8f0985308df7d609f684a94 offset:289752768 log:505 length:1792 created:20150 (2025-03-03) expires:0 (1970-01-01) trash:false} != exist:{key:e723fb1e96c6f8ea2ff029ca8ca81d4eccef2bb9b8f0985308df7d609f684a94 offset:177896384 log:264 length:1792 created:20143 (2025-02-24) expires:0 (1970-01-01) trash:false}: collision detected", "errorVerbose": "hashstore: put:{key:e723fb1e96c6f8ea2ff029ca8ca81d4eccef2bb9b8f0985308df7d609f684a94 offset:289752768 log:505 length:1792 created:20150 (2025-03-03) expires:0 (1970-01-01) trash:false} != exist:{key:e723fb1e96c6f8ea2ff029ca8ca81d4eccef2bb9b8f0985308df7d609f684a94 offset:177896384 log:264 length:1792 created:20143 (2025-02-24) expires:0 (1970-01-01) trash:false}: collision detected\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).Insert:492\n\tstorj.io/storj/storagenode/hashstore.(*Store).addRecord:393\n\tstorj.io/storj/storagenode/hashstore.(*Writer).Close:300\n\tstorj.io/storj/storagenode/piecestore.(*hashStoreWriter).Commit:357\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload.func6:434\n\tstorj.io/storj/storagenode/piecestore.(*Endpoint).Upload:506\n\tstorj.io/common/pb.DRPCPiecestoreDescription.Method.func1:294\n\tstorj.io/drpc/drpcmux.(*Mux).HandleRPC:33\n\tstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC:62\n\tstorj.io/common/experiment.(*Handler).HandleRPC:43\n\tstorj.io/drpc/drpcserver.(*Server).handleRPC:166\n\tstorj.io/drpc/drpcserver.(*Server).ServeOne:108\n\tstorj.io/drpc/drpcserver.(*Server).Serve.func2:156\n\tstorj.io/drpc/drpcctx.(*Tracker).track:35"}

Here is now the result of that internal discussion: https://review.dev.storj.io/c/storj/storj/+/16517

The current solution is to make the hashtable small enough to keep it in memory. That should give us even better performance.

4 Likes

How much RAM is needed per TB of storage to keep the tables in memory?
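
No official figure is given in this thread, but a rough, hedged back-of-envelope estimate can be derived from the compaction log excerpt further down (8,388,608 slots taking a 512.0 MiB table, i.e. 64 bytes per record, describing 572.7 GiB of pieces, so an average piece size of roughly 230 KB):

package main

import "fmt"

// Back-of-envelope estimate only, using the numbers from the compaction log
// excerpt later in this thread; actual RAM usage depends on the piece size
// distribution and on how full the hashtable is.
func main() {
	const (
		numRecords   = 2676991 // "NumSet" from the log
		storedGiB    = 572.7   // "LenSet" from the log
		numSlots     = 8388608 // "NumSlots" from the log
		tableSizeMiB = 512.0   // "TableSize" from the log
	)

	bytesPerSlot := tableSizeMiB * 1024 * 1024 / numSlots // works out to 64 bytes per slot
	storedTiB := storedGiB / 1024

	// Upper bound: the whole allocated table held in memory.
	allocatedGiBPerTiB := tableSizeMiB / 1024 / storedTiB
	// Lower bound: only the occupied records.
	occupiedGiBPerTiB := numRecords * bytesPerSlot / (1024 * 1024 * 1024) / storedTiB

	fmt.Printf("bytes per slot: %.0f\n", bytesPerSlot)
	fmt.Printf("allocated table: ~%.2f GiB per TiB stored\n", allocatedGiBPerTiB)
	fmt.Printf("occupied records: ~%.2f GiB per TiB stored\n", occupiedGiBPerTiB)
}

So, for that particular node, with pieces averaging around 230 KB, the table works out to very roughly 0.3-0.9 GiB per TiB stored; nodes with smaller average piece sizes would need proportionally more.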

I really like how the whole thought process is laid out in easy to understand words. Thank you Jeff :slight_smile:

1 Like

1.124.0-rc is not working on my Windows test node. Switched back to 1.123.4.

2025-03-05T19:37:48+01:00       ERROR   failure during run      {"error": "Failed to create storage node peer: hashstore: invalid header checksum: 0 != fb79c4b542a56ea2\n\tstorj.io/storj/storagenode/hashstore.ReadTblHeader:87\n\tstorj.io/storj/storagenode/hashstore.OpenHashtbl:132\n\tstorj.io/storj/storagenode/hashstore.NewStore:250\n\tstorj.io/storj/storagenode/hashstore.New:94\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:598\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78", "errorVerbose": "Failed to create storage node peer: hashstore: invalid header checksum: 0 != fb79c4b542a56ea2\n\tstorj.io/storj/storagenode/hashstore.ReadTblHeader:87\n\tstorj.io/storj/storagenode/hashstore.OpenHashtbl:132\n\tstorj.io/storj/storagenode/hashstore.NewStore:250\n\tstorj.io/storj/storagenode/hashstore.New:94\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:248\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:114\n\tstorj.io/storj/storagenode.New:598\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78\n\tmain.cmdRun:86\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.cleanup.func1.4:392\n\tstorj.io/common/process.cleanup.func1:410\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:112\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/common/process.ExecWithCustomConfig:72\n\tstorj.io/common/process.Exec:62\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}

Fix incoming: https://review.dev.storj.io/c/storj/storj/+/16580

This can happen from time to time. There was a chain of commits: some easy commits at the beginning of the chain got merged early on, but not the last, more complex commit of that chain, which takes more time to finish. In the meantime this fix will get us back to a releasable state.

2 Likes

This:

# how long to wait between pooling satellites for active migration
storage2migration.interval: 1h0m0s

only applies to unmigrated or fully migrated nodes, meaning this line shows up on the configured interval:

2025-03-06T13:59:20Z    INFO    piecemigrate:chore      enqueued for migration  {"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-03-06T14:59:14Z    INFO    piecemigrate:chore      enqueued for migration  {"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}

But the interval is ignored while an active migration is in progress:

2025-03-06T13:21:38Z    INFO    piecemigrate:chore      processed a bunch of pieces     {"Process": "storagenode", "successes": 930000, "size": 172538733056}
2025-03-06T13:31:57Z    INFO    piecemigrate:chore      processed a bunch of pieces     {"Process": "storagenode", "successes": 940000, "size": 174328501248}

In the meantime my node has made it through all the compactions. There was an important code change that you might not want to miss.

The old behavior was:
Only compact log files with at least 25% garbage in them. → Most of the time there is nothing to compact, and one day the node might hit a big compaction wall and need to rewrite all the log files at once.

The new behavior is:
Treat compaction more as a resource spent over time. Order the log files by the amount of garbage and then use a price function to determine how many log files we compact this time. Unlike the old behavior this will always compact some log files, with the advantage that it brings down the total overhead.

The transition from the old to the new behavior is a bit painful, because the log files the old implementation ignored are now a reason for the new implementation to freak out. Holy shit, what have I done? Why are there so many log files with a relatively high amount of garbage in them? Under my watch that is not supposed to happen, so let's fix it. Once the compaction job has worked through those, it will calm down and keep the compactions running in a more steady state.
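
To make the idea concrete, here is a toy sketch of that selection logic (the names and the budget heuristic are made up for illustration; this is not the actual storagenode code):

package main

import (
	"fmt"
	"sort"
)

type logFile struct {
	id      int
	size    int64 // total bytes in the log file
	garbage int64 // bytes belonging to deleted or expired pieces
}

// pickForRewrite sorts the logs by garbage fraction (worst first), always
// selects at least the worst one, and keeps selecting more as long as the
// live bytes that would have to be copied stay within rewriteBudget.
func pickForRewrite(logs []logFile, rewriteBudget int64) []logFile {
	sort.Slice(logs, func(i, j int) bool {
		return float64(logs[i].garbage)/float64(logs[i].size) >
			float64(logs[j].garbage)/float64(logs[j].size)
	})

	var selected []logFile
	var spent int64
	for i, lf := range logs {
		live := lf.size - lf.garbage
		if i > 0 && spent+live > rewriteBudget {
			break
		}
		selected = append(selected, lf)
		spent += live
	}
	return selected
}

func main() {
	logs := []logFile{
		{id: 1, size: 1 << 30, garbage: 700 << 20},
		{id: 2, size: 1 << 30, garbage: 50 << 20},
		{id: 3, size: 1 << 30, garbage: 300 << 20},
	}
	// Spend at most ~1 GiB of rewrites per compaction pass.
	for _, lf := range pickForRewrite(logs, 1<<30) {
		fmt.Println("would rewrite log", lf.id)
	}
}

Unlike a fixed 25% threshold, something like this always makes a bit of progress each pass, which is why a freshly converted node may first chew through a backlog of garbage-heavy log files and only then settle into a steady trickle.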

7 Likes

Sooo… we do nothing and let the node do its thing, I suppose.

1 Like

Hello out there!

I have a 10TB node (5.3TB used, 4.4TB on average - these are not equal, although I have had around 5TB for 2 months, but that is another topic).

I searched the forum for migrating to hashstore, but I have only found a tutorial from last year (the first post here). Is there a newer version, or some well documented commands I can execute? I am running Storj in Docker on a Linux machine.

What is better: having all new incoming files stored in the hashstore, or converting the full node?

I am using 1.123.4.

Do I only need to change all 4 values in the .migrate files to true

{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}

or do I need to enable something in the docker start command?

The first post is the tutorial. You don’t need to change anything in the config or the docker run command.
Run the commands and restart the node.

# Change this path to match your storage node location
cd /mnt/hdd1p1/Storj1/storage/hashstore/meta/

echo '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}' > 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate
echo '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}' > 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate
echo '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}' > 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate
echo '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}' > 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate

echo -n 'true' > 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate_chore
echo -n 'true' > 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate_chore
echo -n 'true' > 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate_chore
echo -n 'true' > 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate_chore

Check the progress in the logs (info level):

docker logs storagenode 2>&1 | grep "piecemigrate:chore"

When the migration finishes after a month or so, stop the node, delete all databases, and restart the node to correct the dashboard.

2 Likes

You don’t really need to delete the databases. I just restarted my node so the filewalker could run. It corrected all the mistakes and the dashboard showed the correct values again.

1 Like

I keep seeing this in my log, for a fully migrated 1.5TB node, ver. v1.123.4, Ubuntu+Docker, 32GB RAM, 1 node. What does it do, and why does it do it non-stop for the last 30 min? My drive is crunching endlessly.

2025-03-11T20:49:10Z    INFO    hashstore       hashtbl rewritten       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "total records": 2031487, "total bytes": "459.4 GiB", "rewritten records": 3485, "rewritten bytes": "610.6 MiB", "trashed records": 0, "trashed bytes": "0 B", "restored records": 0, "restored bytes": "0 B", "expired records": 0, "expired bytes": "0 B"}
2025-03-11T20:49:10Z    INFO    hashstore       compact once finished   {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "duration": "32.446330339s", "completed": false}
2025-03-11T20:49:10Z    INFO    hashstore       compact once started    {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "today": 20158}
2025-03-11T20:49:10Z    INFO    hashstore       compaction computed details     {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "nset": 2031487, "nexist": 2031487, "modifications": false, "curr logSlots": 22, "next logSlots": 22, "candidates": [129, 75, 212, 100, 155, 183, 151, 172, 243, 205, 28, 226, 53, 152, 301, 222, 74, 238, 189, 63, 245, 159, 150, 146, 162, 160, 295, 213, 302, 191, 49, 107, 85, 248, 169, 135, 122, 156, 166, 81, 261], "rewrite": [63], "duration": "335.165545ms"}
2025-03-11T20:49:31Z    INFO    hashstore       hashtbl rewritten       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "total records": 2031487, "total bytes": "459.4 GiB", "rewritten records": 2416, "rewritten bytes": "615.2 MiB", "trashed records": 0, "trashed bytes": "0 B", "restored records": 0, "restored bytes": "0 B", "expired records": 0, "expired bytes": "0 B"}
2025-03-11T20:49:31Z    INFO    hashstore       compact once finished   {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "duration": "20.537926123s", "completed": false}
2025-03-11T20:49:31Z    INFO    hashstore       compact once started    {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "today": 20158}
2025-03-11T20:49:31Z    INFO    hashstore       compaction computed details     {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "nset": 2031487, "nexist": 2031487, "modifications": false, "curr logSlots": 22, "next logSlots": 22, "candidates": [162, 53, 155, 18, 89, 144, 291, 166, 202, 61, 98, 49, 59, 5, 272, 207, 47, 172, 107, 39, 110, 102, 147, 142, 213, 85, 194, 22, 25, 31, 13, 77, 148], "rewrite": [272], "duration": "327.361028ms"}
2025-03-11T20:50:04Z    INFO    hashstore       hashtbl rewritten       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "total records": 2031487, "total bytes": "459.4 GiB", "rewritten records": 3339, "rewritten bytes": "0.7 GiB", "trashed records": 0, "trashed bytes": "0 B", "restored records": 0, "restored bytes": "0 B", "expired records": 0, "expired bytes": "0 B"}
2025-03-11T20:50:04Z    INFO    hashstore       compact once finished   {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "duration": "33.470132945s", "completed": false}
2025-03-11T20:50:04Z    INFO    hashstore       compact once started    {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "today": 20158}
2025-03-11T20:50:04Z    INFO    hashstore       compaction computed details     {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "nset": 2031487, "nexist": 2031487, "modifications": false, "curr logSlots": 22, "next logSlots": 22, "candidates": [89, 161, 223, 147, 97, 209, 81, 110, 30, 190, 159, 222, 58, 166, 22, 13, 135, 265, 150, 50, 87, 221, 14, 90, 114, 189, 167, 59, 253, 164, 104, 191, 151, 256, 210, 18, 98, 169, 208, 74, 92, 125, 119], "rewrite": [13], "duration": "324.498437ms"}
2025-03-11T20:50:22Z    INFO    hashstore       hashtbl rewritten       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "total records": 2031487, "total bytes": "459.4 GiB", "rewritten records": 2554, "rewritten bytes": "609.8 MiB", "trashed records": 0, "trashed bytes": "0 B", "restored records": 0, "restored bytes": "0 B", "expired records": 0, "expired bytes": "0 B"}

It started here:

2025-03-11T20:20:10Z    INFO    hashstore       beginning compaction    {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "stats": {"NumLogs":613,"LenLogs":"573.3 GiB","NumLogsTTL":78,"LenLogsTTL":"44.7 GiB","SetPercent":0.9990557414583201,"TrashPercent":0.2275130766712878,"Compacting":false,"Compactions":1,"TableFull":0,"Today":20158,"LastCompact":20154,"LogsRewritten":17,"DataRewritten":"0 B","Table":{"NumSet":2676991,"LenSet":"572.7 GiB","AvgSet":229717.37643944265,"NumTrash":570824,"LenTrash":"130.4 GiB","AvgTrash":245332.5526887447,"NumSlots":8388608,"TableSize":"512.0 MiB","Load":0.31912219524383545,"Created":20154},"Compaction":{"Elapsed":0,"Remaining":0,"TotalRecords":0,"ProcessedRecords":0}}}
2025-03-11T20:23:46Z    INFO    hashstore       compaction acquired locks       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "duration": "3m36.057967936s"}
2025-03-11T20:23:46Z    INFO    hashstore       compact once started    {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "today": 20158}
2025-03-11T20:23:48Z    INFO    hashstore       compaction computed details     {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "nset": 2031487, "nexist": 2676995, "modifications": true, "curr logSlots": 23, "next logSlots": 22, "candidates": [130, 91, 201, 134, 443, 291, 154, 473, 432, 274, 118, 180, 142, 125, 186, 141, 262, 290, 287, 382, 53, 279, 188, 298, 181, 254, 16, 177, 13, 74, 239, 272, 212, 46, 8, 29, 381, 413, 119, 371, 374, 455, 407, 45, 461, 406, 172, 168, 442, 100, 300, 422, 463, 209, 56, 408, 18, 253, 173, 373, 48, 128, 68, 232, 257, 467, 22, 167, 395, 409, 171, 109, 375, 431, 472, 434, 196, 59, 372, 265], "rewrite": [375, 382, 455, 407, 467, 395, 128, 372, 46, 381, 473, 239, 374, 463, 408, 472, 443, 461, 371, 413, 279, 406, 422, 373, 431, 434, 442, 409, 432], "duration": "2.235317701s"}
2025-03-11T20:23:51Z    INFO    hashstore       hashtbl rewritten       {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "total records": 2031487, "total bytes": "459.4 GiB", "rewritten records": 0, "rewritten bytes": "0 B", "trashed records": 464026, "trashed bytes": "42.6 GiB", "restored records": 0, "restored bytes": "0 B", "expired records": 645508, "expired bytes": "113.1 GiB"}
2025-03-11T20:23:54Z    INFO    hashstore       compact once finished   {"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "store": "s1", "duration": "7.644854218s", "completed": false}

Edit: Oh, I see someone else stumbled upon this and littleskunk offered the explanation. Now I understand what he was referring to.
Finally finished after 2 hours.

1 Like

Should two commands be given for each satellite?

echo '{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}' > 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate
echo -n 'true' > 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate_chore

Together, or the second one after the first one has finished?

How do I activate the migration on Windows? If I have multiple nodes on Windows, do I need to somehow take the private address of each node into account?
Not Docker.

Yes, the first one is for passive migration - to migrate data on read, I believe, and to use the new backend for reads and writes.
The other one is to actively migrate the data from the old backend to the new one.

You need both if you plan to do an active migration. To be more precise, these commands only create/modify configuration files, so they will complete immediately, but for active migration both files have to be present.
And you can safely do both for all the satellites at once, as it will migrate one satellite at a time, as I understand it.
Also, you will have to restart the node after those files are in place, I think (at least I did).

2 Likes

I would say it is the same on Windows.
You need to create/edit the files in the /data/hashstore/meta directory.

Files:

121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate
12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate
12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate
1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate

Content:

{"PassiveMigrate":true,"WriteToNew":true,"ReadNewFirst":true,"TTLToNew":true}


Files:

121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6.migrate_chore
12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S.migrate_chore
12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs.migrate_chore
1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE.migrate_chore

Content:

true
2 Likes

And it will start to work after a restart?