Hashstore Migration Guide

Some SNOs are going to aim for near-zero unpaid space… and tune their potato nodes to be IO-bound 24/7… then come complain here in the forum.

That’s why we can’t have nice things :wink:

That’s why I created the video :wink:

Just tried to make it clear: it’s not a must-have for everybody to learn the internals. It’s an option for enthusiasts…

And while I am trying to support this (I love details), I would like to make the defaults smart enough for average SNOs.


It seems that, on default settings, a node will accumulate in excess of 20% “unpaid space”, and this excess space is causing problems as nodes approach drive capacity.

I have one node (with one disk) that is often maxed out at 100% IO. It holds about 5.6 TB of data and uses memtable. It is partially migrated to hashstore, except that US1 data is still partially or mostly in piecestore blobs.

Anyway, I just started experimenting with
hashstore.compaction.probability-power and hashstore.compaction.alive-fraction in config.yaml to see if there is a difference.
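For reference, a sketch of how those two knobs look in config.yaml. The values shown are illustrative, not recommendations; check `storagenode setup --help` on your version for the actual defaults and exact semantics:

```yaml
# Hashstore compaction tuning (illustrative values, not recommendations).
# alive-fraction: log files whose share of still-alive data falls below this
#   threshold become candidates for rewriting during compaction.
# probability-power: exponent used when weighting candidate logs; higher values
#   bias compaction toward logs with more dead data.
hashstore.compaction.alive-fraction: 0.25
hashstore.compaction.probability-power: 2
```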

I have migrated two of my nodes, but my last node is almost 8 TB. Any way to speed up migration? It’s a 4-wide RAIDZ1 setup, so it can handle a bit more IOPS than a single disk.

I’ve set this flag, which helped a bit and increased throughput to about 40 MB/s:

  • --hashstore.store.flush-semaphore=8

You may add it to the wiki (the first post).
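For anyone running from config.yaml instead of command-line flags, the same setting can be expressed there (a sketch; the yaml key mirrors the flag name):

```yaml
# Allow more concurrent flushes during migration
# (equivalent to --hashstore.store.flush-semaphore=8 on the command line).
hashstore.store.flush-semaphore: 8
```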

I have a few questions about migrating to hashstore and what happens afterwards.

As described in my first post, I first activated passive migration and then active migration for one node.
After everything was migrated, I emptied the blobs folder. So far, so good.

Then I noticed that the node was showing about twice as much used space as it actually had.
As suggested, I stopped the node, deleted the used_space_per_prefix.db, and restarted the node.
Now the correct value seems to be there (I haven’t double-checked it yet, but it has roughly halved).
What I don’t understand, however, is why the used-space filewalker no longer starts.

The lazyfilewalker.trash-cleanup-filewalker runs as usual, but not the used space filewalker.
Here are the logs since node start:

Logs
2025-10-29T07:50:19+01:00	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode"}
2025-10-29T07:52:07+01:00	INFO	Configuration loaded	{"Process": "storagenode", "Location": "/app/config/config.yaml"}
2025-10-29T07:52:07+01:00	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2025-10-29T07:52:07+01:00	INFO	Operator email	{"Process": "storagenode", "Address": ""}
2025-10-29T07:52:07+01:00	INFO	Operator wallet	{"Process": "storagenode", "Address": ""}
2025-10-29T07:52:07+01:00	INFO	db	database does not exist	{"Process": "storagenode", "database": "used_space_per_prefix"}
2025-10-29T07:52:07+01:00	INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
2025-10-29T07:52:07+01:00	INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
2025-10-29T07:52:08+01:00	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "open_time": "1.238939303s"}
2025-10-29T07:52:12+01:00	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "open_time": "3.945428263s"}
2025-10-29T07:52:16+01:00	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "open_time": "3.537816873s"}
2025-10-29T07:52:16+01:00	INFO	hashstore	hashstore opened successfully	{"Process": "storagenode", "satellite": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "open_time": "206.780575ms"}
2025-10-29T07:52:16+01:00	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "1aPApHGYkk4LxpcPPjaWwPYEN3pJ54hjSsW3XRQsLVVXJ5229K"}
2025-10-29T07:52:16+01:00	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "1aPApHGYkk4LxpcPPjaWwPYEN3pJ54hjSsW3XRQsLVVXJ5229K"}
2025-10-29T07:52:16+01:00	INFO	db.migration.56	Create used_space_per_prefix db	{"Process": "storagenode"}
2025-10-29T07:52:16+01:00	INFO	db.migration.62	Add total_content_size, piece_counts, resume_point columns to used_space_per_prefix table	{"Process": "storagenode"}
2025-10-29T07:52:16+01:00	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 62}
2025-10-29T07:52:17+01:00	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	Node 1aPApHGYkk4LxpcPPjaWwPYEN3pJ54hjSsW3XRQsLVVXJ5229K started	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	Public server started on [::]:28968	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "5h21m9.014531218s"}
2025-10-29T07:52:17+01:00	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T07:52:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T07:52:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T07:52:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T07:52:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T07:52:17+01:00	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {"121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6": true, "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S": true, "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs": true, "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE": true}, "interval": "10m0s"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T07:52:17+01:00	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 0}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "dateBefore": "2025-10-22T08:52:17+02:00"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T07:52:17+01:00	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "elapsed": "28.3318ms"}
2025-10-29T07:52:17+01:00	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode", "dateBefore": "2025-10-22T08:52:17+02:00"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "bytesDeleted": 0, "numKeysDeleted": 0, "Process": "storagenode"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T07:52:17+01:00	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "elapsed": "24.163247ms"}
2025-10-29T07:52:17+01:00	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T07:52:17+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "dateBefore": "2025-10-22T08:52:17+02:00"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T07:52:18+01:00	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "elapsed": "25.700511ms"}
2025-10-29T07:52:18+01:00	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "dateBefore": "2025-10-22T08:52:18+02:00"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	Database started	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker.subprocess	trash-filewalker completed	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "bytesDeleted": 0, "numKeysDeleted": 0}
2025-10-29T07:52:18+01:00	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess finished successfully	{"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T07:52:18+01:00	INFO	pieces:trash	emptying trash finished	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "elapsed": "25.380133ms"}
2025-10-29T07:54:53+01:00	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Total Audits": 9478, "Successful Audits": 9259, "Audit Score": 1, "Online Score": 0.9806869358351201, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2025-10-29T07:54:54+01:00	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Total Audits": 68624, "Successful Audits": 68561, "Audit Score": 1, "Online Score": 0.9875744047619047, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2025-10-29T07:54:54+01:00	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Total Audits": 429075, "Successful Audits": 422108, "Audit Score": 1, "Online Score": 0.9866732239435354, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2025-10-29T07:54:54+01:00	INFO	reputation:service	node scores updated	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Total Audits": 354785, "Successful Audits": 351913, "Audit Score": 1, "Online Score": 0.9681614928579918, "Suspension Score": 1, "Audit Score Delta": 0, "Online Score Delta": 0, "Suspension Score Delta": 0}
2025-10-29T07:56:08+01:00	INFO	hashstore	beginning compaction	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "stats": {"NumLogs":60,"LenLogs":"42.6 GiB","NumLogsTTL":17,"LenLogsTTL":"0.7 GiB","SetPercent":0.8497073622821466,"TrashPercent":0.14837878436632704,"TTLPercent":0.008886938210603278,"Compacting":false,"Compactions":0,"Today":20390,"LastCompact":0,"LogsRewritten":0,"DataRewritten":"0 B","DataReclaimed":"0 B","DataReclaimable":"6.4 GiB","Table":{"NumSet":122530,"LenSet":"36.2 GiB","AvgSet":316852.66563290625,"NumTrash":25392,"LenTrash":"6.3 GiB","AvgTrash":266996.34530560806,"NumTTL":1376,"LenTTL":"387.2 MiB","AvgTTL":295096.5581395349,"NumSlots":262144,"TableSize":"16.0 MiB","Load":0.46741485595703125,"Created":20388,"Kind":0},"Compaction":{"Elapsed":0,"Remaining":0,"TotalRecords":0,"ProcessedRecords":0}}}
2025-10-29T07:56:08+01:00	INFO	hashstore	compact once started	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "today": 20390}
2025-10-29T07:56:08+01:00	INFO	hashstore	compaction computed details	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "nset": 107322, "nexist": 122919, "modifications": true, "curr logSlots": 18, "next logSlots": 18, "candidates": [71, 74, 68], "rewrite": [68, 74], "duration": "111.954172ms"}
2025-10-29T07:56:08+01:00	INFO	hashstore	records rewritten	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "records": 0, "bytes": "0 B", "duration": "7.876887ms"}
2025-10-29T07:56:09+01:00	INFO	hashstore	hashtbl rewritten	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "563.626959ms", "total records": 107322, "total bytes": "33.1 GiB", "rewritten records": 0, "rewritten bytes": "0 B", "trashed records": 1628, "trashed bytes": "0.9 GiB", "restored records": 0, "restored bytes": "0 B", "expired records": 15597, "expired bytes": "3.0 GiB", "reclaimed logs": 2, "reclaimed bytes": "373.9 MiB"}
2025-10-29T07:56:09+01:00	INFO	hashstore	compact once finished	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "684.938044ms", "completed": false}
2025-10-29T07:56:09+01:00	INFO	hashstore	compact once started	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "today": 20390}
2025-10-29T07:56:09+01:00	INFO	hashstore	compaction computed details	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "nset": 107322, "nexist": 107322, "modifications": false, "curr logSlots": 18, "next logSlots": 18, "candidates": [29, 73], "rewrite": [29], "duration": "13.232695ms"}
2025-10-29T07:56:17+01:00	INFO	hashstore	records rewritten	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "records": 1307, "bytes": "347.8 MiB", "duration": "8.089032807s"}
2025-10-29T07:56:17+01:00	INFO	hashstore	hashtbl rewritten	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "278.834277ms", "total records": 107322, "total bytes": "33.1 GiB", "rewritten records": 1307, "rewritten bytes": "347.8 MiB", "trashed records": 0, "trashed bytes": "0 B", "restored records": 0, "restored bytes": "0 B", "expired records": 0, "expired bytes": "0 B", "reclaimed logs": 1, "reclaimed bytes": "1.0 GiB"}
2025-10-29T07:56:17+01:00	INFO	hashstore	compact once finished	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "8.382556491s", "completed": false}
2025-10-29T07:56:17+01:00	INFO	hashstore	compact once started	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "today": 20390}
2025-10-29T07:56:17+01:00	INFO	hashstore	compaction computed details	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "nset": 107322, "nexist": 107322, "modifications": false, "curr logSlots": 18, "next logSlots": 18, "candidates": [], "rewrite": [], "duration": "13.8889ms"}
2025-10-29T07:56:17+01:00	INFO	hashstore	compact once finished	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "13.904121ms", "completed": true}
2025-10-29T07:56:17+01:00	INFO	hashstore	finished compaction	{"Process": "storagenode", "satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "store": "s0", "duration": "9.081601429s", "stats": {"NumLogs":58,"LenLogs":"41.5 GiB","NumLogsTTL":16,"LenLogsTTL":"390.7 MiB","SetPercent":0.7980442424923923,"TrashPercent":0.10478320705882307,"TTLPercent":0.01050443830025985,"Compacting":false,"Compactions":0,"Today":20390,"LastCompact":20390,"LogsRewritten":3,"DataRewritten":"347.9 MiB","DataReclaimed":"1.4 GiB","DataReclaimable":"8.4 GiB","Table":{"NumSet":107322,"LenSet":"33.1 GiB","AvgSet":331570.02426343155,"NumTrash":10638,"LenTrash":"4.4 GiB","AvgTrash":439206.4914457605,"NumTTL":1255,"LenTTL":"446.7 MiB","AvgTTL":373221.0741035857,"NumSlots":262144,"TableSize":"16.0 MiB","Load":0.40940093994140625,"Created":20390,"Kind":0},"Compaction":{"Elapsed":0,"Remaining":0,"TotalRecords":0,"ProcessedRecords":0}}}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {"121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6": true, "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S": true, "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs": true, "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE": true}, "interval": "10m0s"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {"12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S": true, "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs": true, "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE": true, "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6": true}, "interval": "10m0s"}

Here is also my docker-compose.

docker-compose
services:
    storj-node:
        image: storjlabs/storagenode:latest
        container_name: "storj-node${NODE_ID}"
        restart: unless-stopped
        stop_grace_period: 300s
        volumes:
            - type: bind
              source: "${NODE_DATA}"
              target: /app/config/storage
            - type: bind
              source: "${NODE_IDENTITY}"
              target: /app/identity
            - type: bind
              source: "${NODE_CONFIG}config.yaml"
              target: /app/config/config.yaml
            - type: bind
              source: "${NODE_ORDERS}"
              target: /app/config/orders
            - type: bind
              source: "${NODE_DB}"
              target: /app/config/db
            - type: bind
              source: "${NODE_LOGS}"
              target: /app/config/logs
        environment:
            - "EMAIL=${NODE_EMAIL}"
            - "ADDRESS=${IP}:${STORJ_PORT}"
            - "STORAGE=${NODE_STORAGE}"
            - "WALLET=${NODE_WALLET}"
            - "SETUP=false"
            - "AUTO_UPDATE=true"
            - "TZ=Europe/Berlin"
        command:
            - --log.output="/app/config/logs/node.log"
            - --storage2.database-dir=/app/config/db
            - --storage2.orders.path=/app/config/orders
            - --server.address=":${STORJ_PORT}"
#            - --pieces.enable-lazy-filewalker=false
#            - --storage2.piece-scan-on-startup=false
            - --log.custom-level=piecestore=WARN # since version 1.99
            - --debug.addr=":6000"

Do you have any idea what the problem is?

Oh, and now I get these entries every 10 minutes:

enqueued for migration
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T08:02:17+01:00	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {"121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6": true, "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S": true, "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs": true, "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE": true}, "interval": "10m0s"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	enqueued for migration	{"Process": "storagenode", "sat": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2025-10-29T08:12:17+01:00	INFO	piecemigrate:chore	all enqueued for migration; will sleep before next pooling	{"Process": "storagenode", "active": {"12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S": true, "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs": true, "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE": true, "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6": true}, "interval": "10m0s"}

The migration is complete. Do I now have to undo the entries as described in the first post?


There is no need for it now that the node is using hashstore. Deleted blobs (located in the trash folder) are not converted, so they will hang around until the node deletes them (up to 7 days after conversion).

Yes, but only the four *.migrate_chore files. Set them back to false.
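A sketch of how that could be scripted, assuming the chore flags are small plain-text files containing true under hashstore/meta (the path below is a guess based on the compose mounts in this thread; verify the location and file contents on your own node before running anything):

```shell
#!/bin/sh
# Reset hashstore migration chore flags back to "false".
reset_migrate_chores() {
    meta_dir="$1"
    for f in "$meta_dir"/*.migrate_chore; do
        [ -e "$f" ] || continue              # no matching files: nothing to do
        printf '%s was: %s\n' "$f" "$(cat "$f")"
        sed -i 's/true/false/' "$f"          # flip the flag in place (GNU sed)
    done
}

# Hypothetical default path; adjust to your node's storage directory.
reset_migrate_chores "${META_DIR:-/app/config/storage/hashstore/meta}"
```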


Is it necessary? I didn’t change them back (didn’t know I should) and the node looks to me like it’s working. What’s the harm in leaving them as they are?

Not necessary. It just stops a log entry every 10 minutes about queuing migration, and an IO spike as the node searches the blobs folder for anything left to migrate.


Are you sure that the used space filewalker is no longer needed?
Then the used disk space should match the dashboard, but it doesn’t.

du -h --si --max-depth=2
4.1k	./blobs
4.1k	./trash/v4weeab67sbgvnbwd5z7tweqsqqun7qox2agpbxy44mqqaaaaaaa
4.1k	./trash/qstuylguhrn2ozjv4h2c6xpxykd622gtgurhql2k7k75wqaaaaaa
4.1k	./trash/ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa
4.1k	./trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa
25k	./trash
6.6M	./hashstore/meta
2.2T	./hashstore/12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S
855G	./hashstore/12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs
74G	./hashstore/121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6
59G	./hashstore/1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE
3.2T	./hashstore
4.1k	./temp
3.2T	.

My trash folders have been empty for a long time, so this filewalker is no longer needed.

Or am I fundamentally misunderstanding something here?

Yes, it’s now understood that hashstore does consume additional amounts of storage. See this discussion: Single node and multinode dashboards shows different values after migration to hashstore


Okay, but if I understand correctly, we’re looking at a difference of about 200 GB for a node that has 2.83 TB of data + 170 GB of trash, which is quite a lot.

There is not much we can do about migration speed. Even on full-SSD nodes, throughput was not higher.

It’s not linear.

Likely it’s not-yet-compacted garbage; trashed data has an expiration date, and after that date it becomes eligible for compaction too.

My nodes on default settings are running at roughly 20% overhead above what the node reports.
A node that I cleaned out with aggressive compaction settings is slowly growing again after returning to the default settings.

Yes, I assume so, but I have taken the 170 GB of trash into account.
du --si says that the node occupies 3.2 TB.
The node says it occupies 2.83 TB and has 0.17 TB of trash. That leaves another 0.2 TB that is simply occupied, which I don’t understand.

  3.2
- 2.83
- 0.17
= 0.2
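The bookkeeping above as a quick shell check (the numbers are the ones from this node; awk just does the subtraction and expresses the gap as a share of reported data):

```shell
#!/bin/sh
# Difference between on-disk usage (du) and what the node reports.
du_total=3.2        # TB, from du --si
reported=2.83       # TB, node dashboard "used"
trash=0.17          # TB, node dashboard "trash"

overhead=$(awk -v d="$du_total" -v r="$reported" -v t="$trash" \
    'BEGIN { printf "%.2f", d - r - t }')
pct=$(awk -v o="$overhead" -v r="$reported" \
    'BEGIN { printf "%.0f", 100 * o / r }')
echo "unaccounted: ${overhead} TB (~${pct}% of reported data)"
```

So the gap here works out to roughly 7% of the reported data.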

So the node occupies 200 GB more than it reports. Is that intentional? Is it due to technical reasons or is it an error?

This is explained there:

OK, thank you. Wouldn’t it be a good idea to let the filewalker determine the actual size occupied on the file system? Because otherwise the dashboard always displays incorrect values.

I noticed something else, and I read somewhere that this is not good.
I use Portainer to manage the node stacks. When I stop the node, I see the following entry in the logs:
Got a signal from the OS: "terminated"
However, when the node is performing an update, for example, it says:
Got a signal from the OS: "interrupt"

How can I get Portainer to stop the node with an "interrupt" as well?

It’s not used for hashstore, and it will not be fixed. You may use a multinode dashboard; it will show the free space that your node is reporting to the satellites.

The used space should correct itself after a while, especially since you removed the prefixes database and restarted the node.

You need to set the stop timeout to 300s; the clause under the service is stop_grace_period.
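As a sketch, the relevant compose keys would look like this. stop_grace_period controls how long Docker waits before SIGKILL; stop_signal is an additional assumption on my part — Compose supports it, but check whether the storagenode entrypoint handles SIGINT the way you expect:

```yaml
services:
    storj-node:
        # Give the node up to 300s to shut down cleanly before SIGKILL.
        stop_grace_period: 300s
        # Optionally send SIGINT ("interrupt") instead of the default SIGTERM.
        stop_signal: SIGINT
```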