Write-hashtbl hashstore: collision detected

Hi, I’m using write-hashtbl and ran into a new situation:

hashstore: put:{key:1b1bbd32f020502e52e839717e13a486ae0b9f6eff7c2f25001bf1df606bc12e offset:1073162112 log:116 length:2319872 created:20450 (2025-12-28) expires:0 (1970-01-01) trash:false} != exist:{key:1b1bbd32f020502e52e839717e13a486ae0b9f6eff7c2f25001bf1df606bc12e offset:95672576 log:100 length:2319872 created:20450 (2025-12-28) expires:0 (1970-01-01) trash:false}: hashstore: collision detected
	storj.io/storj/storagenode/hashstore.(*HashTbl).insertLocked:384
	storj.io/storj/storagenode/hashstore.(*HashTblConstructor).Append:525
	main.(*cmdRoot).Execute.func2:110
	main.(*cmdRoot).iterateRecords:201
	main.(*cmdRoot).Execute:109
	github.com/zeebo/clingy.(*Environment).dispatchDesc:129
	github.com/zeebo/clingy.Environment.Run:41
	main.main:29
	runtime.main:283

How do I fix this?

It seems like if I just delete log 64 (hex; decimal 100) and log 74 (hex; decimal 116), it will work again.

This is not a solution. It’s curing a headache with a guillotine. The solution would be to find the root cause and fix that, not kill the patient.


This is not a dangerous error. The old piecestore silently ignored double uploads; hashstore doesn’t.

It should be very rare (for example: a piece is deleted, and repair tries to re-upload it with the same ID).

Never delete log files because of this. That would delete healthy pieces and cause disqualification.


It looks like a dangerous error; it effectively stops the node from being online. How do we properly fix it?

P/s: since I ran into this issue first, may I provide my node ID here to prevent my node from being disqualified?

There should be some other error. The collision itself doesn’t prevent the node from being online.

Actually, I just changed the level from ERROR to WARN.


I don’t recall any other errors. The logs show that it was stuck in a restart loop with that error. Once I fixed it using write-hashtbl, it went back to normal 🙂

Actually, it is an unrecoverable error. I got it as well, and the node won’t start.

It should be fixed with these changes:

If updates keep rolling out at the current pace, my nodes will be DQed before these fixes see the light of day.

The team is aware of the problem, as you can see from these changes, so they should be included in the new version.
I also think that we are back on track, so the update should land before any node is disqualified for downtime.


Please note that, in general, we recommend using the version published at https://version.storj.io/, because that is the tested one which has passed all QA tests and has also been tested on the select network for a certain time period.

Currently it (1.142.7) is an older one, because we are still testing the new hashstore versions in select.

We are grateful if somebody is always testing the latest version, but it always carries a higher risk.


After this generic disclaimer, let’s be more specific.

v1.146.6 has just been published with all known hashstore fixes. Most of the problems were introduced by v1.146.5 (the first release with fsck support), so regular users can safely wait for the rollout.

For all the pioneers who are willing to help with testing: please don’t use v1.146.5, only v1.146.6 from the v1.146 line.

I have just deployed it to one select node, so I am also only at the beginning of testing (most of the commits have already been tested in select, just not the fixes).

(@Vadim: it should also include the fix for the different space calculation after restart on Windows)


Thank you, I will try it. As of today I generally use 1.146.5 everywhere, and it is working fine.

Thank you for the update. I have 2 nodes with this error; on one node the service came back up,
but the dashboard is not loading and the log shows this error:
2026-01-30T12:53:14+02:00 ERROR contact:amnesty failed to get satellite URL for amnesty report {"satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "piece_count": 2, "error": "trust: satellite is untrusted", "errorVerbose": "trust: satellite is untrusted\n\tstorj.io/storj/storagenode/trust.init:29\n\truntime.doInit1:7670\n\truntime.doInit:7637\n\truntime.main:256"}

How can one of the main satellites be untrusted?
The second node also has a hashstore table problem, but I can’t rebuild it, because write-hashtbl shows the same collision error and won’t rebuild it.

A post was merged into an existing topic: Recovering hashstore on Windows

My result (the node is not running) is:
hashstore: put:{key:9db4ea9c83d49db0e2fd248a50d91771a6df4febf88f20de2ee3cec1da4ee0dd offset:153484096 log:292 length:2319872 created:20475 (2026-01-22) expires:20496 (2026-02-12) trash:true} != exist:{key:9db4ea9c83d49db0e2fd248a50d91771a6df4febf88f20de2ee3cec1da4ee0dd offset:989348416 log:265 length:2319872 created:20475 (2026-01-22) expires:0 (1970-01-01) trash:false}: hashstore: collision detected
storj.io/storj/storagenode/hashstore.(*HashTbl).insertLocked:411
storj.io/storj/storagenode/hashstore.(*HashTblConstructor).Append:552
main.(*cmdRoot).Execute.func2:111
main.(*cmdRoot).iterateRecords:211
main.(*cmdRoot).Execute:110
github.com/zeebo/clingy.(*Environment).dispatchDesc:129
github.com/zeebo/clingy.Environment.Run:41
main.main:30
runtime.main:290

What is this log from?


I executed:
write-hashtbl.exe C:\link\1-3\30\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0

It ran with the following result:
Record count=2501009
Using logSlots=23
The file exists.
main.(*cmdRoot).Execute:97
github.com/zeebo/clingy.(*Environment).dispatchDesc:129
github.com/zeebo/clingy.Environment.Run:41
main.main:30
runtime.main:290

But the node does not start, with the following error:


2026-02-13T08:05:12+03:00 FATAL Unrecoverable error {"error": "Failed to create storage node peer: hashstore: read c:\link\1-3\30\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-000000000000007a: Data error (cyclic redundancy check).\n\tstorj.io/storj/storagenode/hashstore.(*roBigPageCache).ReadRecord:649\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).Range:320\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).loadEntries:264\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:176\n\tstorj.io/storj/storagenode/hashstore.OpenTable:120\n\tstorj.io/storj/storagenode/hashstore.NewStore:261\n\tstorj.io/storj/storagenode/hashstore.New:110\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:271\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:122\n\tstorj.io/storj/storagenode.New:607\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:93", "errorVerbose": "Failed to create storage node peer: hashstore: read c:\link\1-3\30\hashstore\12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S\s0\meta\hashtbl-000000000000007a: Data error (cyclic redundancy check).\n\tstorj.io/storj/storagenode/hashstore.(*roBigPageCache).ReadRecord:649\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).Range:320\n\tstorj.io/storj/storagenode/hashstore.(*HashTbl).loadEntries:264\n\tstorj.io/storj/storagenode/hashstore.OpenHashTbl:176\n\tstorj.io/storj/storagenode/hashstore.OpenTable:120\n\tstorj.io/storj/storagenode/hashstore.NewStore:261\n\tstorj.io/storj/storagenode/hashstore.New:110\n\tstorj.io/storj/storagenode/piecestore.(*HashStoreBackend).getDB:271\n\tstorj.io/storj/storagenode/piecestore.NewHashStoreBackend:122\n\tstorj.io/storj/storagenode.New:607\n\tmain.cmdRun:84\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:93\n\tmain.cmdRun:86\n\tmain.newRunCmd.func1:33\n\tstorj.io/common/process.InitBeforeExecute.func1.2:389\n\tstorj.io/common/process.InitBeforeExecute.func1:407\n\tgithub.com/spf13/cobra.(*Command).execute:985\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:1117\n\tgithub.com/spf13/cobra.(*Command).Execute:1041\n\tstorj.io/common/process.ExecWithCustomOptions:115\n\tstorj.io/common/process.ExecWithCustomConfigAndLogger:80\n\tstorj.io/common/process.ExecWithCustomConfig:75\n\tstorj.io/common/process.Exec:65\n\tmain.(*service).Execute.func1:107\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:93"}

Did you put the newly created file in place of the old one in the meta folder?

You didn’t remove the previous hashtbl from the current directory before running the tool. It exited without making modifications, so the problem was not solved.