Fatal error: concurrent map read and map write

This does not seem like a good thing! Latest version, v1.111.4, on an
old Mac with an old version of Docker. Badger is enabled but may or may not be related. The system restarted and works fine for now.
I have the stack traces if someone wants to look at them.

That’s the least you could share, since the title alone says just about nothing.

And why not update your Docker and see whether the problem occurs again?

They still recommend an old version of Docker, and since the Mac is old it cannot run the latest version anyway.
The error was in the node's Go code, so Docker is probably not relevant.
I've never seen this in years of operation, so maybe it's badger, or perhaps a really rare race condition.

I see that, but I actually don’t see the log lines themselves, so nobody will be able to help you, and it also won’t contribute to any bugfixing this way. It would be helpful to give the exact error log line and the 50 or so lines preceding it.

The title was the exact log line.

The preceding lines were just normal download and upload lines.
What followed was thousands of stack-trace lines, way too much to upload unless a developer asks for some of it.

I'm mostly just asking whether this has been seen before or if it's just me.

Please do. I am a developer. You can use a pastebin, like https://gist.github.com/, if the forum does not accept the whole log file.


The whole trace is close to 8,100 lines.

First goroutine:

goroutine 3075668 [running]:
storj.io/storj/storagenode/orders.(*FileStore).getWritableUnsent(0xc000a26070, {0xc000056480, 0x14}, {0xa2, 0x8b, 0x4f, 0x4, 0xe1, 0xb, 0xae, …}, …)
/go/src/storj.io/storj/storagenode/orders/store.go:150 +0xa7
storj.io/storj/storagenode/orders.(*FileStore).BeginEnqueue.func1(0xc01d4509a0)
/go/src/storj.io/storj/storagenode/orders/store.go:132 +0x32d
storj.io/storj/storagenode/piecestore.(*Endpoint).beginSaveOrder.func1({0x25b3ae0, 0xc000ba5720}, 0xc04d340780, 0xc0243aa910)
/go/src/storj.io/storj/storagenode/piecestore/endpoint.go:966 +0xfb
storj.io/storj/storagenode/piecestore.(*Endpoint).Upload(0xc000b703c0, {0x25bb9f0, 0xc0046f8cb0})
/go/src/storj.io/storj/storagenode/piecestore/endpoint.go:588 +0x1d66
storj.io/common/pb.DRPCPiecestoreDescription.Method.func1({0x20346e0?, 0xc000b703c0}, {0xc018eaaba0?, 0x1d?}, {0x1e580c0?, 0xc00608efe0}, {0xc00608efe0?, 0xc002bdeb88?})
/go/pkg/mod/storj.io/common@v0.0.0-20240812101423-26b53789c348/pb/piecestore2_drpc.pb.go:294 +0x134
storj.io/drpc/drpcmux.(*Mux).HandleRPC(0xc01577ea88?, {0x25b6cc0, 0xc00608efe0}, {0xc018eaaba0, 0x1d})
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcmux/handle_rpc.go:33 +0x207
storj.io/common/rpc/rpctracing.(*Handler).HandleRPC(0xc000e143a8, {0x25b7040, 0xc00608efa0}, {0xc018eaaba0, 0x1d})
/go/pkg/mod/storj.io/common@v0.0.0-20240812101423-26b53789c348/rpc/rpctracing/handler.go:62 +0x2e3
storj.io/common/experiment.(*Handler).HandleRPC(0xc00b228130, {0x25b7140, 0xc0038a4508}, {0xc018eaaba0, 0x1d})
/go/pkg/mod/storj.io/common@v0.0.0-20240812101423-26b53789c348/experiment/import.go:43 +0x156
storj.io/drpc/drpcserver.(*Server).handleRPC(0xc018d96780?, 0xc0038a4508, {0xc018eaaba0?, 0x3879e40?})
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcserver/server.go:166 +0x42
storj.io/drpc/drpcserver.(*Server).ServeOne(0xc0009d3ce0, {0x25b3fb8, 0xc004822030}, {0x25ad0c0?, 0xc000844f00?})
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcserver/server.go:108 +0x1e5
storj.io/drpc/drpcserver.(*Server).Serve.func2({0x25b3fb8?, 0xc004822030?})
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcserver/server.go:156 +0x57
storj.io/drpc/drpcctx.(*Tracker).track(0xc004822030, 0xc021f1a810?)
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcctx/tracker.go:35 +0x25
created by storj.io/drpc/drpcctx.(*Tracker).Run in goroutine 2659
/go/pkg/mod/storj.io/drpc@v0.0.35-0.20240709171858-0075ac871661/drpcctx/tracker.go:30 +0x79

Ok, I’m no longer interested, sorry.


Hello @scasady,
Welcome to the forum!

Please find the beginning of the error and copy the text starting from the timestamp at the start of the error and ending before the first occurrence of \n; it should not be that big.

But I would guess that it may be related to the badger cache, because it cannot handle concurrent access from several processes.

However, if the first goroutine is the first occurrence, then it is related to unsent orders, and yes, a restart should fix the issue, unless the disk has problems. I would recommend checking and fixing errors on the disk. By the way, what filesystem is on that disk?
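For illustration, here is a minimal sketch of why two processes cannot share one badger cache directory (the path and import version here are assumptions, not the node's actual configuration): badger takes an exclusive lock on its directory, so a second process that tries to open the same directory gets an error rather than sharing the cache.

package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Hypothetical cache directory; the node's real badger path differs.
	opts := badger.DefaultOptions("/tmp/filestatcache")

	// Badger holds an exclusive directory lock, so if another process
	// (e.g. a second node container pointed at the same storage) already
	// has this directory open, Open returns an error here.
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatalf("badger open failed (likely another process holds the lock): %v", err)
	}
	defer db.Close()

	log.Println("badger cache opened by this process only")
}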

That error comes from the Go runtime and indicates some sort of programming error. It will be quite difficult to find or fix unless you can provide the whole stack dump from your logs, though, even if it's 8,100 lines.
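For context, this is the class of bug the runtime is complaining about. A minimal sketch (not the storj code): a plain Go map touched by two goroutines at once is detected by the runtime and aborts the whole process with exactly this fatal error; guarding the map with a sync.Mutex (or using sync.Map) is the usual fix.

package main

import "sync"

// Minimal reproduction sketch of "fatal error: concurrent map read and map write".
// Plain Go maps are not safe for concurrent use; with the mutex removed, the
// runtime detects the unsynchronized access and kills the process.
func main() {
	var mu sync.Mutex
	m := map[int]int{}

	// Writer goroutine. Delete the mu.Lock/mu.Unlock pairs in both loops
	// to trigger the fatal error; with the mutex the program runs safely.
	go func() {
		for i := 0; i < 1_000_000; i++ {
			mu.Lock()
			m[i] = i
			mu.Unlock()
		}
	}()

	// Concurrent reader in the main goroutine.
	for i := 0; i < 1_000_000; i++ {
		mu.Lock()
		_ = m[i]
		mu.Unlock()
	}
}

Running the unguarded variant under go run -race reports the data race deterministically, which is usually how this kind of bug gets pinned down once the offending code path is known.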