High CPU usage by nodes caused by receiving a new bloom filter while still processing the old one

Ubuntu, Docker, 1.101.3

Scenario:

  1. The node receives a bloom filter from the US satellite and starts the gc-filewalker.
2024-04-22T03:18:05Z INFO    retain  Prepared to run a Retain request.       {"Process": "storagenode", "cachePath": "config/retain", "Created Before": "2024-04-10T17:59:59Z", "Filter Size": 1351678, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYLZHeLUtdps3S"}
2024-04-22T03:18:05Z INFO    lazyfilewalker.gc-filewalker    starting subprocess     {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-04-22T03:18:05Z INFO    lazyfilewalker.gc-filewalker    subprocess started      {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-04-22T03:18:05Z INFO    lazyfilewalker.gc-filewalker.subprocess Database started        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-04-22T03:18:05Z INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode", "createdBefore": "2024-04-10T17:59:59Z", "bloomFilterSize": 1351678}
  2. The garbage collection takes a long time (the node uses the lazy filewalker mode) and hasn't finished yet when a new bloom filter is received from the US satellite. After that, the node starts to generate high CPU load. Additionally, bloom filters from other satellites were received and processed during the GC triggered by the US bloom filter from step 1 (I'm not sure whether this affects the problem, but it should be noted).

  3. The GC that started in step 1 finally finished, but for some reason the node doesn't remove the old bloom filter and doesn't start a new gc-filewalker for the bloom filter received in step 2.

There is no new GC for the US satellite after these records in the log:

2024-04-28T13:45:38Z   INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker completed {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "piecesSkippedCount": 0, "Process": "storagenode", "piecesCount": 11148904, "trashPiecesCount": 5425545, "piecesTrashed": 5425545}
2024-04-28T13:58:39Z   INFO    lazyfilewalker.gc-filewalker    subprocess finished successfully        {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-04-28T13:59:18Z   INFO    retain  Moved pieces to trash during retain     {"Process": "storagenode", "cachePath": "config/retain", "Deleted pieces": 5425545, "Failed to delete": 0, "Pieces failed to read": 0, "Pieces count": 11148904, "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Duration": "154h41m12.177192812s", "Retain Status": "enabled"}

As a result, we have 2 bloom filters in the retain folder, no GC running, and high CPU load caused by the storagenode process. I've seen this on many of my nodes and have confirmations from other SNOs.

CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
5bd8f030ec34   storagenode1    37.09%    427.2MiB / 31.25GiB   1.34%     31.2GB / 70.5GB   3.65TB / 17.6GB   72
11a7ff36c845   storagenode2    45.01%    516.2MiB / 31.25GiB   1.61%     267GB / 95.3GB    2.78TB / 175GB    136
68ff2fe420e6   storagenode3    38.60%    686.8MiB / 31.25GiB   2.15%     2.53GB / 57.9GB   3.3TB / 618MB     140
8a387756a84e   storagenode4    90.21%    650.1MiB / 31.25GiB   2.03%     146GB / 75.1GB    3.04TB / 97.7GB   121
dc3125d0d3b1   storagenode5    48.91%    441MiB / 31.25GiB     1.38%     126GB / 81.3GB    2.19TB / 76GB     186
e7f99bf82505   storagenode6    97.39%    471.2MiB / 31.25GiB   1.47%     398GB / 98.5GB    3.17TB / 232GB    193
d77731e0486a   storagenode7    42.65%    614.5MiB / 31.25GiB   1.92%     513GB / 120GB     3.49TB / 315GB    178
dd657883d73b   storagenode8    71.35%    575.2MiB / 31.25GiB   1.80%     113GB / 98.1GB    3.95TB / 66.5GB   132
b3ecab746d68   storagenode9    4.27%     767.6MiB / 31.25GiB   2.40%     313GB / 101GB     2.16TB / 202GB    154
f99a67a846c9   storagenode10   91.78%    443.6MiB / 31.25GiB   1.39%     160GB / 90.1GB    2.54TB / 94.6GB   186

On this server, the CPU load of a single node normally doesn't exceed 4-5%, as with storagenode9 in the output above.

After a manual restart, the node removes the old bloom filter and starts GC for the new one. However, restarting is not an ideal fix because some other lazy filewalkers (used-space) will then start from scratch.
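If you just want to confirm the stuck state without restarting, a quick check is to list the retain cache inside the container (a sketch; the /app/config path and the container name are assumptions based on the default Docker setup):

docker exec storagenode1 ls -la /app/config/retain

Two or more files whose names start with the same prefix (one prefix per satellite, as far as I can tell), while the log shows no gc-filewalker running, matches the state described above.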


Hm, interesting. My nodes didn't see a high CPU load:

CONTAINER ID   NAME           CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O   PIDS
3d20fef76e67   storagenode5   0.05%     88.22MiB / 24.81GiB   0.35%     1.08GB / 11GB     0B / 0B     35
9fb28e5cf48a   storagenode2   3.28%     452.4MiB / 24.81GiB   1.78%     431GB / 102GB     0B / 0B     292

And only one still has a bloom filter in the cache, but only for one satellite.

2024-04-27T09:45:33Z    INFO    lazyfilewalker.gc-filewalker.subprocess gc-filewalker started   {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "bloomFilterSize": 4099953, "Process": "storagenode", "createdBefore": "2024-04-22T17:59:59Z"}
ls X:\storagenode2\retain\


    Directory: X:\storagenode2\retain


Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
-a----         4/27/2024  12:45 PM        4099953 ukfu6bhbboxilvt7jrwlqk7y2tapb5d2r2tsmj2sjxvw5qaaaaaa-1714067999999645000

Did your node receive a new bloom filter from the same satellite while it was still processing an old one? If not, then the state of your node doesn't have any relation to the described problem.
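You can check that from the log: look for a second "Prepared to run a Retain request" for the same satellite ID before the previous "gc-filewalker completed" for it. A rough example for a Docker node (the container name is just a placeholder; adjust to however you collect logs):

docker logs storagenode2 2>&1 | grep -E "Prepared to run a Retain request|gc-filewalker started|gc-filewalker completed"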

I have checked my nodes and can confirm.
8 of 9 nodes with ~100% CPU load.
(screenshots attached: StorjCPU, Storjnmon)
All of them had 2 or 3 files in the retain directory; the node with low CPU load had only 1 file.
After a manual restart, all nodes run with low CPU but still keep the 2-3 files in the retain directory.
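I'll keep watching the logs to see whether the leftover filters actually get processed now; as far as I understand, this is the line to watch for per satellite (Docker example, adjust for your setup):

docker logs storagenode1 2>&1 | grep "Moved pieces to trash"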


One of my nodes suddenly went to 100% CPU. I suspect it was because it got a 2nd bloom filter while the 1st one was still being processed; it has been at 100% ever since.
Last night all my nodes had their connections reset by the ISP, and that one is now just sitting there trying to stop the storagenode.exe process, like this:

2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "I7PEKGXGYFM6U76P5EY2OGGKL3BTPOTDDVHDKK4KSIWDYTLQJNJA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.201.210:44170", "Size": 1703936, "error": "manager closed: read tcp 10.2.0.2:37001->79.127.201.210:44170: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->79.127.201.210:44170: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "IDPHERXRLFV3F6XCUYG33Q4RSZZJX257IK5M2HZRRSYRB4MS5PZA", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.101:59564", "Size": 131072, "error": "manager closed: read tcp 10.2.0.2:37001->79.127.226.101:59564: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->79.127.226.101:59564: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "2OTAH2G7IEVQBEL3YMGL77AI26MZ5JWNBC3BSHT4XAIXDZXURYGQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "109.61.92.69:58916", "Size": 65536, "error": "manager closed: read tcp 10.2.0.2:37001->109.61.92.69:58916: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->109.61.92.69:58916: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "FQ5W77J7HXF4TFETTFLF5RDQKH6LSX4VHBUSEKY5DQRSOTZMIWWQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "5.161.77.10:43520", "Size": 1179648, "error": "manager closed: read tcp 10.2.0.2:37001->5.161.77.10:43520: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->5.161.77.10:43520: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:55:02+02:00	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Piece ID": "4N7NVDLTVYIM3FR56FJCWZPSLWNHXXLRTT6ZJUPK3VODFU3A2XBQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "5.161.77.10:24210"}
2024-05-04T03:55:02+02:00	INFO	piecestore	upload canceled (race lost or node shutdown)	{"Piece ID": "CG3NBHYPY4O3NGM2I7NAGDM6RMJFIGPWMGV7ACH7G4KBXPBF3DDA", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "5.161.220.231:21866"}
2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "X42SD65LURUMG3ZF5DDKN7CLKENHYSCIDRBIT3IEXWOWIOKHOZMQ", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Action": "PUT", "Remote Address": "79.127.226.101:59964", "Size": 262144, "error": "manager closed: read tcp 10.2.0.2:37001->79.127.226.101:59964: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->79.127.226.101:59964: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:55:02+02:00	ERROR	piecestore	upload failed	{"Piece ID": "74AGVM3EIYYWSNRGSRU3BM5GBWUUTHUBFP4Z7FU7V3HJZE6IXQUQ", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Action": "PUT", "Remote Address": "79.127.205.231:49944", "Size": 65536, "error": "manager closed: read tcp 10.2.0.2:37001->79.127.205.231:49944: wsarecv: A socket operation was attempted to an unreachable network.", "errorVerbose": "manager closed: read tcp 10.2.0.2:37001->79.127.205.231:49944: wsarecv: A socket operation was attempted to an unreachable network.\n\tgithub.com/jtolio/noiseconn.(*Conn).readMsg:211\n\tgithub.com/jtolio/noiseconn.(*Conn).Read:171\n\tstorj.io/drpc/drpcwire.(*Reader).read:68\n\tstorj.io/drpc/drpcwire.(*Reader).ReadPacketUsing:113\n\tstorj.io/drpc/drpcmanager.(*Manager).manageReader:229"}
2024-05-04T03:57:54+02:00	INFO	Interrogate request received.
2024-05-04T03:57:54+02:00	INFO	Stop/Shutdown request received.
2024-05-04T03:57:55+02:00	ERROR	filewalker	failed to reset progress in database	{"error": "gc_filewalker_progress_db: context canceled", "errorVerbose": "gc_filewalker_progress_db: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*gcFilewalkerProgressDB).Reset:58\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash.func1:171\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:244\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:585\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:369\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:258\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-05-04T03:57:55+02:00	ERROR	retain	retain pieces failed	{"cachePath": "C:\\Program Files\\Storj\\Storage Node/retain", "error": "retain: filewalker: context canceled", "errorVerbose": "retain: filewalker: context canceled\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces:74\n\tstorj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash:178\n\tstorj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash:585\n\tstorj.io/storj/storagenode/retain.(*Service).retainPieces:369\n\tstorj.io/storj/storagenode/retain.(*Service).Run.func2:258\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-05-04T03:58:10+02:00	WARN	services	service takes long to shutdown	{"name": "retain"}
2024-05-04T03:58:10+02:00	WARN	servers	service takes long to shutdown	{"name": "server"}
2024-05-04T03:58:10+02:00	INFO	servers	slow shutdown	{"stack": "goroutine 1022 [running]:\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace.func1()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:107 +0x71\nsync.(*Once).doSlow(0xc00069b110?, 0xc0042f1800?)\n\t/usr/local/go/src/sync/once.go:74 +0xbf\nsync.(*Once).Do(...)\n\t/usr/local/go/src/sync/once.go:65\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace(0xc000507eb8?)\n\t/go/src/storj.io/storj/private/lifecycle/group.go:104 +0x3c\nstorj.io/storj/private/lifecycle.(*Group).Run.func1({0x2134ab0?, 0xc0001d6060?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:77 +0x2a5\nruntime/pprof.Do({0x2134d18?, 0xc00034c320?}, {{0xc00034a960?, 0xc000507fd0?, 0x166c6c5?}}, 0xc0006dd9c0)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\ncreated by storj.io/storj/private/lifecycle.(*Group).Run in goroutine 7\n\t/go/src/storj.io/storj/private/lifecycle/group.go:64 +0x509\n\ngoroutine 1 [syscall, locked to thread]:\nsyscall.SyscallN(0x7fff72f95f10?, {0xc00009fc28?, 0x3?, 0x0?})\n\t/usr/local/go/src/runtime/syscall_windows.go:544 +0x107\nsyscall.Syscall(0x1b4b2a0?, 0x60?, 0xc?, 0xc00009fc88?, 0x14a8899?)\n\t/usr/local/go/src/runtime/syscall_windows.go:482 +0x35\ngolang.org/x/sys/windows.StartServiceCtrlDispatcher(0x1dc1598?)\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/zsyscall_windows.go:1341 +0x4f\ngolang.org/x/sys/windows/svc.Run({0x1dc1598?, 0xb?}, {0x21278e0?, 0xc0000087c8?})\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:295 +0x131\nmain.startAsService(0xc000004300)\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:62 +0x32d\nmain.main()\n\t/go/src/storj.io/storj/cmd/storagenode/main.go:26 +0x235\n\ngoroutine 17 [select, locked to thread]:\ngolang.org/x/sys/windows/svc.serviceMain(0x1, 0x2e11faea1b8)\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:247 +0x3a6\n\ngoroutine 6 [semacquire]:\nsync.runtime_Semacquire(0xc0002039d8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0xf89b37?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc000184e80)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nmain.(*service).Execute.func2()\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:110 +0x17\nmain.(*service).Execute(0xc0000087c8, {0x0?, 0x0?, 0x0?}, 0x0?, 0x0?)\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:142 +0x553\ngolang.org/x/sys/windows/svc.serviceMain.func2()\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:235 +0x43\ncreated by golang.org/x/sys/windows/svc.serviceMain in goroutine 17\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:234 +0x25c\n\ngoroutine 7 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0x1?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0xc0002acbd0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc0006dd8c0)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/storj/storagenode.(*Peer).Run(0xc0000d8580, {0x2134ae8, 0xc0006fb680})\n\t/go/src/storj.io/storj/storagenode/peer.go:957 +0x42b\nmain.cmdRun(0x2e164e920a8?, 0xc0001cc000)\n\t/go/src/storj.io/storj/cmd/storagenode/cmd_run.go:123 +0xd65\nmain.newRunCmd.func1(0x1c35f40?, {0xc00017c210?, 0xc000005200?, 0xfc99a0?})\n\t/go/src/storj.io/storj/cmd/storagenode/cmd_run.go:33 +0x17\nstorj.io/common/process.cleanup.func1.4({0x2134d18?, 
0xc000227ae0})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:393 +0x149\nstorj.io/common/process.cleanup.func1(0xc000005200, {0xc0001ce860, 0x0, 0x2})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:411 +0x1c88\ngithub.com/spf13/cobra.(*Command).execute(0xc000005200, {0xc0001ce840, 0x2, 0x2})\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:983 +0xabc\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc000004300)\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1115 +0x3ff\ngithub.com/spf13/cobra.(*Command).Execute(...)\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1039\nstorj.io/common/process.ExecWithCustomOptions(0xc000004300, {0x1, 0x1, 0x1, 0x0, 0x2022258, 0x0})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:112 +0x1c9\nstorj.io/common/process.ExecWithCustomConfigAndLogger(...)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:77\nstorj.io/common/process.ExecWithCustomConfig(...)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:72\nstorj.io/common/process.Exec(0x0?)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:62 +0x5a\nmain.(*service).Execute.func1()\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:107 +0x57\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 6\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 9 [select]:\nstorj.io/monkit-jaeger.(*ThriftCollector).Run(0xc0000f90e0, {0x2134ae8, 0xc000224410})\n\t/go/pkg/mod/storj.io/monkit-jaeger@v0.0.0-20240221095020-52b0792fa6cd/thrift.go:174 +0x2cf\nstorj.io/common/process.cleanup.func1.2()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:351 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 10 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0000095a8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1ce5fc0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc0001845c0)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/common/debug.(*Server).Run(0xc00021a000, {0x2134a78, 0x293f080})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:203 +0x33c\nstorj.io/common/process.initDebug.func1()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/debug.go:40 +0xf4\ncreated by storj.io/common/process.initDebug in goroutine 7\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/debug.go:37 +0xf0\n\ngoroutine 11 [syscall, 12593 minutes]:\nos/signal.signal_recv()\n\t/usr/local/go/src/runtime/sigqueue.go:152 +0x29\nos/signal.loop()\n\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x13\ncreated by os/signal.Notify.func1.1 in goroutine 7\n\t/usr/local/go/src/os/signal/signal.go:151 +0x1f\n\ngoroutine 12 [chan receive, 12593 minutes]:\nstorj.io/common/process.Ctx.func1()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:139 +0x39\ncreated by 
storj.io/common/process.Ctx in goroutine 7\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:138 +0x2c7\n\ngoroutine 186 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000189450, {0x2134ae8, 0xc0002d0e60})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 15 [chan receive, 12593 minutes]:\nstorj.io/common/debug.(*Server).Run.func3()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:182 +0x35\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 16 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc0001823f0, {0x2134ae8?, 0xc000224550?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/common/debug.(*Server).Run.func4()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:186 +0x31\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 19 [select, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*listener).Accept(0xc000184580)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/listener.go:37 +0x7e\nnet/http.(*Server).Serve(0xc00021a040, {0x212f150, 0xc000184580})\n\t/usr/local/go/src/net/http/server.go:3056 +0x364\nstorj.io/common/debug.(*Server).Run.func5()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:195 +0x53\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 20 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorContext(0xc0001823f0, {0x2134ae8?, 0xc000224640?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:106 +0x2d\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 16\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:87 +0xd6\n\ngoroutine 21 [IO wait, 12593 minutes]:\ninternal/poll.runtime_pollWait(0x2e16510bc30, 0x72)\n\t/usr/local/go/src/runtime/netpoll.go:343 +0x85\ninternal/poll.(*pollDesc).wait(0x0?, 0xc0001e0780?, 0x0)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27\ninternal/poll.execIO(0xc00017b198, 0xc000371d28)\n\t/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6\ninternal/poll.(*FD).acceptOne(0xc00017b180, 0x208, {0xc00022c000?, 0xc0001c9f40?, 0x1cf2860?}, 0xc0001c9540?)\n\t/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67\ninternal/poll.(*FD).Accept(0xc00017b180, 0xc000371f00)\n\t/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc\nnet.(*netFD).accept(0xc00017b180)\n\t/usr/local/go/src/net/fd_windows.go:166 +0x54\nnet.(*TCPListener).accept(0xc00034bec0)\n\t/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e\nnet.(*TCPListener).Accept(0xc00034bec0)\n\t/usr/local/go/src/net/tcpsock.go:315 +0x30\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorBase(0xc0001823f0)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:115 +0x3a\ncreated by 
storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 16\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:88 +0x11b\n\ngoroutine 1070 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc00014de30, {0x2134ae8?, 0xc0006fbf40?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/storj/private/server.(*Server).Run.func2()\n\t/go/src/storj.io/storj/private/server/server.go:399 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 188 [select, 5091 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc0001895f0, {0x2134ae8, 0xc0002d0f00})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 187 [select, 8 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc000189450, 0x1dc06a9?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 190 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000189860, {0x2134ae8, 0xc0002d0fa0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 189 [select, 7 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc0001895f0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 192 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00023e5b0, {0x2134ae8, 0xc0002d1040})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 194 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000048d00, {0x2134ae8, 0xc0002d10e0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 196 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000049110, {0x2134ae8, 0xc0002d1180})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 198 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e000, {0x2134ae8, 0xc0002d1220})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 200 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e1a0, {0x2134ae8, 0xc0002d12c0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 201 [select, 19 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc00037e1a0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 377 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037eb60, {0x2134ae8, 
0xc000578a50})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 1072 [semacquire]:\nsync.runtime_Semacquire(0x2127a80?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x15f38b3?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\nstorj.io/drpc/drpcctx.(*Tracker).Wait(0x2138f01?)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcctx/tracker.go:43 +0x19\nstorj.io/drpc/drpcserver.(*Server).Serve(0xc00015cfa0, {0x2134ae8, 0xc0006fbf90}, {0x212f180?, 0xc0002acf90})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcserver/server.go:133 +0x41f\nstorj.io/storj/private/server.(*Server).Run.(*Server).Run.func4.func9()\n\t/go/src/storj.io/storj/private/server/server.go:422 +0x4f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1038 [runnable]:\ngo.uber.org/zap/buffer.(*Buffer).AppendString(0xc0027a3620, {0xc00074da93?, 0x35?})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/buffer/buffer.go:51 +0xdf\ngo.uber.org/zap/zapcore.safeAppendStringLike[...](0x2023370?, 0x20233f8, 0xc0027a3620, {0xc00074c000, 0x7a84})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/json_encoder.go:555 +0xb1\ngo.uber.org/zap/zapcore.(*jsonEncoder).safeAddString(...)\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/json_encoder.go:489\ngo.uber.org/zap/zapcore.(*jsonEncoder).AppendString(0xc0006dd4c0, {0xc00074c000, 0x7a84})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/json_encoder.go:295 +0x219\ngo.uber.org/zap/zapcore.(*jsonEncoder).AddString(0x20?, {0x1dba863?, 0xc00027d808?}, {0xc00074c000, 0x7a84})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/json_encoder.go:197 +0x3b\ngo.uber.org/zap/zapcore.Field.AddTo({{0x1dba863, 0x5}, 0xf, 0x0, {0xc00074c000, 0x7a84}, {0x0, 0x0}}, {0x2140218, 0xc0006dd4c0})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/field.go:149 +0x466\ngo.uber.org/zap/zapcore.addFields(...)\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/field.go:210\ngo.uber.org/zap/zapcore.consoleEncoder.writeContext({0x2127b60?}, 0xc00034acc0, {0xc0042f17c0, 0x1, 0x1?})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/console_encoder.go:141 +0x166\ngo.uber.org/zap/zapcore.consoleEncoder.EncodeEntry({0xfaca1b?}, {0x0, {0xc185836c871d90b4, 0x2af35b65ad5dd, 0x28e87e0}, {0x1dbe891, 0x8}, {0x1dc3e94, 0xd}, {0x0, ...}, ...}, ...)\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/console_encoder.go:119 +0x731\ngo.uber.org/zap/zapcore.(*ioCore).Write(0xc00023b650, {0x0, {0xc185836c871d90b4, 0x2af35b65ad5dd, 0x28e87e0}, {0x1dbe891, 0x8}, {0x1dc3e94, 0xd}, {0x0, ...}, ...}, ...)\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/core.go:95 +0x7b\ngo.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00e12b380, {0xc0042f17c0, 0x1, 0x1})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/zapcore/entry.go:253 +0x1dc\ngo.uber.org/zap.(*Logger).Info(0xc0011f8000?, {0x1dc3e94?, 0x7a84?}, {0xc0042f17c0, 0x1, 0x1})\n\t/go/pkg/mod/go.uber.org/zap@v1.27.0/logger.go:247 +0x4e\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace.func1()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:114 +0x159\nsync.(*Once).doSlow(0xc00e12b380?, 0xc0042f1780?)\n\t/usr/local/go/src/sync/once.go:74 
+0xbf\nsync.(*Once).Do(...)\n\t/usr/local/go/src/sync/once.go:65\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace(0xc00027deb8?)\n\t/go/src/storj.io/storj/private/lifecycle/group.go:104 +0x3c\nstorj.io/storj/private/lifecycle.(*Group).Run.func1({0x2134ab0?, 0xc00037d950?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:77 +0x2a5\nruntime/pprof.Do({0x2134d18?, 0xc00034c3c0?}, {{0xc00034aae0?, 0xc00027dfd0?, 0x1762745?}}, 0xc00016c540)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\ncreated by storj.io/storj/private/lifecycle.(*Group).Run in goroutine 7\n\t/go/src/storj.io/storj/private/lifecycle/group.go:64 +0x509\n\ngoroutine 1039 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0002ade30?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1cc4ca0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc00023c170)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/storj/storagenode/retain.(*Service).Run(0xc00023c0f0, {0x2134ab0, 0xc00037d9b0})\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:274 +0x52f\nstorj.io/storj/private/lifecycle.(*Group).Run.func2.1({0x2134ab0?, 0xc00037d9b0?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:87 +0x2a\nruntime/pprof.Do({0x2134d18?, 0xc00034c3c0?}, {{0xc00034bf00?, 0x58?, 0x1d6ea00?}}, 0xc00027fe70)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\nstorj.io/storj/private/lifecycle.(*Group).Run.func2()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:86 +0x285\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 67 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000188000, {0x2134ae8, 0xc0000780a0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 323 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e820, {0x2134ae8, 0xc000549950})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 991 [sleep, 3 minutes]:\ntime.Sleep(0x8bb2c97000)\n\t/usr/local/go/src/runtime/time.go:195 +0x126\ngithub.com/spacemonkeygo/monkit/v3.(*ticker).run(0x28e7d60)\n\t/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/meter.go:203 +0x26\ncreated by github.com/spacemonkeygo/monkit/v3.(*ticker).register in goroutine 990\n\t/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/meter.go:195 +0x89\n\ngoroutine 1023 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0002ad0f8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1ce3f80?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc00016ce80)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/storj/private/server.(*Server).Run(0xc000088c80, {0x2134ab0, 0xc0001d60c0})\n\t/go/src/storj.io/storj/private/server/server.go:483 +0x130a\nstorj.io/storj/storagenode.New.func1({0x2134ab0, 0xc0001d60c0})\n\t/go/src/storj.io/storj/storagenode/peer.go:415 +0x1a5\nstorj.io/storj/private/lifecycle.(*Group).Run.func2.1({0x2134ab0?, 0xc0001d60c0?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:87 
+0x2a\nruntime/pprof.Do({0x2134d18?, 0xc00034c320?}, {{0xc00034ae20?, 0xc0006d35a0?, 0x1670fe0?}}, 0xc00052be70)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\nstorj.io/storj/private/lifecycle.(*Group).Run.func2()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:86 +0x285\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 227 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e340, {0x2134ae8, 0xc0002d1b30})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 266 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e680, {0x2134ae8, 0xc000548780})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 370 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e9c0, {0x2134ae8, 0xc000578820})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 1068 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc00014ddc0, {0x2134ae8?, 0xc0006fbf40?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/storj/private/server.(*Server).Run.func1()\n\t/go/src/storj.io/storj/private/server/server.go:389 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 249 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e4e0, {0x2134ae8, 0xc000548230})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 185 [select, 3 minutes]:\ndatabase/sql.(......)
(One more stack trace is missing; it's in the next post because there are too many characters per post.)

That means it's 9 hours later, the node is still "stopping" in Windows services, there are no new log entries, and the storagenode process is still at 100% CPU.

The only way to close it now is to restart Windows. Should I restart, or do you want me to run some experiments?
Unfortunately, the localhost:14002 dashboard shows nothing since the node is offline, and this node doesn't have debug:5998 set up, so I can't get in there to see the processes at the moment.
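For next time I'll probably pin the debug endpoint to a fixed port in config.yaml so I can grab a goroutine dump while the node is stuck (my assumption is that the option is called debug.addr and that the debug server exposes the standard Go pprof handlers):

debug.addr: 127.0.0.1:5998

and then something like

curl http://127.0.0.1:5998/debug/pprof/goroutine?debug=2

should show what the retain service is blocked on, without waiting for the slow-shutdown stack traces.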

Edit: okay, never mind. I ended the task and restarted the node to get it back online.


Continuation of the log (still too long at 43,843 characters, so I'll cut it to fit):

(.....)(*DB).connectionCleaner(0xc000188270, 0xc0002c9600?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 184 [select, 52 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000188270, {0x2134ae8, 0xc0002d0dc0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 1105 [sync.Mutex.Lock]:\nsync.runtime_SemacquireMutex(0xc00053dc80?, 0x20?, 0xc00053dc80?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0xc00027ff38?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nstorj.io/storj/storagenode/retain.(*Service).Run.func1()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:217 +0x102\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1106 [sync.Mutex.Lock]:\nsync.runtime_SemacquireMutex(0x0?, 0x0?, 0x0?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0x85ae0be1044f8ba2?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nstorj.io/storj/storagenode/retain.(*Service).Run.func2()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:265 +0x38c\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1107 [sync.Mutex.Lock, 1140 minutes]:\nsync.runtime_SemacquireMutex(0xc00004e000?, 0x40?, 0xfee567?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0xc00023c130?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nsync.(*Cond).Wait(0x0?)\n\t/usr/local/go/src/sync/cond.go:71 +0x97\nstorj.io/storj/storagenode/retain.(*Service).Run.func2()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:246 +0x3e9\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1108 [sync.Mutex.Lock, 1140 minutes]:\nsync.runtime_SemacquireMutex(0xc00004e000?, 0x40?, 0xfee567?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0xc00023c130?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nsync.(*Cond).Wait(0x0?)\n\t/usr/local/go/src/sync/cond.go:71 +0x97\nstorj.io/storj/storagenode/retain.(*Service).Run.func2()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:246 +0x3e9\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1109 [sync.Mutex.Lock, 1140 minutes]:\nsync.runtime_SemacquireMutex(0xc00004e000?, 0x40?, 
0xfee567?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0xc00023c130?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nsync.(*Cond).Wait(0x0?)\n\t/usr/local/go/src/sync/cond.go:71 +0x97\nstorj.io/storj/storagenode/retain.(*Service).Run.func2()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:246 +0x3e9\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1110 [runnable]:\nstorj.io/storj/storagenode/retain.(*Service).next(0xc00023c0f0)\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:295 +0x111\nstorj.io/storj/storagenode/retain.(*Service).Run.func2()\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:242 +0x148\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1039\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1139 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorContext(0xc00014ddc0, {0x2134ae8?, 0xc000225900?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:106 +0x2d\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 1068\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:87 +0xd6\n\ngoroutine 1140 [IO wait, 3 minutes]:\ninternal/poll.runtime_pollWait(0x2e16510b948, 0x72)\n\t/usr/local/go/src/runtime/netpoll.go:343 +0x85\ninternal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27\ninternal/poll.execIO(0xc000089b98, 0xc00063bcb8)\n\t/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6\ninternal/poll.(*FD).acceptOne(0xc000089b80, 0x26a8, {0xc00644c870?, 0xc014077400?, 0x20228d0?}, 0x0?)\n\t/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67\ninternal/poll.(*FD).Accept(0xc000089b80, 0xc00063be90)\n\t/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc\nnet.(*netFD).accept(0xc000089b80)\n\t/usr/local/go/src/net/fd_windows.go:166 +0x54\nnet.(*TCPListener).accept(0xc00034a2e0)\n\t/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e\nnet.(*TCPListener).AcceptTCP(0xc00034a2e0)\n\t/usr/local/go/src/net/tcpsock.go:302 +0x30\nstorj.io/storj/private/server.(*tcpUserTimeoutListener).Accept(0x0?)\n\t/go/src/storj.io/storj/private/server/listener.go:46 +0x1a\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorBase(0xc00014ddc0)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:115 +0x3a\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 1068\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:88 +0x11b\n\ngoroutine 1141 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorContext(0xc00014de30, {0x2134ae8?, 0xc000225950?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:106 +0x2d\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 1070\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:87 +0xd6\n\ngoroutine 1142 [IO wait, 12593 minutes]:\ninternal/poll.runtime_pollWait(0x2e16510b758, 0x72)\n\t/usr/local/go/src/runtime/netpoll.go:343 +0x85\ninternal/poll.(*pollDesc).wait(0x732f646d632f6a72?, 0x6f6e656761726f74?, 0x0)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27\ninternal/poll.execIO(0xc00014a298, 
0xc000637cb8)\n\t/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6\ninternal/poll.(*FD).acceptOne(0xc00014a280, 0x5e4, {0xc00022ce10?, 0x31636e75662e7075?, 0x3030303063783028?}, 0x7b202c3030323530?)\n\t/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67\ninternal/poll.(*FD).Accept(0xc00014a280, 0xc000637e90)\n\t/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc\nnet.(*netFD).accept(0xc00014a280)\n\t/usr/local/go/src/net/fd_windows.go:166 +0x54\nnet.(*TCPListener).accept(0xc00034a320)\n\t/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e\nnet.(*TCPListener).AcceptTCP(0xc00034a320)\n\t/usr/local/go/src/net/tcpsock.go:302 +0x30\nstorj.io/storj/private/server.(*tcpUserTimeoutListener).Accept(0x692e6a726f74732f?)\n\t/go/src/storj.io/storj/private/server/listener.go:46 +0x1a\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorBase(0xc00014de30)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:115 +0x3a\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 1070\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:88 +0x11b\n\ngoroutine 57313498 [select, 22 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc000189860, 0x1b3f840?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 1053\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 50400526 [select, 23 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc00023e5b0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 1035\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 54236458 [sync.Mutex.Lock, 1049 minutes]:\nsync.runtime_SemacquireMutex(0x0?, 0x0?, 0xc001665698?)\n\t/usr/local/go/src/runtime/sema.go:77 +0x25\nsync.(*Mutex).lockSlow(0xc00017fb10)\n\t/usr/local/go/src/sync/mutex.go:171 +0x15d\nsync.(*Mutex).Lock(0x1359677?)\n\t/usr/local/go/src/sync/mutex.go:90 +0x32\nstorj.io/storj/storagenode/retain.(*Service).Queue(0xc00023c0f0, {{0x84, 0xa7, 0x4c, 0x2c, 0xd4, 0x3c, 0x5b, 0xa7, 0x65, ...}, ...})\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:157 +0x4a\nstorj.io/storj/storagenode/piecestore.(*Endpoint).processRetainReq(0xc0000ee000, {0x84, 0xa7, 0x4c, 0x2c, 0xd4, 0x3c, 0x5b, 0xa7, 0x65, ...}, ...)\n\t/go/src/storj.io/storj/storagenode/piecestore/endpoint.go:978 +0x1ff\nstorj.io/storj/storagenode/piecestore.(*Endpoint).Retain(0xc0000ee000, {0x2134d18, 0xc003babb80}, 0x212d510?)\n\t/go/src/storj.io/storj/storagenode/piecestore/endpoint.go:964 +0x3eb\nstorj.io/common/pb.DRPCPiecestoreDescription.Method.func5({0x1d69900?, 0xc0000ee000}, {0x2134d18, 0xc0040bbea0}, {0x1d49fa0?, 0xc00014d960}, {0x2123268?, 0x0?})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/pb/piecestore2_drpc.pb.go:328 +0x72\nstorj.io/drpc/drpcmux.(*Mux).HandleRPC(0xc00167b880?, {0x21357a0, 0xc005a1d1a0}, {0xc015939980, 0x1d})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmux/handle_rpc.go:33 +0x20d\nstorj.io/common/rpc/rpctracing.(*Handler).HandleRPC(0xc000008f48, {0x2135b20, 0xc005a1d160}, {0xc015939980, 0x1d})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/rpc/rpctracing/handler.go:61 +0x2e3\nstorj.io/common/experiment.(*Handler).HandleRPC(0xc0001dd610, {0x2135c60, 0xc017ce2fc0}, {0xc015939980, 0x1d})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/experiment/import.go:42 +0x167\nstorj.io/drpc/drpcserver.(*Server).handleRPC(0xc00a6ca1e0?, 0x2134ab0?, {0xc015939980?, 
0x293f080?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcserver/server.go:167 +0x42\nstorj.io/drpc/drpcserver.(*Server).ServeOne(0xc00015cfa0, {0x2135178, 0xc00054bd70}, {0x2e1659784e0?, 0xc00469ee00?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcserver/server.go:109 +0x1e5\nstorj.io/drpc/drpcserver.(*Server).Serve.func2({0x2135178, 0xc00054bd70})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcserver/server.go:157 +0x59\nstorj.io/drpc/drpcctx.(*Tracker).track(0xc00054bd70, 0xc003cfa8b0?)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcctx/tracker.go:35 +0x2e\ncreated by storj.io/drpc/drpcctx.(*Tracker).Run in goroutine 1072\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcctx/tracker.go:30 +0x79\n\ngoroutine 57442242 [select, 12 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc00037e9c0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 1106\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n"}
2024-05-04T03:58:10+02:00	INFO	services	slow shutdown	{"stack": "goroutine 1038 [running]:\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace.func1()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:107 +0x71\nsync.(*Once).doSlow(0xc00e12b380?, 0xc0042f1780?)\n\t/usr/local/go/src/sync/once.go:74 +0xbf\nsync.(*Once).Do(...)\n\t/usr/local/go/src/sync/once.go:65\nstorj.io/storj/private/lifecycle.(*Group).logStackTrace(0xc00027deb8?)\n\t/go/src/storj.io/storj/private/lifecycle/group.go:104 +0x3c\nstorj.io/storj/private/lifecycle.(*Group).Run.func1({0x2134ab0?, 0xc00037d950?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:77 +0x2a5\nruntime/pprof.Do({0x2134d18?, 0xc00034c3c0?}, {{0xc00034aae0?, 0xc00027dfd0?, 0x1762745?}}, 0xc00016c540)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\ncreated by storj.io/storj/private/lifecycle.(*Group).Run in goroutine 7\n\t/go/src/storj.io/storj/private/lifecycle/group.go:64 +0x509\n\ngoroutine 1 [syscall, locked to thread]:\nsyscall.SyscallN(0x7fff72f95f10?, {0xc00009fc28?, 0x3?, 0x0?})\n\t/usr/local/go/src/runtime/syscall_windows.go:544 +0x107\nsyscall.Syscall(0x1b4b2a0?, 0x60?, 0xc?, 0xc00009fc88?, 0x14a8899?)\n\t/usr/local/go/src/runtime/syscall_windows.go:482 +0x35\ngolang.org/x/sys/windows.StartServiceCtrlDispatcher(0x1dc1598?)\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/zsyscall_windows.go:1341 +0x4f\ngolang.org/x/sys/windows/svc.Run({0x1dc1598?, 0xb?}, {0x21278e0?, 0xc0000087c8?})\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:295 +0x131\nmain.startAsService(0xc000004300)\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:62 +0x32d\nmain.main()\n\t/go/src/storj.io/storj/cmd/storagenode/main.go:26 +0x235\n\ngoroutine 17 [select, locked to thread]:\ngolang.org/x/sys/windows/svc.serviceMain(0x1, 0x2e11faea1b8)\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:247 +0x3a6\n\ngoroutine 6 [semacquire]:\nsync.runtime_Semacquire(0xc0002039d8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0xf89b37?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc000184e80)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nmain.(*service).Execute.func2()\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:110 +0x17\nmain.(*service).Execute(0xc0000087c8, {0x0?, 0x0?, 0x0?}, 0x0?, 0x0?)\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:142 +0x553\ngolang.org/x/sys/windows/svc.serviceMain.func2()\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:235 +0x43\ncreated by golang.org/x/sys/windows/svc.serviceMain in goroutine 17\n\t/go/pkg/mod/golang.org/x/sys@v0.17.0/windows/svc/service.go:234 +0x25c\n\ngoroutine 7 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0x1?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0xc0002acbd0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc0006dd8c0)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/storj/storagenode.(*Peer).Run(0xc0000d8580, {0x2134ae8, 0xc0006fb680})\n\t/go/src/storj.io/storj/storagenode/peer.go:957 +0x42b\nmain.cmdRun(0x2e164e920a8?, 0xc0001cc000)\n\t/go/src/storj.io/storj/cmd/storagenode/cmd_run.go:123 +0xd65\nmain.newRunCmd.func1(0x1c35f40?, {0xc00017c210?, 0xc000005200?, 0xfc99a0?})\n\t/go/src/storj.io/storj/cmd/storagenode/cmd_run.go:33 +0x17\nstorj.io/common/process.cleanup.func1.4({0x2134d18?, 
0xc000227ae0})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:393 +0x149\nstorj.io/common/process.cleanup.func1(0xc000005200, {0xc0001ce860, 0x0, 0x2})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:411 +0x1c88\ngithub.com/spf13/cobra.(*Command).execute(0xc000005200, {0xc0001ce840, 0x2, 0x2})\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:983 +0xabc\ngithub.com/spf13/cobra.(*Command).ExecuteC(0xc000004300)\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1115 +0x3ff\ngithub.com/spf13/cobra.(*Command).Execute(...)\n\t/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1039\nstorj.io/common/process.ExecWithCustomOptions(0xc000004300, {0x1, 0x1, 0x1, 0x0, 0x2022258, 0x0})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:112 +0x1c9\nstorj.io/common/process.ExecWithCustomConfigAndLogger(...)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:77\nstorj.io/common/process.ExecWithCustomConfig(...)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:72\nstorj.io/common/process.Exec(0x0?)\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:62 +0x5a\nmain.(*service).Execute.func1()\n\t/go/src/storj.io/storj/cmd/storagenode/service_windows.go:107 +0x57\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 6\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 9 [select]:\nstorj.io/monkit-jaeger.(*ThriftCollector).Run(0xc0000f90e0, {0x2134ae8, 0xc000224410})\n\t/go/pkg/mod/storj.io/monkit-jaeger@v0.0.0-20240221095020-52b0792fa6cd/thrift.go:174 +0x2cf\nstorj.io/common/process.cleanup.func1.2()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:351 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 10 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0000095a8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1ce5fc0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc0001845c0)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/common/debug.(*Server).Run(0xc00021a000, {0x2134a78, 0x293f080})\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:203 +0x33c\nstorj.io/common/process.initDebug.func1()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/debug.go:40 +0xf4\ncreated by storj.io/common/process.initDebug in goroutine 7\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/debug.go:37 +0xf0\n\ngoroutine 11 [syscall, 12593 minutes]:\nos/signal.signal_recv()\n\t/usr/local/go/src/runtime/sigqueue.go:152 +0x29\nos/signal.loop()\n\t/usr/local/go/src/os/signal/signal_unix.go:23 +0x13\ncreated by os/signal.Notify.func1.1 in goroutine 7\n\t/usr/local/go/src/os/signal/signal.go:151 +0x1f\n\ngoroutine 12 [chan receive, 12593 minutes]:\nstorj.io/common/process.Ctx.func1()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:139 +0x39\ncreated by 
storj.io/common/process.Ctx in goroutine 7\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/process/exec_conf.go:138 +0x2c7\n\ngoroutine 186 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000189450, {0x2134ae8, 0xc0002d0e60})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 15 [chan receive, 12593 minutes]:\nstorj.io/common/debug.(*Server).Run.func3()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:182 +0x35\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 16 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc0001823f0, {0x2134ae8?, 0xc000224550?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/common/debug.(*Server).Run.func4()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:186 +0x31\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 19 [select, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*listener).Accept(0xc000184580)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/listener.go:37 +0x7e\nnet/http.(*Server).Serve(0xc00021a040, {0x212f150, 0xc000184580})\n\t/usr/local/go/src/net/http/server.go:3056 +0x364\nstorj.io/common/debug.(*Server).Run.func5()\n\t/go/pkg/mod/storj.io/common@v0.0.0-20240325183111-3a1a80390ccd/debug/server.go:195 +0x53\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 10\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 20 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorContext(0xc0001823f0, {0x2134ae8?, 0xc000224640?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:106 +0x2d\ncreated by storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 16\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:87 +0xd6\n\ngoroutine 21 [IO wait, 12593 minutes]:\ninternal/poll.runtime_pollWait(0x2e16510bc30, 0x72)\n\t/usr/local/go/src/runtime/netpoll.go:343 +0x85\ninternal/poll.(*pollDesc).wait(0x0?, 0xc0001e0780?, 0x0)\n\t/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27\ninternal/poll.execIO(0xc00017b198, 0xc000371d28)\n\t/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6\ninternal/poll.(*FD).acceptOne(0xc00017b180, 0x208, {0xc00022c000?, 0xc0001c9f40?, 0x1cf2860?}, 0xc0001c9540?)\n\t/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67\ninternal/poll.(*FD).Accept(0xc00017b180, 0xc000371f00)\n\t/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc\nnet.(*netFD).accept(0xc00017b180)\n\t/usr/local/go/src/net/fd_windows.go:166 +0x54\nnet.(*TCPListener).accept(0xc00034bec0)\n\t/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e\nnet.(*TCPListener).Accept(0xc00034bec0)\n\t/usr/local/go/src/net/tcpsock.go:315 +0x30\nstorj.io/drpc/drpcmigrate.(*ListenMux).monitorBase(0xc0001823f0)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:115 +0x3a\ncreated by 
storj.io/drpc/drpcmigrate.(*ListenMux).Run in goroutine 16\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:88 +0x11b\n\ngoroutine 1070 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc00014de30, {0x2134ae8?, 0xc0006fbf40?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/storj/private/server.(*Server).Run.func2()\n\t/go/src/storj.io/storj/private/server/server.go:399 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 188 [select, 5091 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc0001895f0, {0x2134ae8, 0xc0002d0f00})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 187 [select, 8 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc000189450, 0x1dc06a9?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 190 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000189860, {0x2134ae8, 0xc0002d0fa0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 189 [select, 7 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc0001895f0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 192 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00023e5b0, {0x2134ae8, 0xc0002d1040})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 194 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000048d00, {0x2134ae8, 0xc0002d10e0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 196 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000049110, {0x2134ae8, 0xc0002d1180})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 198 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e000, {0x2134ae8, 0xc0002d1220})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 200 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e1a0, {0x2134ae8, 0xc0002d12c0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 201 [select, 19 minutes]:\ndatabase/sql.(*DB).connectionCleaner(0xc00037e1a0, 0x0?)\n\t/usr/local/go/src/database/sql/sql.go:1061 +0x9c\ncreated by database/sql.(*DB).startCleanerLocked in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:1048 +0x105\n\ngoroutine 377 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037eb60, {0x2134ae8, 
0xc000578a50})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 1072 [semacquire]:\nsync.runtime_Semacquire(0x2127a80?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x15f38b3?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\nstorj.io/drpc/drpcctx.(*Tracker).Wait(0x2138f01?)\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcctx/tracker.go:43 +0x19\nstorj.io/drpc/drpcserver.(*Server).Serve(0xc00015cfa0, {0x2134ae8, 0xc0006fbf90}, {0x212f180?, 0xc0002acf90})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcserver/server.go:133 +0x41f\nstorj.io/storj/private/server.(*Server).Run.(*Server).Run.func4.func9()\n\t/go/src/storj.io/storj/private/server/server.go:422 +0x4f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 1039 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0002ade30?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1cc4ca0?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc00023c170)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 +0x25\nstorj.io/storj/storagenode/retain.(*Service).Run(0xc00023c0f0, {0x2134ab0, 0xc00037d9b0})\n\t/go/src/storj.io/storj/storagenode/retain/retain.go:274 +0x52f\nstorj.io/storj/private/lifecycle.(*Group).Run.func2.1({0x2134ab0?, 0xc00037d9b0?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:87 +0x2a\nruntime/pprof.Do({0x2134d18?, 0xc00034c3c0?}, {{0xc00034bf00?, 0x58?, 0x1d6ea00?}}, 0xc00027fe70)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\nstorj.io/storj/private/lifecycle.(*Group).Run.func2()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:86 +0x285\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 67 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc000188000, {0x2134ae8, 0xc0000780a0})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 323 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e820, {0x2134ae8, 0xc000549950})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 991 [sleep, 3 minutes]:\ntime.Sleep(0x8bb2c97000)\n\t/usr/local/go/src/runtime/time.go:195 +0x126\ngithub.com/spacemonkeygo/monkit/v3.(*ticker).run(0x28e7d60)\n\t/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/meter.go:203 +0x26\ncreated by github.com/spacemonkeygo/monkit/v3.(*ticker).register in goroutine 990\n\t/go/pkg/mod/github.com/spacemonkeygo/monkit/v3@v3.0.22/meter.go:195 +0x89\n\ngoroutine 1023 [semacquire, 12593 minutes]:\nsync.runtime_Semacquire(0xc0002ad0f8?)\n\t/usr/local/go/src/runtime/sema.go:62 +0x25\nsync.(*WaitGroup).Wait(0x1ce3f80?)\n\t/usr/local/go/src/sync/waitgroup.go:116 +0x48\ngolang.org/x/sync/errgroup.(*Group).Wait(0xc00016ce80)\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:56 
+0x25\nstorj.io/storj/private/server.(*Server).Run(0xc000088c80, {0x2134ab0, 0xc0001d60c0})\n\t/go/src/storj.io/storj/private/server/server.go:483 +0x130a\nstorj.io/storj/storagenode.New.func1({0x2134ab0, 0xc0001d60c0})\n\t/go/src/storj.io/storj/storagenode/peer.go:415 +0x1a5\nstorj.io/storj/private/lifecycle.(*Group).Run.func2.1({0x2134ab0?, 0xc0001d60c0?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:87 +0x2a\nruntime/pprof.Do({0x2134d18?, 0xc00034c320?}, {{0xc00034ae20?, 0xc0006d35a0?, 0x1670fe0?}}, 0xc00052be70)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\nstorj.io/storj/private/lifecycle.(*Group).Run.func2()\n\t/go/src/storj.io/storj/private/lifecycle/group.go:86 +0x285\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 7\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:75 +0x96\n\ngoroutine 227 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e340, {0x2134ae8, 0xc0002d1b30})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 266 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e680, {0x2134ae8, 0xc000548780})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 370 [select, 12593 minutes]:\ndatabase/sql.(*DB).connectionOpener(0xc00037e9c0, {0x2134ae8, 0xc000578820})\n\t/usr/local/go/src/database/sql/sql.go:1218 +0x87\ncreated by database/sql.OpenDB in goroutine 7\n\t/usr/local/go/src/database/sql/sql.go:791 +0x165\n\ngoroutine 1022 [runnable]:\nstorj.io/storj/private/lifecycle.(*Group).Run.func1({0x2134ab0?, 0xc0001d6060?})\n\t/go/src/storj.io/storj/private/lifecycle/group.go:73 +0x17f\nruntime/pprof.Do({0x2134d18?, 0xc00034c320?}, {{0xc00034a960?, 0xc000507fd0?, 0x166c6c5?}}, 0xc0006dd9c0)\n\t/usr/local/go/src/runtime/pprof/runtime.go:51 +0x9d\ncreated by storj.io/storj/private/lifecycle.(*Group).Run in goroutine 7\n\t/go/src/storj.io/storj/private/lifecycle/group.go:64 +0x509\n\ngoroutine 1068 [chan receive, 12593 minutes]:\nstorj.io/drpc/drpcmigrate.(*ListenMux).Run(0xc00014ddc0, {0x2134ae8?, 0xc0006fbf40?})\n\t/go/pkg/mod/storj.io/drpc@v0.0.34/drpcmigrate/mux.go:90 +0x12e\nstorj.io/storj/private/server.(*Server).Run.func1()\n\t/go/src/storj.io/storj/private/server/server.go:389 +0x1f\ngolang.org/x/sync/errgroup.(*Group).Go.func1()\n\t/go/pkg/mod/golang.org/x/sync@v0.6.0/errgroup/errgroup.go:78 +0x56\ncreated by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1023\n\t/go/pkg/mod/golan(...)

This seems like insufficient resources to run that many nodes?
Can you avoid virtualization, reduce the number of nodes, or increase the RAM on this host?

I have no personal demands; I'm reporting this just in case it's useful to someone.
All the instances have the same parameters: 4 GB RAM, 1 node = 1 disk, 1 node = 1 Windows VM.
Only this 1 node out of 13 caught the 100% CPU situation :man_shrugging:
The rest are fine, low CPU, no problem.
Normally all nodes have low CPU, even this one.
I reported some CPU % problems in another thread weeks before, but that somehow mitigated itself (I didn't do anything).

This is very interesting. But considering the virtualization it is not very helpful, unfortunately, especially for a Windows VM, and I assume it runs on a Linux hypervisor (so it's kind of expected).

Oh no, again.

This time I don't see the HDD working on any retain.
I have debug.addr:5998 activated and can take a look. What do you want me to check?

EDIT: I have logs from the debug endpoint, but they are too long to post (4 x .txt files, ~590 KB, more than 35000 chars).
Where should I send them, or how can I post them?

(Maybe add the possibility to attach .txt files to the forum?)

From debug.addr,
“What is running at the moment and how long has it been running?”:

localhost:5998/mon/ps

[92723218644096686,74178574915277348] storj.io/common/process.root() (elapsed: 46h37m59.751819s)
[2533932087216684165,74178574915277348] storj.io/storj/storagenode.(*Peer).Run() (elapsed: 46h37m58.1532998s)

[5898148841374853946,5879604197646034609] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Download() (elapsed: 46h37m45.773414157s)
[5916693485103673284,5879604197646034609] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m45.773389657s)
[5935238128832492621,5879604197646034609] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 46h37m45.773381757s)
[7437354270866858959,5879604197646034609] storj.io/storj/storagenode/piecestore.(*Endpoint).sendData() (elapsed: 46h37m45.384254257s)

[6046505991205408646,6027961347476589309] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Download() (elapsed: 46h37m45.696462357s)
[6065050634934227984,6027961347476589309] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m45.696442157s)
[6083595278663047321,6027961347476589309] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 46h37m45.696431657s)

[8723618791786958317,8705074148058138979] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Download() (elapsed: 46h37m47.611213557s)
[8742163435515777654,8705074148058138979] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m47.611189457s)
[8760708079244596992,8705074148058138979] storj.io/storj/storagenode/piecestore.(*Endpoint).Download() (elapsed: 46h37m47.611179857s)
[5861059553917215271,8705074148058138979] storj.io/storj/storagenode/piecestore.(*Endpoint).sendData() (elapsed: 46h37m45.850552557s)

[7179638520288769558,7161093876559950220] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Retain() (elapsed: 46h40m50.502441056s)
[849230905129810400,7161093876559950220] storj.io/storj/storagenode/piecestore.(*Endpoint).Retain() (elapsed: 46h40m50.179433156s)

[206801434973895258,188256791245075921] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h44m38.237226956s)
[225346078702714596,188256791245075921] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h44m38.237193856s)
[243890722431533933,188256791245075921] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h44m38.237178156s)

[923007885585582300,904463241856762962] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m55.274355557s)
[941552529314401637,904463241856762962] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m55.274323057s)
[960097173043220975,904463241856762962] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m55.274308257s)
[5601434541713744546,904463241856762962] storj.io/storj/storagenode/pieces.(*Writer).Commit() (elapsed: 46h37m45.885550957s)
[5619979185442563884,904463241856762962] storj.io/storj/storagenode/blobstore/filestore.(*blobWriter).Commit() (elapsed: 46h37m45.885454457s)
[5638523829171383221,904463241856762962] storj.io/storj/storagenode/blobstore/filestore.(*Dir).Commit() (elapsed: 46h37m45.885450957s)

[1491207737547416692,1472663093818597354] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m50.890753257s)
[1509752381276236029,1472663093818597354] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m50.890716857s)
[1528297025005055367,1472663093818597354] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m50.890703457s)

[1744148646119322346,1725604002390503009] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m47.344633957s)
[1762693289848141684,1725604002390503009] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m47.344624257s)
[1781237933576961021,1725604002390503009] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m47.344605857s)

[2722641516200068937,2704096872471249600] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h43m8.131717156s)
[2741186159928888275,2704096872471249600] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h43m8.131697556s)
[2759730803657707612,2704096872471249600] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h43m8.139170756s)

[3519258007620289550,3500713363891470212] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m53.808987457s)
[3537802651349108887,3500713363891470212] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m53.808980257s)
[3556347295077928225,3500713363891470212] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m53.808952657s)

[4043684468492920196,4025139824764100859] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m46.632049257s)
[4062229112221739534,4025139824764100859] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m46.632031657s)
[4080773755950558871,4025139824764100859] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m46.632026457s)

[4161636434497401292,4143091790768581954] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m50.233377957s)
[4180181078226220629,4143091790768581954] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m50.233369157s)
[4198725721955039967,4143091790768581954] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m50.233356157s)

[4166159435995028914,4147614792266209576] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h38m23.596002156s)
[4184704079723848251,4147614792266209576] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h38m23.595968856s)
[4203248723452667589,4147614792266209576] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h38m23.595955556s)

[5490166679340828521,5471622035612009184] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m46.033195357s)
[5508711323069647859,5471622035612009184] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m46.033189257s)
[5527255966798467196,5471622035612009184] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m46.033177857s)
[6435943509510614734,5471622035612009184] storj.io/storj/storagenode/pieces.(*Writer).Commit() (elapsed: 46h37m45.702672157s)
[6769747096629362809,5471622035612009184] storj.io/storj/storagenode/blobstore/filestore.(*blobWriter).Commit() (elapsed: 46h37m45.699132657s)
[6788291740358182146,5471622035612009184] storj.io/storj/storagenode/blobstore/filestore.(*Dir).Commit() (elapsed: 46h37m45.699122657s)

[5972327416290131296,5953782772561311959] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m45.780805657s)
[5990872060018950634,5953782772561311959] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m45.780803557s)
[6009416703747769971,5953782772561311959] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m45.780798757s)
[6732657809171724134,5953782772561311959] storj.io/storj/storagenode/pieces.(*Store).Writer() (elapsed: 46h37m45.699677357s)
[6751202452900543471,5953782772561311959] storj.io/storj/storagenode/blobstore/filestore.(*blobStore).Create() (elapsed: 46h37m45.699669657s)

[6379656143356095147,6361111499627275809] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h38m26.365637856s)
[6398200787084914484,6361111499627275809] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h38m26.365628456s)
[6416745430813733822,6361111499627275809] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h38m26.365612256s)

[7629484811786617404,7610940168057798067] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m48.596862057s)
[7648029455515436742,7610940168057798067] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m48.596833757s)
[7666574099244256079,7610940168057798067] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m48.596819157s)

[8044753469141466075,8026208825412646737] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h42m15.150525956s)
[8063298112870285412,8026208825412646737] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h42m15.150505156s)
[8081842756599104750,8026208825412646737] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h42m15.150494856s)

[8575261641956403617,8556716998227584279] storj.io/common/rpc/rpctracing./piecestore.Piecestore/Upload() (elapsed: 46h37m47.791627357s)
[8593806285685222954,8556716998227584279] storj.io/storj/storagenode/piecestore.live-request() (elapsed: 46h37m47.791619957s)
[8612350929414042292,8556716998227584279] storj.io/storj/storagenode/piecestore.(*Endpoint).Upload() (elapsed: 46h37m47.791605757s)

[4814923265861462678,74178574915277348] storj.io/storj/private/server.(*Server).Run() (elapsed: 46h37m57.023746s, orphaned)

[4648021472302088640,74178574915277348] storj.io/storj/storagenode/bandwidth.(*Service).Run() (elapsed: 46h37m57.0256715s, orphaned)

[6891923363489228478,74178574915277348] storj.io/storj/storagenode/collector.(*Service).Run() (elapsed: 46h37m57.0067691s, orphaned)

[6279950120438190340,74178574915277348] storj.io/storj/storagenode/console/consoleserver.(*Server).Run() (elapsed: 46h37m57.0074022s, orphaned)

[5018914346878475390,74178574915277348] storj.io/storj/storagenode/contact.(*Chore).Run() (elapsed: 46h37m57.0230037s, orphaned)

[6706476926201035103,74178574915277348] storj.io/storj/storagenode/forgetsatellite.(*Chore).Run() (elapsed: 46h37m57.0068629s, orphaned)

[6521030488912841728,74178574915277348] storj.io/storj/storagenode/gracefulexit.(*Chore).Run() (elapsed: 46h37m57.0070139s, orphaned)

[5649432233658332865,74178574915277348] storj.io/storj/storagenode/monitor.(*Service).Run() (elapsed: 46h37m57.0198591s, orphaned)

[5927601889590622928,74178574915277348] storj.io/storj/storagenode/orders.(*Service).Run() (elapsed: 46h37m57.0084075s, orphaned)

[5241450071624307440,74178574915277348] storj.io/storj/storagenode/pieces.(*CacheService).Run() (elapsed: 46h37m57.02045s, orphaned)

[5056003634336114065,74178574915277348] storj.io/storj/storagenode/pieces.(*TrashChore).Run() (elapsed: 46h37m57.0217291s, orphaned)
[8099197199571583706,74178574915277348] storj.io/storj/storagenode/pieces.(*Store).EmptyTrash() (elapsed: 46h42m56.528853934s)
[8117741843300403043,74178574915277348] storj.io/storj/storagenode/blobstore/filestore.(*blobStore).EmptyTrash() (elapsed: 46h42m56.528846034s)
[8136286487029222381,74178574915277348] storj.io/storj/storagenode/blobstore/filestore.(*Dir).EmptyTrash() (elapsed: 46h42m56.528842634s)

[5723610808573610215,74178574915277348] storj.io/storj/storagenode/retain.(*Service).Run() (elapsed: 46h37m57.0185438s, orphaned)
[5760700096031248890,74178574915277348] storj.io/storj/storagenode/retain.(*Service).retainPieces(storj.NodeID{0xa2, 0x8b, 0x4f, 0x4, 0xe1, 0xb, 0xae, 0x85, 0xd6, 0x7f, 0x4c, 0x6c, 0xb8, 0x2b, 0xf8, 0xd4, 0xc0, 0xf0, 0xf4, 0x7a, 0x8e, 0xa7, 0x26, 0x27, 0x52, 0x4d, 0xeb, 0x6e, 0xc0, 0x0, 0x0, 0x0}, time.Time(2024-04-30T19:59:59.998475+02:00)) (elapsed: 46h37m57.0194862s)
[5797789383488887565,74178574915277348] storj.io/storj/storagenode/pieces.(*Store).WalkSatellitePiecesToTrash(storj.NodeID{0xa2, 0x8b, 0x4f, 0x4, 0xe1, 0xb, 0xae, 0x85, 0xd6, 0x7f, 0x4c, 0x6c, 0xb8, 0x2b, 0xf8, 0xd4, 0xc0, 0xf0, 0xf4, 0x7a, 0x8e, 0xa7, 0x26, 0x27, 0x52, 0x4d, 0xeb, 0x6e, 0xc0, 0x0, 0x0, 0x0}, time.Time(2024-04-27T19:59:59.998475+02:00)) (elapsed: 46h37m57.0159884s)
[5834878670946526240,74178574915277348] storj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePiecesToTrash() (elapsed: 46h37m57.0114372s)
[8468218080438872165,74178574915277348] storj.io/storj/storagenode/pieces.(*FileWalker).WalkSatellitePieces() (elapsed: 46h37m56.9725872s)
[8505307367896510840,74178574915277348] storj.io/storj/storagenode/blobstore/filestore.(*Dir).WalkNamespace() (elapsed: 46h37m56.9714899s)
[8542396655354149515,74178574915277348] storj.io/storj/storagenode/blobstore/filestore.(*Dir).walkNamespaceInPath() (elapsed: 46h37m56.9714656s)
[7492988202053316971,74178574915277348] storj.io/storj/storagenode/pieces.storedPieceAccess.ModTime() (elapsed: 46h37m45.383118957s)

[4852012553319101353,74178574915277348] storj.io/storj/storagenode/version.(*Chore).Run() (elapsed: 46h37m57.0245149s, orphaned)

EDIT:
7 May 2024, 16:04 UTC+2:
This node is still at max CPU usage. I'm not restarting it; it's a living case.
I have the logs in .txt files for whoever wants them.
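For anyone who wants to capture the same kind of snapshot from their own node, the data can be pulled from the debug endpoint with plain HTTP requests. A minimal sketch, assuming the node was started with debug.addr set to :5998 as here; the /mon/ps path is the one used above, while the pprof goroutine path is the standard Go pprof handler and may differ by version, so treat it as an assumption:

# monkit span list: what is running right now and for how long (same output as above)
curl -s http://localhost:5998/mon/ps > mon-ps.txt

# full goroutine dump, same format as the stack traces posted earlier in this thread (assumed pprof path)
curl -s "http://localhost:5998/debug/pprof/goroutine?debug=2" > goroutines.txt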

Hello, I have the same problem, CPU at 99%. I'm not sure what's causing it; I've never had this issue before. The nodes are running on Ubuntu Server with Docker in two different locations and on different hardware.

A bug report has been created:

As a temporary workaround, developers suggest setting the parameter retain.concurrency to 1 (the default value in 1.102.3 is 5). See the link for details.
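For reference, a sketch of how that workaround can be applied. The option name retain.concurrency is taken from the suggestion above; the exact config file location and docker command are assumptions that depend on your own setup:

# Option 1: add the line to the node's config.yaml and restart the node
#   retain.concurrency: 1

# Option 2 (docker): pass it as a flag after the image name in your existing run command
docker run -d ... storjlabs/storagenode:latest --retain.concurrency=1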

Hey, any ideas when this might get fixed?

Not at this time. Did you try setting retain.concurrency to 1?

Yes, I have all my nodes set to 1 and it fixed it, but I think the filewalker is working slower now, because it takes only one satellite at a time?

Unfortunately, this is the only way to reduce the load so that your slow disk can keep up. Or do not use VMs, if you are using them to run nodes.
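If you want to check that the single retain worker is still making progress after this change, the retain / gc-filewalker log lines shown earlier in the thread can be filtered out of the node log, for example (the container name is just an example):

docker logs storagenode1 2>&1 | grep -E "retain|gc-filewalker" | tail -n 20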

As an example, I caught one of my nodes using 11% of my CPU.
That doesn't sound like much, but I think the extra energy consumption of not running on a potato should be taken into account with the next payout increase :laughing:

And it is not a VM :innocent:

We can still use less powerful devices; they would now be selected less often if they are slow to respond, but they would still get a chance to be selected at every interval configured on the satellite.

I believe that the first process here is the node itself?

The three others look like filewalkers (you can distinguish them by their command-line arguments).
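A quick way to check that in a docker setup (the container name is again just an example): list the processes with their full command lines; the lazy filewalkers run as separate storagenode subprocesses, so their arguments differ from the main node process.

# processes inside the container, including the full command line (CMD column)
docker top storagenode1

# or on the host, for a non-docker node
ps -eo pid,args | grep [s]toragenode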