Installation problems with new nodes on Synology and Docker

Hello,
I have been running two nodes on my Synology for about three years, and so far they have been working without problems. I wanted to set up a new node on an acquaintance's Synology and got the following log:

2024-10-15 21:46:24,582 INFO stopped: storagenode-updater (exit status 0)
2024-10-15 21:46:24,583 INFO stopped: storagenode (terminated by SIGTERM)
2024-10-15 21:46:24,584 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
2024-10-15 21:46:29,614 INFO Set uid to user 0 succeeded
2024-10-15 21:46:29,626 INFO RPC interface 'supervisor' initialized
2024-10-15 21:46:29,627 INFO supervisord started with pid 1
2024-10-15 21:46:30,629 INFO spawned: 'processes-exit-eventlistener' with pid 11
2024-10-15 21:46:30,631 INFO spawned: 'storagenode' with pid 12
2024-10-15 21:46:30,633 INFO spawned: 'storagenode-updater' with pid 13
2024-10-15T21:46:30Z	INFO	Anonymized tracing enabled	{"Process": "storagenode-updater"}
2024-10-15T21:46:30Z	INFO	Running on version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.113.2"}
2024-10-15T21:46:30Z	INFO	Downloading versions.	{"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2024-10-15T21:46:30Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-10-15T21:46:30Z	INFO	Operator email	{"Process": "storagenode", "Address": "marco@steinbach-home.de"}
2024-10-15T21:46:30Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
2024-10-15T21:46:31Z	INFO	server	kernel support for tcp fast open unknown	{"Process": "storagenode"}
2024-10-15T21:46:31Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.113.2"}
2024-10-15T21:46:31Z	INFO	New version is being rolled out but hasn't made it to this node yet	{"Process": "storagenode-updater", "Service": "storagenode"}
2024-10-15T21:46:31Z	INFO	Current binary version	{"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.113.2"}
2024-10-15T21:46:31Z	INFO	New version is being rolled out but hasn't made it to this node yet	{"Process": "storagenode-updater", "Service": "storagenode-updater"}
2024-10-15T21:46:31Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL"}
2024-10-15T21:46:31Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL"}
2024-10-15 21:46:31,678 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-15 21:46:31,678 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-15 21:46:31,678 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2024-10-15T21:46:31Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 61}
2024-10-15T21:46:34Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	Node 12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL started	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	Public server started on [::]:7777	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "5h37m48.87365591s"}
2024-10-15T21:46:35Z	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2024-10-15T21:46:35Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 0}
2024-10-15T21:46:35Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:35Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:35Z	ERROR	services	unexpected shutdown of a runner	{"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-15T21:46:35Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:35Z	ERROR	version	failed to get process version info	{"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-15T21:46:35Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
2024-10-15T21:46:35Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:35Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-15T21:46:35Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:35Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-15T21:46:35Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:35Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:35Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess exited with status	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2024-10-15T21:46:35Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:196\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:447\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-10-15T21:46:35Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:35Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-15T21:46:37Z	ERROR	failure during run	{"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2024-10-15 21:46:37,710 INFO exited: storagenode (exit status 1; not expected)
2024-10-15 21:46:38,714 INFO spawned: 'storagenode' with pid 47
2024-10-15 21:46:38,714 WARN received SIGQUIT indicating exit request
2024-10-15 21:46:38,715 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-10-15T21:46:38Z	INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode-updater"}
2024-10-15 21:46:38,721 INFO stopped: storagenode-updater (exit status 0)
2024-10-15T21:46:38Z	INFO	Anonymized tracing enabled	{"Process": "storagenode"}
2024-10-15T21:46:38Z	INFO	Operator email	{"Process": "storagenode", "Address": "marco@steinbach-home.de"}
2024-10-15T21:46:38Z	INFO	Operator wallet	{"Process": "storagenode", "Address": "0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}
2024-10-15T21:46:39Z	INFO	server	kernel support for tcp fast open unknown	{"Process": "storagenode"}
2024-10-15T21:46:39Z	INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL"}
2024-10-15T21:46:39Z	INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL"}
2024-10-15T21:46:39Z	INFO	db.migration	Database Version	{"Process": "storagenode", "version": 61}
2024-10-15T21:46:41Z	INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
2024-10-15 21:46:41,842 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-10-15T21:46:42Z	INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	collector	expired pieces collection started	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	collector	expired pieces collection completed	{"Process": "storagenode", "count": 0}
2024-10-15T21:46:42Z	INFO	Node 12bQ4nCpDCdCe5Xyo65nHeDtC73f9EpF2GT69V4Cgz1nisbkbBL started	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	Public server started on [::]:7777	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "5h20m47.612138596s"}
2024-10-15T21:46:42Z	INFO	bandwidth	Persisting bandwidth usage cache to db	{"Process": "storagenode"}
2024-10-15T21:46:42Z	INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:42Z	INFO	lazyfilewalker.trash-cleanup-filewalker	starting subprocess	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:42Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess started	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:42Z	ERROR	services	unexpected shutdown of a runner	{"Process": "storagenode", "name": "piecestore:monitor", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-15T21:46:42Z	ERROR	nodestats:cache	Get pricing-model/join date failed	{"Process": "storagenode", "error": "context canceled"}
2024-10-15T21:46:42Z	ERROR	piecestore:cache	error persisting cache totals to the database: 	{"Process": "storagenode", "error": "piece space used: context canceled", "errorVerbose": "piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).UpdateTrashTotal:150\n\tstorj.io/storj/storagenode/pieces.(*CacheService).PersistCacheTotals:143\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run.func1:108\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-15T21:46:42Z	ERROR	version	failed to get process version info	{"Process": "storagenode", "error": "version checker client: Get \"https://version.storj.io\": context canceled", "errorVerbose": "version checker client: Get \"https://version.storj.io\": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:89\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:104\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:78\n\tstorj.io/storj/storagenode/version.(*Chore).checkVersion:115\n\tstorj.io/storj/storagenode/version.(*Chore).RunOnce:71\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
2024-10-15T21:46:42Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:42Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-10-15T21:46:42Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:42Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-10-15T21:46:42Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:42Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-10-15T21:46:42Z	INFO	lazyfilewalker.trash-cleanup-filewalker	subprocess exited with status	{"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "status": -1, "error": "signal: killed"}
2024-10-15T21:46:42Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: lazyfilewalker: signal: killed", "errorVerbose": "pieces error: lazyfilewalker: signal: killed\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*process).run:85\n\tstorj.io/storj/storagenode/pieces/lazyfilewalker.(*Supervisor).WalkCleanupTrash:196\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:447\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:84\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2024-10-15T21:46:42Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled", "errorVerbose": "ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190"}
2024-10-15T21:46:42Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-10-15T21:46:45Z	ERROR	failure during run	{"Process": "storagenode", "error": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory", "errorVerbose": "piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:159\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:140\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:78"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2024-10-15 21:46:45,357 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-10-15 21:46:45,363 INFO stopped: storagenode (exit status 1)
2024-10-15 21:46:45,364 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

I then tried the same thing with a third node on my own Synology and got the same error. I applied chmod -R 777 and chown to the mounted folders and passed the UID and GID to the container, but all attempts failed. My Docker Compose looks like this:

version: "3.3"
services:
  storagenode:
    image: storjlabs/storagenode:latest
    container_name: storagenode5
    volumes:
      - type: bind
        source: "/volume4/Storj5/node5/identity"
        target: /app/identity
      - type: bind
        source: "/volume4/Storj5/node5/data"
        target: /app/config
    ports:
      - "28969:28967/tcp" 
      - "28969:28967/udp"
      - "14004:14002/tcp"
    restart: unless-stopped
    stop_grace_period: 300s
    environment:
      - WALLET=0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      - EMAIL=xxxxxx@xxxxxxxxxx.de
      - ADDRESS=xxxxxxxxx.synology.me:28969
      - STORAGE=5.5TB

Maybe someone can give me a hint as to what might be causing this, or whether something has changed in the setup.

Hello @Tipitopi,
welcome back!

This error means that the protection file can no longer be accessed.
Please post the output of these commands:

ls -l "/volume4/Storj5/node5/data"
ls -l "/volume4/Storj5/node5/data/storage"

If you used your user's UID and GID to change the owner of the data with the chown command, you also need to add this user and its group to your docker-compose.yaml file:

version: "3.3"
services:
  storagenode:
    image: storjlabs/storagenode:latest
    container_name: storagenode5
    user: "UID:GID"
    volumes:
....

Alternatively, you can change the owner of the data to root:root. In that case it is not necessary to add a user to docker-compose.yaml.
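
As a minimal sketch of that ownership change, reusing the node directory from the compose file above (an assumption; adjust the path to your own layout):

# hand the whole node directory (identity + data) to root
sudo chown -R root:root /volume4/Storj5/node5

The container runs as root by default (the log above shows "Set uid to user 0 succeeded"), so after this it can read and write everything under the bind mounts.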

Good morning Alexey,

here are the excerpts:

root@NAS:/volume1/docker# ls -l "/volume4/Storj5/node5/data/storage"
total 1256
----------+ 1 root root 36864 Oct 16 08:32 bandwidth.db
----------+ 1 root root 32768 Oct 16 08:32 bandwidth.db-shm
----------+ 1 root root     0 Oct 16 08:32 bandwidth.db-wal
d---------+ 2 root root  4096 Oct 16 08:29 blobs
----------+ 1 root root 24576 Oct 16 08:32 garbage_collection_filewalker_progress.db
----------+ 1 root root 32768 Oct 16 08:32 garbage_collection_filewalker_progress.db-shm
----------+ 1 root root 32992 Oct 16 08:32 garbage_collection_filewalker_progress.db-wal
----------+ 1 root root 32768 Oct 16 08:32 heldamount.db
----------+ 1 root root 32768 Oct 16 08:32 heldamount.db-shm
----------+ 1 root root     0 Oct 16 08:32 heldamount.db-wal
----------+ 1 root root 16384 Oct 16 08:32 info.db
----------+ 1 root root 32768 Oct 16 08:32 info.db-shm
----------+ 1 root root     0 Oct 16 08:32 info.db-wal
----------+ 1 root root 24576 Oct 16 08:32 notifications.db
----------+ 1 root root 32768 Oct 16 08:32 notifications.db-shm
----------+ 1 root root     0 Oct 16 08:32 notifications.db-wal
----------+ 1 root root 32768 Oct 16 08:32 orders.db
----------+ 1 root root 32768 Oct 16 08:32 orders.db-shm
----------+ 1 root root 32992 Oct 16 08:32 orders.db-wal
----------+ 1 root root 28672 Oct 16 08:32 piece_expiration.db
----------+ 1 root root 32768 Oct 16 08:32 piece_expiration.db-shm
----------+ 1 root root 41232 Oct 16 08:32 piece_expiration.db-wal
d---------+ 2 root root  4096 Oct 16 08:31 piece_expirations
----------+ 1 root root 24576 Oct 16 08:32 pieceinfo.db
----------+ 1 root root 32768 Oct 16 08:32 pieceinfo.db-shm
----------+ 1 root root 32992 Oct 16 08:32 pieceinfo.db-wal
----------+ 1 root root 24576 Oct 16 08:32 piece_spaced_used.db
----------+ 1 root root 32768 Oct 16 08:32 piece_spaced_used.db-shm
----------+ 1 root root     0 Oct 16 08:32 piece_spaced_used.db-wal
----------+ 1 root root 24576 Oct 16 08:32 pricing.db
----------+ 1 root root 32768 Oct 16 08:32 pricing.db-shm
----------+ 1 root root 32992 Oct 16 08:32 pricing.db-wal
----------+ 1 root root 24576 Oct 16 08:32 reputation.db
----------+ 1 root root 32768 Oct 16 08:32 reputation.db-shm
----------+ 1 root root 32992 Oct 16 08:32 reputation.db-wal
----------+ 1 root root 32768 Oct 16 08:32 satellites.db
----------+ 1 root root 32768 Oct 16 08:32 satellites.db-shm
----------+ 1 root root 41232 Oct 16 08:32 satellites.db-wal
----------+ 1 root root 24576 Oct 16 08:32 secret.db
----------+ 1 root root 32768 Oct 16 08:32 secret.db-shm
----------+ 1 root root    32 Oct 16 08:32 secret.db-wal
----------+ 1 root root 24576 Oct 16 08:32 storage_usage.db
----------+ 1 root root 32768 Oct 16 08:32 storage_usage.db-shm
----------+ 1 root root     0 Oct 16 08:32 storage_usage.db-wal
d---------+ 2 root root  4096 Oct 16 08:29 temp
d---------+ 2 root root  4096 Oct 16 08:31 trash
----------+ 1 root root 20480 Oct 16 08:32 used_serial.db
----------+ 1 root root 32768 Oct 16 08:32 used_serial.db-shm
----------+ 1 root root 41232 Oct 16 08:32 used_serial.db-wal
----------+ 1 root root 24576 Oct 16 08:32 used_space_per_prefix.db
----------+ 1 root root 32768 Oct 16 08:32 used_space_per_prefix.db-shm
----------+ 1 root root     0 Oct 16 08:32 used_space_per_prefix.db-wal


root@NAS:~# ls -l "/volume4/Storj5/node5/data"
total 48
d---------+ 4 root root  4096 Oct 15 21:12 orders
d---------+ 2 root root  4096 Oct 15 21:12 retain
----------+ 1 root root 32768 Oct 16 07:47 revocations.db
d---------+ 6 root root  4096 Oct 16 07:47 storage
----------+ 1 root root   933 Oct 16 07:47 trust-cache.json
root@NAS:~#

The storage-dir-verification file is not created at all.

I had tried UID and GID before, which is why they were no longer in the compose file. I have tried out so many things :slight_smile:

By the way, this is now a fresh installation; I did not run chmod on the folders this time. But files and folders are being created, so it can't be a permissions problem, can it? I had also set the whole folder with chmod -R 777 before, and the result was the same.

In that case this file has disappeared somewhere, because it is created once when you start a node with the SETUP=true flag.
If it is a new node that has never been online, you can run this once; however, the configuration file must be deleted first, otherwise the node will refuse to create this protection file.
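
For reference, a one-time setup run along the lines of the official storagenode instructions, reusing the paths and container name from the compose file above (an assumption; adjust to your layout). Run it once, let it exit, then start the node with your normal compose file:

docker run --rm -e SETUP="true" \
    --mount type=bind,source="/volume4/Storj5/node5/identity",destination=/app/identity \
    --mount type=bind,source="/volume4/Storj5/node5/data",destination=/app/config \
    --name storagenode5 storjlabs/storagenode:latest

This creates config.yaml and the storage-dir-verification protection file in the data directory; it must not be run again on a node that already holds data.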

Oh man… the SETUP=true parameter slipped right past me :expressionless:
The whole time I had been trying to start a new node with the docker-compose file that I had copied from my running node and modified.
Thank you for your help…

It seems you also created the entire folder structure manually; otherwise I cannot explain how you ended up with at least a blobs subfolder.

Yes, I had created the folders storage/blobs, storage/temp, and storage/trash, because without the setup parameter the node complained that they were missing; but then the other errors appeared, and that is when I posted here.
Anyway, the node is running now… I had simply forgotten - SETUP=true.
