Recovering a node after migrating off a failing HDD

To be honest, it is not obvious at all.
The node doesn't write to the directory, and there can be any number of reasons for that.
The same security policies take roughly 12-15 hours to apply, and if they turn out to be wrong, that's another 12-15 hours, then another...
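By "security policies" I mean re-applying the NTFS permissions on the data folder recursively; assuming that is what eats the time, the operation is roughly the following (the path and the account here are only an illustration, not necessarily my exact ones):

icacls "F:\data" /grant "Everyone:(OI)(CI)F" /T

With millions of small piece files a single recursive command like that easily runs for many hours.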

Should I just clear this checkbox?
[screenshot 2023-04-29_18-15-43]

On the other hand, the same checkbox is set on the rest of the disks and they work fine:
[screenshot 2023-04-29_18-18-15]
[screenshot 2023-04-29_18-17-54]
[screenshot 2023-04-29_18-17-37]

As I wrote at the beginning, CHKDSK has already been run.
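To be clear, that was a plain full check of the data volume, i.e. something along the lines of (the drive letter is illustrative):

chkdsk F: /f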

Is everything working now?

No, I haven't done anything else yet; I'm waiting for help.
The same errors:

2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.596Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.596Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.595Z	ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.597Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-04-30T03:32:49.597Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2023-04-30T03:32:49.597Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-04-30T03:32:49.598Z	INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
2023-04-30T03:32:49.617Z	ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:156\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:400\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
2023-04-30T03:32:49.631Z	ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.631Z	ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieces error: v0pieceinfodb: context canceled", "errorVerbose": "pieces error: v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Delete:163\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteExpired:346\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:328\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-04-30T03:32:49.725Z	ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
2023-04-30T03:32:49.769Z	ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SMNLDIGAPBJZNFJFOASMNKMVX46RS57557M7O22RBK3VHOMGHF2Q", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

All of the errors you copied are a consequence of the node being stopped, which most likely happened earlier in the log. What is stopping it? An update, or some other error even earlier in the log?

I didn't see any errors earlier in the log.

INFO	Anonymized tracing enabled	{"Process": "storagenode"}
INFO	Operator email	{"Process": "storagenode", "Address": "xxx@xxx.com"}
INFO	Operator wallet	{"Process": "storagenode", "Address": "xxx"}
INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	db.migration	Database Version	{"Process": "storagenode", "version": 54}
INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
INFO	Node xxx started	{"Process": "storagenode"}
INFO	Public server started on [::]:7777	{"Process": "storagenode"}
INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "4h42m43.783609126s"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details.	{"Process": "storagenode"}
INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "J2Z52RGC4GUZAAID24UCVVMMMWM6Q7SNHG3LH5OKYTUY4POMAMZQ"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:156\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:400\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieces error: v0pieceinfodb: context canceled", "errorVerbose": "pieces error: v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Delete:163\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteExpired:346\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:328\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WAORZGG43DJX7KXDLCZO5DT446LJS53JFS52TDBGUAV5IRRG5LIA", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WAORZGG43DJX7KXDLCZO5DT446LJS53JFS52TDBGUAV5IRRG5LIA", "error": "pieces error: context canceled; v0pieceinfodb: context canceled", "errorVerbose": "pieces error: context canceled; v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteExpired:349\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:328\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

The node was stopped because of a failing disk; after that I moved the data to a new disk with robocopy /MIR, ran CHKDSK, changed the security policies, and ended up in this loop.
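The copy itself was a full mirror of the storage folder, roughly like this (the source drive letter is only an example):

robocopy E:\data F:\data /MIR /R:2 /W:5

/MIR mirrors the whole tree, including removing files that are no longer present in the source.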

This message says that the OS requested the node to stop.
That could have been done by you or by some service.

Is watchtower running, by any chance?
If so, what is in that container's logs?
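For example (replace watchtower with your actual container name if it differs):

docker logs --tail 50 watchtower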

I didn't stop anything myself; I checked the port forwarding in the antivirus and on the router.

stopped and removed the node (roughly with the commands shown below);
deleted the log file and created it anew;
started the node and got the same result;
stopped the node;
posted a copy of the log above.
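The stop/remove cycle looked roughly like this (the container name and the log path are the ones from my setup, shown here only as an illustration):

docker stop -t 300 storagenode2
docker rm storagenode2
del D:\Storj\Logs\node2.log
type nul > D:\Storj\Logs\node2.log

after which the node was started again with the same docker run command as before.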

watchtower is running alongside it, as usual.
Its log:

2022-12-20 18:02:40 time="2022-12-20T15:02:40Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:5bd8d2096d18a5b13a73c041818d39021ef5aa4a5624d85fde11ac8f6b4e33d7)"
2022-12-20 18:02:43 time="2022-12-20T15:02:43Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:5bd8d2096d18a5b13a73c041818d39021ef5aa4a5624d85fde11ac8f6b4e33d7)"
2022-12-20 18:02:45 time="2022-12-20T15:02:45Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:5bd8d2096d18a5b13a73c041818d39021ef5aa4a5624d85fde11ac8f6b4e33d7)"
2022-12-20 18:02:46 time="2022-12-20T15:02:46Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:5bd8d2096d18a5b13a73c041818d39021ef5aa4a5624d85fde11ac8f6b4e33d7)"
2022-12-20 18:02:46 time="2022-12-20T15:02:46Z" level=info msg="Stopping /storagenode1 (d2e2539b99890f04be9b0cfa69cf4c6d5256c57f7eb7a3bc0322316355ecb149) with SIGTERM"
2022-12-20 18:02:47 time="2022-12-20T15:02:47Z" level=info msg="Stopping /storagenode3 (f453a703c513cafca90df8162657af8e5abcf09386f1050f7ed16896865bce57) with SIGTERM"
2022-12-20 18:02:49 time="2022-12-20T15:02:49Z" level=info msg="Stopping /storagenode5 (e5e99a17d66c922044b88863fe0d6a80b9e966d9c4c68d155a49f1658a84472e) with SIGTERM"
2022-12-20 18:02:50 time="2022-12-20T15:02:50Z" level=info msg="Stopping /storagenode2 (ce6ae3a4f21f9c156fc3695a09424ccdc410ac35f3ea45055e711a883bc92a9f) with SIGTERM"
2022-12-20 18:02:51 time="2022-12-20T15:02:51Z" level=info msg="Creating /storagenode2"
2022-12-20 18:02:52 time="2022-12-20T15:02:52Z" level=info msg="Creating /storagenode5"
2022-12-20 18:02:53 time="2022-12-20T15:02:53Z" level=info msg="Creating /storagenode3"
2022-12-20 18:02:54 time="2022-12-20T15:02:54Z" level=info msg="Creating /storagenode1"
2023-01-05 16:40:27 time="2023-01-05T13:40:27Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:6c052fa9240333232c98aa2137cc528a59e295efbae76aa3fe55a4921f6ec4d7)"
2023-01-05 16:40:29 time="2023-01-05T13:40:29Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:6c052fa9240333232c98aa2137cc528a59e295efbae76aa3fe55a4921f6ec4d7)"
2023-01-05 16:40:30 time="2023-01-05T13:40:30Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:6c052fa9240333232c98aa2137cc528a59e295efbae76aa3fe55a4921f6ec4d7)"
2023-01-05 16:40:32 time="2023-01-05T13:40:32Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:6c052fa9240333232c98aa2137cc528a59e295efbae76aa3fe55a4921f6ec4d7)"
2023-01-05 16:40:33 time="2023-01-05T13:40:33Z" level=info msg="Stopping /storagenode2 (617dd27eb47a24ed941517c898235189e89733a9fe67a39cc27f78eb51309695) with SIGTERM"
2023-01-05 16:41:05 time="2023-01-05T13:41:05Z" level=info msg="Stopping /storagenode5 (9b91414872117e2de0da97eb1872a7bba37cbfac16d70feebb0d9326842b5985) with SIGTERM"
2023-01-05 16:41:06 time="2023-01-05T13:41:06Z" level=info msg="Stopping /storagenode3 (87f262e700d0f81b0e29ba879c7618b549fffb1c17a99441ef461c3d34cdb9fb) with SIGTERM"
2023-01-05 16:41:08 time="2023-01-05T13:41:08Z" level=info msg="Stopping /storagenode1 (2b0a7de0f936a67626ece4a8aa9dc7ad280ea80bbc9bdc0c229c4bdd6b66e624) with SIGTERM"
2023-01-05 16:41:09 time="2023-01-05T13:41:09Z" level=info msg="Creating /storagenode1"
2023-01-05 16:41:10 time="2023-01-05T13:41:10Z" level=info msg="Creating /storagenode3"
2023-01-05 16:41:11 time="2023-01-05T13:41:11Z" level=info msg="Creating /storagenode5"
2023-01-05 16:41:12 time="2023-01-05T13:41:12Z" level=info msg="Creating /storagenode2"
2023-02-07 06:02:00 time="2023-02-07T03:02:00Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:9417fad048b0168cb5bd354d83ab561c45896db87ea06cc6541232d4359278ae)"
2023-02-07 06:02:01 time="2023-02-07T03:02:01Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:9417fad048b0168cb5bd354d83ab561c45896db87ea06cc6541232d4359278ae)"
2023-02-07 06:02:03 time="2023-02-07T03:02:03Z" level=info msg="Found new storjlabs/storagenode:latest image (sha256:9417fad048b0168cb5bd354d83ab561c45896db87ea06cc6541232d4359278ae)"
2023-02-07 06:02:04 time="2023-02-07T03:02:04Z" level=info msg="Stopping /storagenode1 (2758e9bb921db33e6e932ec00b160643df7d9e670b927b523493c1a0c782e189) with SIGTERM"
2023-02-07 06:02:06 time="2023-02-07T03:02:06Z" level=info msg="Stopping /storagenode3 (2b36b5f96e4d949f3ead6702811a16f3ab09c20dc1da5dd75a1874c51df3f5ae) with SIGTERM"
2023-02-07 06:02:07 time="2023-02-07T03:02:07Z" level=info msg="Stopping /storagenode5 (40e10262a34bdefc5a2286659c602a6b3ffe47b33c52ec21ad271568461cc1c6) with SIGTERM"
2023-02-07 06:02:08 time="2023-02-07T03:02:08Z" level=info msg="Creating /storagenode5"
2023-02-07 06:02:10 time="2023-02-07T03:02:10Z" level=info msg="Creating /storagenode3"
2023-02-07 06:02:11 time="2023-02-07T03:02:11Z" level=info msg="Creating /storagenode1"
2023-04-15 08:11:27 time="2023-04-15T05:11:27Z" level=info msg="Waiting for running update to be finished..."
2023-04-17 19:27:58 time="2023-04-17T16:27:58Z" level=info msg="Waiting for running update to be finished..."
2023-04-27 07:25:37 time="2023-04-27T04:25:37Z" level=info msg="Waiting for running update to be finished..."

Well, something is stopping the node.
Please copy the logs from the moment the node starts (you may redact private information).
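If the log is not redirected to a file, something along these lines would capture it (adjust the timestamp and the container name for your setup):

docker logs --since 2023-04-30T00:00:00Z storagenode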

I removed and restarted watchtower.
New errors appeared in the node's log:

WARN	collector	file does not exist	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SMNLDIGAPBJZNFJFOASMNKMVX46RS57557M7O22RBK3VHOMGHF2Q"}
INFO	collector	deleted expired piece info from DB	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "SMNLDIGAPBJZNFJFOASMNKMVX46RS57557M7O22RBK3VHOMGHF2Q"}
ERROR	collector	error during collecting pieces: 	{"Process": "storagenode", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).GetExpired:39\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:556\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

This is the log from the moment the node started (I stripped out the repeated lines, because the forum rejects the post over the character limit):

INFO	Anonymized tracing enabled	{"Process": "storagenode"}
INFO	Operator email	{"Process": "storagenode", "Address": "xxx@xxx.com"}
INFO	Operator wallet	{"Process": "storagenode", "Address": "xxx"}
INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	db.migration	Database Version	{"Process": "storagenode", "version": 54}
INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
INFO	Node xxx started	{"Process": "storagenode"}
INFO	Public server started on [::]:7777	{"Process": "storagenode"}
INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "4h42m43.783609126s"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details.	{"Process": "storagenode"}
INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "J2Z52RGC4GUZAAID24UCVVMMMWM6Q7SNHG3LH5OKYTUY4POMAMZQ"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:156\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:400\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}
ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "EYNTYLZJQ6JPPFGJFOBJ6JF3CQUITXL4ZXWS57TKYX3D5X5SHPZQ", "error": "pieces error: v0pieceinfodb: context canceled", "errorVerbose": "pieces error: v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).Delete:163\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteExpired:346\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:328\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
ERROR	collector	unable to update piece info	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WAORZGG43DJX7KXDLCZO5DT446LJS53JFS52TDBGUAV5IRRG5LIA", "error": "pieceexpirationdb: context canceled", "errorVerbose": "pieceexpirationdb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceExpirationDB).DeleteFailed:99\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteFailed:582\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:109\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	collector	unable to delete piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "WAORZGG43DJX7KXDLCZO5DT446LJS53JFS52TDBGUAV5IRRG5LIA", "error": "pieces error: context canceled; v0pieceinfodb: context canceled", "errorVerbose": "pieces error: context canceled; v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/pieces.(*Store).DeleteExpired:349\n\tstorj.io/storj/storagenode/pieces.(*Store).Delete:328\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:97\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:40\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

It could not find the piece on the disk to delete it, so it removed the record and moved on.
The next error is almost certainly because a node shutdown was requested above.

I don't see any messages from storagenode-updater. Did you perhaps redirect the log to a file? storagenode-updater still writes its messages to the console, not to the log file.
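If the redirection was done via config.yaml, it is usually a single line like this (the path is just an example):

log.output: "/app/config/node.log"

Only the storagenode process writes there; the updater output still goes to the container console.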

Please take a look at the docker logs:

docker logs storagenode
docker logs storagenode
Error: No such container: storagenode

The logs are redirected to a separate file, and that is what I'm copying from.
These logs are from after I:

stopped and removed the node;
deleted the log file and created it anew;
started the node and got the same result;
stopped the node;

INFO	Anonymized tracing enabled	{"Process": "storagenode"}
INFO	Operator email	{"Process": "storagenode", "Address": "xxx@xxx"}
INFO	Operator wallet	{"Process": "storagenode", "Address": "xxx"}
INFO	server	kernel support for server-side tcp fast open remains disabled.	{"Process": "storagenode"}
INFO	server	enable with: sysctl -w net.ipv4.tcp_fastopen=3	{"Process": "storagenode"}
INFO	Telemetry enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	Event collection enabled	{"Process": "storagenode", "instance ID": "xxx"}
INFO	db.migration	Database Version	{"Process": "storagenode", "version": 54}
INFO	preflight:localtime	start checking local system clock with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	preflight:localtime	local system clock is in sync with trusted satellites' system clock.	{"Process": "storagenode"}
INFO	bandwidth	Performing bandwidth usage rollups	{"Process": "storagenode"}
INFO	trust	Scheduling next refresh	{"Process": "storagenode", "after": "5h8m46.463477265s"}
INFO	Node xxx started	{"Process": "storagenode"}
INFO	Public server started on [::]:7777	{"Process": "storagenode"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	Private server started on 127.0.0.1:7778	{"Process": "storagenode"}
INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details.	{"Process": "storagenode"}
WARN	collector	file does not exist	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "7CPYF7IJZXJ7HLHAACGN7LEOS3OCZ7PYL7NJEQ6RJLVUD7JM4SIQ"}
INFO	collector	deleted expired piece info from DB	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "7CPYF7IJZXJ7HLHAACGN7LEOS3OCZ7PYL7NJEQ6RJLVUD7JM4SIQ"}
INFO	collector	deleted expired piece info from DB	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "HWBJJVSIUEBFFBHJDYBKMQNBHZBQEVC67GE5S3PI3IY7U4HQNYXQ"}
INFO	collector	deleted expired piece	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Piece ID": "R5Y35PVUTVIUU27FN5Z5GE35D3TZDVC6FVRT6KZU7GWXMKN6S2GA"}
INFO	collector	collect	{"Process": "storagenode", "count": 1}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
INFO	pieces:trash	emptying trash started	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
INFO	Got a signal from the OS: "terminated"	{"Process": "storagenode"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
ERROR	contact:service	ping satellite failed 	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
INFO	contact:service	context cancelled	{"Process": "storagenode", "Satellite ID": "12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo"}
ERROR	piecestore:cache	error getting current used space: 	{"Process": "storagenode", "error": "context canceled; context canceled; context canceled; context canceled; context canceled; context canceled", "errorVerbose": "group:\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled\n--- context canceled"}
ERROR	pieces:trash	emptying trash failed	{"Process": "storagenode", "error": "pieces error: filestore error: context canceled", "errorVerbose": "pieces error: filestore error: context canceled\n\tstorj.io/storj/storage/filestore.(*blobStore).EmptyTrash:156\n\tstorj.io/storj/storagenode/pieces.(*BlobsUsageCache).EmptyTrash:316\n\tstorj.io/storj/storagenode/pieces.(*Store).EmptyTrash:400\n\tstorj.io/storj/storagenode/pieces.(*TrashChore).Run.func1.1:83\n\tstorj.io/common/sync2.(*Workplace).Start.func1:89"}

Yes, I understand that they are redirected. Please tell me the correct container name, then.
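You can list the names of the running containers with, for example:

docker ps --format "{{.Names}}"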

downloading storagenode-updater
--2023-04-30 04:30:35--  https://version.storj.io/processes/storagenode-updater/minimum/url?os=linux&arch=amd64
Resolving version.storj.io (version.storj.io)... ::ffff:34.173.164.90, 34.173.164.90
Connecting to version.storj.io (version.storj.io)|::ffff:34.173.164.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 92 [text/plain]
Saving to: 'STDOUT'

     0K                                                       100%  230M=0s

2023-04-30 04:30:36 (230 MB/s) - written to stdout [92/92]

--2023-04-30 04:30:36--  https://github.com/storj/storj/releases/download/v1.76.2/storagenode-updater_linux_amd64.zip
Resolving github.com (github.com)... 140.82.121.3, ::ffff:140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/128089774/ae76d8c3-6e8b-4c0f-a673-e272dee4d8e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230430%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230430T043037Z&X-Amz-Expires=300&X-Amz-Signature=d92f4e97344ce9a1decc3a89cdc029b9dcd50483915df76edaa7c78af5bc8d77&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=128089774&response-content-disposition=attachment%3B%20filename%3Dstoragenode-updater_linux_amd64.zip&response-content-type=application%2Foctet-stream [following]
--2023-04-30 04:30:36--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/128089774/ae76d8c3-6e8b-4c0f-a673-e272dee4d8e3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230430%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230430T043037Z&X-Amz-Expires=300&X-Amz-Signature=d92f4e97344ce9a1decc3a89cdc029b9dcd50483915df76edaa7c78af5bc8d77&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=128089774&response-content-disposition=attachment%3B%20filename%3Dstoragenode-updater_linux_amd64.zip&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... ::ffff:185.199.110.133, ::ffff:185.199.111.133, ::ffff:185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|::ffff:185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9144544 (8.7M) [application/octet-stream]
Saving to: '/tmp/storagenode-updater.zip'
2023-04-30 04:30:38 (6.98 MB/s) - '/tmp/storagenode-updater.zip' saved [9144544/9144544]

downloading storagenode
--2023-04-30 04:30:39--  https://version.storj.io/processes/storagenode/minimum/url?os=linux&arch=amd64
Resolving version.storj.io (version.storj.io)... ::ffff:34.173.164.90, 34.173.164.90
Connecting to version.storj.io (version.storj.io)|::ffff:34.173.164.90|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84 [text/plain]
Saving to: 'STDOUT'

     0K                                                       100%  190M=0s

2023-04-30 04:30:39 (190 MB/s) - written to stdout [84/84]

--2023-04-30 04:30:39--  https://github.com/storj/storj/releases/download/v1.76.2/storagenode_linux_amd64.zip
Resolving github.com (github.com)... ::ffff:140.82.121.3, 140.82.121.3
Connecting to github.com (github.com)|::ffff:140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/128089774/953ccf3d-977c-4234-83c3-e3103f7bce50?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230430%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230430T043040Z&X-Amz-Expires=300&X-Amz-Signature=75ca0d88d6ed88067b41025503923d3e2a24d3f8e319d9712bf75c705643abf2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=128089774&response-content-disposition=attachment%3B%20filename%3Dstoragenode_linux_amd64.zip&response-content-type=application%2Foctet-stream [following]
--2023-04-30 04:30:40--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/128089774/953ccf3d-977c-4234-83c3-e3103f7bce50?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20230430%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230430T043040Z&X-Amz-Expires=300&X-Amz-Signature=75ca0d88d6ed88067b41025503923d3e2a24d3f8e319d9712bf75c705643abf2&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=128089774&response-content-disposition=attachment%3B%20filename%3Dstoragenode_linux_amd64.zip&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17227030 (16M) [application/octet-stream]
Saving to: '/tmp/storagenode.zip'
2023-04-30 04:30:42 (7.09 MB/s) - '/tmp/storagenode.zip' saved [17227030/17227030]
2023-04-30 04:30:43,383 INFO Set uid to user 0 succeeded
2023-04-30 04:30:43,390 INFO RPC interface 'supervisor' initialized
2023-04-30 04:30:43,390 INFO supervisord started with pid 1
2023-04-30 04:30:44,392 INFO spawned: 'processes-exit-eventlistener' with pid 54
2023-04-30 04:30:44,406 INFO spawned: 'storagenode' with pid 55
2023-04-30 04:30:44,421 INFO spawned: 'storagenode-updater' with pid 62
2023-04-30T04:30:44.447Z        INFO    Anonymized tracing enabled      {"Process": "storagenode-updater"}
2023-04-30T04:30:44.453Z        INFO    Running on version      {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.76.2"}
2023-04-30T04:30:44.453Z        INFO    Downloading versions.   {"Process": "storagenode-updater", "Server Address": "https://version.storj.io"}
2023-04-30T04:30:45.010Z        INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode", "Version": "v1.76.2"}
2023-04-30T04:30:45.010Z        INFO    New version is being rolled out but hasn't made it to this node yet     {"Process": "storagenode-updater", "Service": "storagenode"}
2023-04-30T04:30:45.030Z        INFO    Current binary version  {"Process": "storagenode-updater", "Service": "storagenode-updater", "Version": "v1.76.2"}
2023-04-30T04:30:45.030Z        INFO    New version is being rolled out but hasn't made it to this node yet     {"Process": "storagenode-updater", "Service": "storagenode-updater"}
2023-04-30 04:30:46,031 INFO success: processes-exit-eventlistener entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-30 04:30:46,031 INFO success: storagenode entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-30 04:30:46,032 INFO success: storagenode-updater entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-30 04:31:12,276 WARN received SIGTERM indicating exit request
2023-04-30 04:31:12,276 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2023-04-30T04:31:12.276Z        INFO    Got a signal from the OS: "terminated"  {"Process": "storagenode-updater"}
2023-04-30 04:31:12,278 INFO stopped: storagenode-updater (exit status 0)
2023-04-30 04:31:13,940 INFO stopped: storagenode (exit status 0)
2023-04-30 04:31:13,941 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)

In case it matters at this stage:
I checked the node's port with portchecker, and the port is closed.
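A local equivalent of that check, assuming the default external port, would be something like this in PowerShell:

Test-NetConnection -ComputerName localhost -Port 28967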

So, something is stopping the node from the outside, and it is not storagenode-updater: although a new version is available, it should not yet have been applied to your node.

Take a look at the docker events for that same time:

docker events --since 2023-04-30T04:31:12Z

Also, does your docker run command include --restart unless-stopped or --restart always?
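For reference, in a typical run command that flag sits right after docker run -d; in the sketch below everything except --restart unless-stopped is a placeholder or your own existing values:

docker run -d --restart unless-stopped --stop-timeout 300 ^
  -p 28967:28967/tcp -p 28967:28967/udp -p 127.0.0.1:14002:14002 ^
  -e WALLET="0x..." -e EMAIL="you@example.com" -e ADDRESS="your.external.address:28967" -e STORAGE="8TB" ^
  --mount type=bind,source=D:\identity\storagenode,destination=/app/identity ^
  --mount type=bind,source=D:\data,destination=/app/config ^
  --name storagenode2 storjlabs/storagenode:latest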

2023-04-30T07:31:12.276949981+03:00 container kill 2d211f86281ce938bdee5ed14968a6b3d27f8144480684cd09b3cf1c8a4f4318 (desktop.docker.io/mounts/0/Source=F:\Identity\storagenode2\, desktop.docker.io/mounts/0/SourceKind=hostFile, desktop.docker.io/mounts/0/Target=/app/identity, desktop.docker.io/mounts/1/Source=F:\data\, desktop.docker.io/mounts/1/SourceKind=hostFile, desktop.docker.io/mounts/1/Target=/app/config, desktop.docker.io/mounts/2/Source=D:\Storj\Logs\node2.log, desktop.docker.io/mounts/2/SourceKind=hostFile, desktop.docker.io/mounts/2/Target=/app/logs/node2.log, image=storjlabs/storagenode:latest, name=storagenode2, signal=15)
2023-04-30T07:31:13.977489097+03:00 container die 2d211f86281ce938bdee5ed14968a6b3d27f8144480684cd09b3cf1c8a4f4318 (desktop.docker.io/mounts/0/Source=F:\Identity\storagenode2\, desktop.docker.io/mounts/0/SourceKind=hostFile, desktop.docker.io/mounts/0/Target=/app/identity, desktop.docker.io/mounts/1/Source=F:\data\, desktop.docker.io/mounts/1/SourceKind=hostFile, desktop.docker.io/mounts/1/Target=/app/config, desktop.docker.io/mounts/2/Source=D:\Storj\Logs\node2.log, desktop.docker.io/mounts/2/SourceKind=hostFile, desktop.docker.io/mounts/2/Target=/app/logs/node2.log, exitCode=0, image=storjlabs/storagenode:latest, name=storagenode2)
2023-04-30T07:31:14.912744973+03:00 network disconnect d1f8814ec2fc0b08a428aa3af2aab05142840e88bf3ab9c9f532f09e2ce3aba2 (container=2d211f86281ce938bdee5ed14968a6b3d27f8144480684cd09b3cf1c8a4f4318, name=bridge, type=bridge)
2023-04-30T07:31:14.947000373+03:00 container stop 2d211f86281ce938bdee5ed14968a6b3d27f8144480684cd09b3cf1c8a4f4318 (desktop.docker.io/mounts/0/Source=F:\Identity\storagenode2\, desktop.docker.io/mounts/0/SourceKind=hostFile, desktop.docker.io/mounts/0/Target=/app/identity, desktop.docker.io/mounts/1/Source=F:\data\, desktop.docker.io/mounts/1/SourceKind=hostFile, desktop.docker.io/mounts/1/Target=/app/config, desktop.docker.io/mounts/2/Source=D:\Storj\Logs\node2.log, desktop.docker.io/mounts/2/SourceKind=hostFile, desktop.docker.io/mounts/2/Target=/app/logs/node2.log, image=storjlabs/storagenode:latest, name=storagenode2)