Re-created node is offline

Dear team,

After re-creating the storage node in Docker on my Synology, it always shows as offline. The docker run command and error log are below. Could you please take a look?

Regards,
Neo

sudo docker run -d --restart unless-stopped --stop-timeout 300 -p 28969:28967/tcp -p 28969:28967/udp -p 192.168.50.20:14002:14002 -e WALLET="0xxxxxxxxxxxxxxxxxxxxxxx" -e EMAIL="xxxxxxxx@gmail.com" -e ADDRESS="xxxxxx:28969" -e STORAGE="22TB" --mount type=bind,source="/volume1/docker/storj/identity/",destination=/app/identity --mount type=bind,source="/volume1/docker/storj/config",destination=/app/config --name storagenode storjlabs/storagenode:latest

sudo docker logs --tail 50 storagenode
2024-03-21 05:09:47,141 WARN received SIGQUIT indicating exit request
2024-03-21 05:09:47,142 INFO waiting for storagenode, processes-exit-eventlistener, storagenode-updater to die
2024-03-21T05:09:47Z INFO Got a signal from the OS: “terminated” {“Process”: “storagenode-updater”}
2024-03-21 05:09:47,144 INFO stopped: storagenode-updater (exit status 0)
2024-03-21T05:09:47Z INFO Anonymized tracing enabled {“process”: “storagenode”}
2024-03-21T05:09:47Z INFO Operator email {“process”: “storagenode”, “Address”: “xxxxxxk@gmail.com”}
2024-03-21T05:09:47Z INFO Operator wallet {“process”: “storagenode”, “Address”: “0xxxxxxxxxxxxxxxxxxxxxx”}
2024-03-21T05:09:47Z INFO server kernel support for tcp fast open unknown {“process”: “storagenode”}
2024-03-21T05:09:48Z INFO Telemetry enabled {“process”: “storagenode”, “instance ID”: “12GiGyCRE4WDVoR4Sjzvt6zKLfGYcYVPtiR8J25BLtFRkxVbYfq”}
2024-03-21T05:09:48Z INFO Event collection enabled {“process”: “storagenode”, “instance ID”: “12GiGyCRE4WDVoR4Sjzvt6zKLfGYcYVPtiR8J25BLtFRkxVbYfq”}
2024-03-21T05:09:48Z INFO db.migration Database Version {“process”: “storagenode”, “version”: 54}
2024-03-21T05:09:49Z INFO preflight:localtime start checking local system clock with trusted satellites’ system clock. {“process”: “storagenode”}
2024-03-21 05:09:50,203 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-03-21T05:09:52Z INFO preflight:localtime local system clock is in sync with trusted satellites’ system clock. {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO trust Scheduling next refresh {“process”: “storagenode”, “after”: “7h6m5.699124512s”}
2024-03-21T05:09:52Z INFO bandwidth Performing bandwidth usage rollups {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO Node 12GiGyCRE4WDVoR4Sjzvt6zKLfGYcYVPtiR8J25BLtFRkxVbYfq started {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO Public server started on [::]:7777 {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO Private server started on 127.0.0.1:7778 {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details. {“process”: “storagenode”}
2024-03-21T05:09:52Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2024-03-21T05:09:52Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”}
2024-03-21T05:09:52Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
2024-03-21T05:09:52Z INFO pieces:trash emptying trash started {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2024-03-21T05:09:52Z ERROR services unexpected shutdown of a runner {“process”: “storagenode”, “name”: “piecestore:monitor”, “error”: “piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory”, “errorVerbose”: “piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1.1:158\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/monitor.(*Service).Run.func1:141\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2024-03-21T05:09:52Z ERROR collector error during collecting pieces: {“process”: “storagenode”, “error”: “v0pieceinfodb: context canceled”, “errorVerbose”: “v0pieceinfodb: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*v0PieceInfoDB).GetExpired:194\n\tstorj.io/storj/storagenode/pieces.(*Store).GetExpired:576\n\tstorj.io/storj/storagenode/collector.(*Service).Collect:88\n\tstorj.io/storj/storagenode/collector.(*Service).Run.func1:57\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/collector.(*Service).Run:53\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2024-03-21T05:09:52Z ERROR piecestore:cache error during init space usage db: {“process”: “storagenode”, “error”: “piece space used: context canceled”, “errorVerbose”: “piece space used: context canceled\n\tstorj.io/storj/storagenode/storagenodedb.(*pieceSpaceUsedDB).Init:73\n\tstorj.io/storj/storagenode/pieces.(*CacheService).Run:81\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2024-03-21T05:09:52Z ERROR nodestats:cache Get pricing-model/join date failed {“process”: “storagenode”, “error”: “context canceled”}
2024-03-21T05:09:52Z ERROR version failed to get process version info {“process”: “storagenode”, “error”: “version checker client: Get "https://version.storj.io": context canceled”, “errorVerbose”: “version checker client: Get "https://version.storj.io": context canceled\n\tstorj.io/storj/private/version/checker.(*Client).All:68\n\tstorj.io/storj/private/version/checker.(*Client).Process:108\n\tstorj.io/storj/private/version/checker.(*Service).checkVersion:101\n\tstorj.io/storj/private/version/checker.(*Service).CheckVersion:75\n\tstorj.io/storj/storagenode/version.(*Chore).Run.func1:65\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/storj/storagenode/version.(*Chore).Run:64\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2.1:87\n\truntime/pprof.Do:51\n\tstorj.io/storj/private/lifecycle.(*Group).Run.func2:86\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75”}
2024-03-21T05:09:52Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup saltlake.tardigrade.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}
2024-03-21T05:09:52Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE”}
2024-03-21T05:09:52Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup us1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}
2024-03-21T05:09:52Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S”}
2024-03-21T05:09:52Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup eu1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}
2024-03-21T05:09:52Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs”}
2024-03-21T05:09:52Z ERROR contact:service ping satellite failed {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”, “attempts”: 1, “error”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled”, “errorVerbose”: “ping satellite: rpc: tcp connector failed: rpc: dial tcp: lookup ap1.storj.io: operation was canceled\n\tstorj.io/common/rpc.HybridConnector.DialContext.func1:190”}
2024-03-21T05:09:52Z INFO contact:service context cancelled {“process”: “storagenode”, “Satellite ID”: “121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6”}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory
2024-03-21 05:09:53,373 INFO waiting for storagenode, processes-exit-eventlistener to die
2024-03-21 05:09:53,376 INFO stopped: storagenode (exit status 1)
2024-03-21 05:09:53,377 INFO stopped: processes-exit-eventlistener (terminated by SIGTERM)
2024-03-21 05:09:56,491 INFO Set uid to user 0 succeeded
2024-03-21 05:09:56,511 INFO RPC interface ‘supervisor’ initialized
2024-03-21 05:09:56,512 INFO supervisord started with pid 1
2024-03-21 05:09:57,515 INFO spawned: ‘processes-exit-eventlistener’ with pid 11
2024-03-21 05:09:57,519 INFO spawned: ‘storagenode’ with pid 12
2024-03-21 05:09:57,522 INFO spawned: ‘storagenode-updater’ with pid 13
2024-03-21T05:09:57Z INFO Anonymized tracing enabled {“Process”: “storagenode-updater”}
2024-03-21T05:09:57Z INFO Running on version {“Process”: “storagenode-updater”, “Service”: “storagenode-updater”, “Version”: “v1.95.1”}
2024-03-21T05:09:57Z INFO Downloading versions. {“Process”: “storagenode-updater”, “Server Address”: “https://version.storj.io”}

Is the drive NTFS-formatted? I think I've seen this problem before when the drive wasn't formatted as ext4.

--mount type=bind,source="/volume1/docker/storj/config

--mount type=bind,source="/volume1/storj/data

I use the second mount for the data. The folder name doesn't seem to matter, but is your data actually stored inside the config folder?
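For example, you could check on the host whether piece data ended up under the config mount (paths assumed from the run command above, adjust to your layout):

sudo ls -la /volume1/docker/storj/config/storage
sudo ls /volume1/docker/storj/config/storage/blobs | head

If that storage subfolder exists with blobs, trash and the databases inside, then yes, the data lives inside the config folder.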

Did you follow the official guide for QUIC?
Check my guide, maybe you missed a step.
https://forum.storj.io/t/my-docker-run-commands-for-multinodes-on-synology-nas/22034
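The receive buffer warning in your log is the part the QUIC setup deals with; a typical fix (example values from the quic-go wiki, run on the Synology host and re-applied after a reboot, e.g. via a scheduled boot-up task) is to raise the kernel UDP buffer limits:

sudo sysctl -w net.core.rmem_max=2500000
sudo sysctl -w net.core.wmem_max=2500000

Note that this only clears the buffer warning; it is unrelated to the storage-dir-verification error that actually stops the node.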

Also check the permissions on the docker folder.
I don't use the docker folder for anything; my path is /volume1/Storj/.
And you don't need to specify a data path, just identity and config.
Data will be stored in the config location.
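For reference, a minimal sketch of the two mounts the container needs (example paths following my layout, yours will differ):

--mount type=bind,source="/volume1/Storj/identity",destination=/app/identity
--mount type=bind,source="/volume1/Storj/config",destination=/app/config

and a quick host-side permission check:

sudo ls -la /volume1/Storj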

It is Btrfs, on RAID 0. Is that the reason?

I remember seeing this while browsing the forums, but I don't remember where. I heard that ext4 is the recommended filesystem.

This is the reason:

Do you have the /volume1/docker/storj/config/storage/storage-dir-verification file?
If not, please search for where it is. It must be in the storage subfolder of the data location; otherwise you may have lost data. In the best case you are simply pointing at the wrong path for your existing data.
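If you're not sure where it ended up, a host-side search along these lines (adjust the volume if your data is elsewhere) should show every copy on the NAS:

sudo find /volume1 -name storage-dir-verification 2>/dev/null

The directory that contains the storage folder holding this file is what should be mounted to /app/config.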

This is not a reliable setup:

  1. Topics tagged btrfs
  2. With a single disk failure, the entire node will be gone.