First node unable to start with multiple errors

Hello,

I’m trying to set up my first node, but I’m not able to get the node online.

Looking at the storage node logs, I see a lot of errors, the first one being:

nodestats:cache	Get pricing-model/join date failed	{"error": "context canceled"}

I’m not sure what the problem is, and the error message doesn’t give me many clues.

Here are all the errors I’m seeing in the logs:

...
2023-12-14T20:34:10-05:00	ERROR	nodestats:cache	Get pricing-model/join date failed	{"error": "context canceled"}
2023-12-14T20:34:10-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:34:10-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:34:10-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:34:10-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: context canceled", "errorVerbose": "ping satellite: context canceled\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:203\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:34:10-05:00	INFO	contact:service	context cancelled	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-12-14T20:34:10-05:00	INFO	contact:service	context cancelled	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-12-14T20:34:10-05:00	INFO	contact:service	context cancelled	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-14T20:34:10-05:00	INFO	contact:service	context cancelled	{"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
...
2023-12-14T20:34:12-05:00	INFO	pieces:trash	emptying trash started	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-12-14T20:34:12-05:00	INFO	pieces:trash	emptying trash started	{"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-14T20:34:12-05:00	INFO	pieces:trash	emptying trash started	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-14T20:34:12-05:00	INFO	pieces:trash	emptying trash started	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-12-14T20:35:33-05:00	ERROR	nodestats:cache	Get stats query failed	{"error": "nodestats: EOF; nodestats: EOF; nodestats: EOF; nodestats: EOF", "errorVerbose": "group:\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetReputationStats:74\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats.func1:152\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats:151\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func2:118\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetReputationStats:74\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats.func1:152\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats:151\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func2:118\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetReputationStats:74\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats.func1:152\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats:151\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func2:118\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetReputationStats:74\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats.func1:152\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheReputationStats:151\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func2:118\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:16-05:00	ERROR	nodestats:cache	Get disk space usage query failed	{"error": "nodestats: EOF; nodestats: EOF; nodestats: EOF; nodestats: EOF", "errorVerbose": "group:\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetDailyStorageUsage:123\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage.func1:177\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage:176\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:130\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetDailyStorageUsage:123\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage.func1:177\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage:176\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:130\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetDailyStorageUsage:123\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage.func1:177\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage:176\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:130\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- nodestats: EOF\n\tstorj.io/storj/storagenode/nodestats.(*Service).GetDailyStorageUsage:123\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage.func1:177\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheSpaceUsage:176\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:130\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:16-05:00	ERROR	nodestats:cache	payouts err	{"satellite": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2023-12-14T20:36:17-05:00	ERROR	nodestats:cache	payouts err	{"satellite": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2023-12-14T20:36:17-05:00	ERROR	nodestats:cache	payouts err	{"satellite": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2023-12-14T20:36:18-05:00	ERROR	nodestats:cache	payouts err	{"satellite": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2023-12-14T20:36:18-05:00	ERROR	nodestats:cache	Get held amount query failed	{"error": "payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE; payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE; payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE; payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE", "errorVerbose": "group:\n--- payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE\n\tstorj.io/storj/storagenode/payouts.(*Endpoint).GetPaystub:73\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount.func1:204\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount:196\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:135\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE\n\tstorj.io/storj/storagenode/payouts.(*Endpoint).GetPaystub:73\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount.func1:204\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount:196\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:135\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE\n\tstorj.io/storj/storagenode/payouts.(*Endpoint).GetPaystub:73\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount.func1:204\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount:196\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:135\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75\n--- payouts service: node not found: 12GV9xcVHozGh8xBKwnBV1P2UVJXRkry2TBLHtnZuvSkCnmeKpE\n\tstorj.io/storj/storagenode/payouts.(*Endpoint).GetPaystub:73\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount.func1:204\n\tstorj.io/storj/storagenode/nodestats.(*Cache).satelliteLoop:261\n\tstorj.io/storj/storagenode/nodestats.(*Cache).CacheHeldAmount:196\n\tstorj.io/storj/storagenode/nodestats.(*Cache).Run.func3:135\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:21-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:209\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:22-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:209\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:23-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:209\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}
2023-12-14T20:36:24-05:00	ERROR	contact:service	ping satellite failed 	{"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out", "errorVerbose": "ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 191.111.33.41:28967: connect: connection timed out\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:209\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:157\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

Does anyone know what the problem might be?


Please search for a FATAL error in your logs, because a "context canceled" during the check-in with the satellite can happen if the service is stopping.
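
If your node writes to a log file, a quick scan along these lines will surface any FATAL entries. This is just a sketch: the log path below is a placeholder, so point it at wherever your node actually logs.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Placeholder path -- replace with your node's actual log file.
	f, err := os.Open("/mnt/storj/node.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Storage node log lines (with errorVerbose stack traces) can be very long.
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)

	for scanner.Scan() {
		line := scanner.Text()
		if strings.Contains(line, "FATAL") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}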

It seems this could be related to an unsigned identity, or to your external contact address not being reachable:

Please check everything from this checklist:

  1. Check your identity
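
For the identity step, a rough self-check could look like the sketch below. It assumes the default identity location on Linux and the certificate counts from the Storj identity docs (2 BEGIN markers in ca.cert, 3 in identity.cert); adjust the path for your own setup.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func countBegin(path string) int {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("read %s: %v", path, err)
	}
	return strings.Count(string(data), "BEGIN")
}

func main() {
	// Default identity directory on Linux; adjust for your OS / install.
	dir := os.Getenv("HOME") + "/.local/share/storj/identity/storagenode/"

	// A signed identity is expected to show 2 BEGIN markers in ca.cert
	// and 3 in identity.cert (per the Storj identity documentation).
	fmt.Println("ca.cert BEGIN count:      ", countBegin(dir+"ca.cert"))       // expect 2
	fmt.Println("identity.cert BEGIN count:", countBegin(dir+"identity.cert")) // expect 3
}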

The identity seems to be fine:

However, I did see a FATAL entry in the logs:

2023-12-15T19:03:18-05:00	FATAL	Unrecoverable error	{"error": "invalid contact.external-address: lookup \"masked-noip-host.ddns.net\" failed: lookup masked-noip-host.ddns.net: no such host", "errorVerbose": "invalid contact.external-address: lookup \"masked-noip-host.ddns.net\" failed: lookup masked-noip-host.ddns.net: no such host\n\tstorj.io/storj/storagenode.(*Config).Verify:165\n\tmain.cmdRun:57\n\tmain.newRunCmd.func1:32\n\tstorj.io/private/process.cleanup.func1.4:393\n\tstorj.io/private/process.cleanup.func1:411\n\tgithub.com/spf13/cobra.(*Command).execute:852\n\tgithub.com/spf13/cobra.(*Command).ExecuteC:960\n\tgithub.com/spf13/cobra.(*Command).Execute:897\n\tstorj.io/private/process.ExecWithCustomOptions:112\n\tstorj.io/private/process.ExecWithCustomConfigAndLogger:77\n\tstorj.io/private/process.ExecWithCustomConfig:72\n\tstorj.io/private/process.Exec:62\n\tmain.(*service).Execute.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75"}

Also, using the Open Port Check Tool (Test Port Forwarding on Your Router), port 28967 appears to be closed, but I don’t understand why, since the router is supposed to have it open and forwarding requests to the storage node on that port. On my local network, I can connect to port 28967 without issues.
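
For reference, this is roughly how I’m testing both things from outside my network. The hostname below stands in for the masked DDNS name from the FATAL entry above, and 28967 is my node port.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Masked DDNS hostname and node port from my config -- replace with your own.
	host := "masked-noip-host.ddns.net"
	port := "28967"

	// 1. Does the DDNS hostname resolve at all? (This is what the FATAL error complains about.)
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
	} else {
		fmt.Println("DNS lookup OK:", addrs)
	}

	// 2. Is the node port reachable through that address?
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP dial OK: port", port, "is reachable")
}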

I’m guessing both things are related?

Thank you.

Check if you are behind CGNAT. If so, contact your provider and tell them you want to run an IP camera and need a public IPv4 address (not to be confused with a static IP).
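
One way to compare: fetch the IP the internet sees for your connection (any IP-echo service works; ipify is just one example) and check it against the WAN IP on your router’s status page. A minimal sketch:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Ask an external service which IP your traffic comes from.
	resp, err := http.Get("https://api.ipify.org")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	ip, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// If this IP does not match the WAN IP on your router's status page,
	// you are most likely behind CGNAT and port forwarding will not work.
	fmt.Println("Public IP seen from outside:", string(ip))
}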


Perhaps you need to register/activate this No-IP host. You also need to configure the DDNS updater, either on your router in the DDNS section (recommended) or on the host running the node (not recommended).

And as @daki82 said, you need to compare the IP from yougetsignal with the WAN IP on your router. If they do not match, port forwarding will not work and you need to contact your ISP to disable CGNAT. You need a public IP; it can be dynamic, but it must be public (i.e. the WAN IP matches the IP from yougetsignal). The alternative is to use a VPN service with a port forwarding feature, such as portmap.io, ngrok, PIA, AirVPN, PureVPN, etc.
