After node migration to a new disk, dashboard shows node as offline / misconfigured

Looking at the log I found these errors:

2023-04-27T22:13:05.665Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs, attempts: 9, error: ping satellite: check-in ratelimit: node rate limited by id, errorVerbose: ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}
2023-04-27T22:13:06.469Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB, attempts: 9, error: ping satellite: check-in ratelimit: node rate limited by id, errorVerbose: ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}
2023-04-27T22:13:07.983Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 12tRQrMTWUWwzwGh18i7Fqs67kmdhH9t6aToeiwbo5mfS2rUmo, attempts: 9, error: ping satellite: check-in ratelimit: node rate limited by id, errorVerbose: ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}
2023-04-27T22:13:08.281Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S, attempts: 9, error: ping satellite: check-in ratelimit: node rate limited by id, errorVerbose: ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}
2023-04-27T22:13:09.516Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE, attempts: 9, error: ping satellite: check-in ratelimit: node rate limited by id, errorVerbose: ping satellite: check-in ratelimit: node rate limited by id\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:143\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}
2023-04-27T22:13:14.688Z ERROR contact:service ping satellite failed {Process: storagenode, Satellite ID: 121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6, attempts: 9, error: ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 77.68.79.30:28967: connect: connection refused, errorVerbose: ping satellite: failed to ping storage node, your node indicated error code: 0, rpc: tcp connector failed: rpc: dial tcp 77.68.79.30:28967: connect: connection refused\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:149\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:102\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:99\n\tstorj.io/common/sync2.(*Cycle).Start.func1:77\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:75}

This points to your port forwarding or firewall not being configured correctly. Did your internal IP change? Did your external IP change?
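One quick way to confirm, as a sketch (assuming nc and ss are available on your systems; 77.68.79.30:28967 is taken from the log above, substitute your own address):

```shell
# From a machine OUTSIDE your network, test whether the forwarded
# port answers (the check must traverse the port forward to be valid):
nc -vz -w 5 77.68.79.30 28967

# On the host itself, confirm the node is actually listening locally:
ss -tln | grep 28967
```

If the local listener is there but the outside check fails, the problem is between the internet and the host (forwarding, firewall, or a changed IP), not the node itself.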

No, I didn't change anything on the network or firewall side; the Docker setup is still the same.


You may not have changed it but something has changed. Your node is no longer contactable here - 77.68.79.30:28967.

Post your run command. Check your external IP. Is your node IP still the same as the one listed in the port forwarding rule?
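For the external-IP check, a minimal sketch (assuming curl is installed; ifconfig.me is one of several public IP-echo services, any equivalent works):

```shell
# Print the public IP as the internet currently sees it:
curl -s https://ifconfig.me
echo

# Compare it with the ADDRESS value baked into your run command:
grep ADDRESS start.sh
```

If the two differ, the node is announcing an address the satellites can no longer reach you at.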

root@VPS-UK:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:50:56:02:05:72 brd ff:ff:ff:ff:ff:ff
inet 77.68.79.30/32 brd 77.68.79.30 scope global dynamic ens192
valid_lft 39432sec preferred_lft 39432sec
inet6 fe80::250:56ff:fe02:572/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:1c:41:9b:05 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:1cff:fe41:9b05/64 scope link
valid_lft forever preferred_lft forever
5: veth3c931ca@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether da:ce:6a:30:98:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::d8ce:6aff:fe30:9806/64 scope link
valid_lft forever preferred_lft forever
7: veth5ccd91b@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 2a:af:6d:83:1b:79 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::28af:6dff:fe83:1b79/64 scope link
valid_lft forever preferred_lft forever
20: veth22fd92a@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e6:0d:96:fc:69:64 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::e40d:96ff:fefc:6964/64 scope link
valid_lft forever preferred_lft forever
root@VPS-UK:~# cat start.sh
docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 14002:14002 \
    -e WALLET="0xC47c…59E1" \
    -e EMAIL="fa…@yahoo.it" \
    -e ADDRESS="77.68.79.30:28967" \
    -e STORAGE="3TB" \
    --user $(id -u):$(id -g) \
    --mount type=bind,source="/RSNO05/identity/storagenode",destination=/app/identity \
    --mount type=bind,source="/RSNO05/storage",destination=/app/config \
    --name storagenode storjlabs/storagenode:latest \
    --operator.wallet-features=zksync

Did your public IP change?

The public IP is the same. I tried mounting the old disk and the node started correctly. I noticed that the user and group owning the directories/files differ between the old and the new disk. On the old one it is "admin administrators":

drwxrwxrwx 6 admin administrators 4.0K 2023-04-28 08:18 ./
drwxrwxrwx 4 admin administrators 4.0K 2022-12-17 00:08 ../
-rw------- 1 admin administrators 9.6K 2022-12-17 08:10 config.yaml
drwxr-xr-x 3 admin administrators 4.0K 2022-12-17 00:22 identity/
drwx------ 4 admin administrators 4.0K 2022-12-06 00:25 orders/
drwxr-xr-x 2 admin administrators 4.0K 2022-12-17 00:08 @Recently-Snapshot/
-rw------- 1 admin administrators 32K 2023-04-28 08:18 revocations.db
drwx------ 6 admin administrators 4.0K 2023-04-28 08:19 storage/
-rw------- 1 admin administrators 1.4K 2023-04-28 08:18 trust-cache.json

while on the new one it is "root root":

drwxr-xr-x 6 root root 4096 Apr 27 22:55 ./
drwxr-xr-x 20 root root 4096 Apr 24 07:40 ../
-rw------- 1 root root 9813 Apr 27 21:24 config.yaml
drwxr-xr-x 3 root root 4096 Apr 27 11:47 identity/
drwx------ 2 root root 16384 Apr 24 07:28 lost+found/
drwxr-xr-x 5 root root 4096 Apr 27 05:22 orders/
-rw------- 1 root root 32768 Apr 27 21:31 revocations.db
drwx------ 4 root root 4096 Apr 28 05:25 storage/
-rw------- 1 root root 1374 Apr 27 21:32 trust-cache.json

What do I have to do to fix this issue?

Either remove this from your run command:

--user $(id -u):$(id -g)

and run your node with sudo, or change the owner to your user:

sudo chown $(id -u):$(id -g) -R /RSNO05/
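After the chown, a quick way to verify it took effect everywhere (a sketch; /RSNO05 is the mount point from the run command above, substitute your own path):

```shell
# List anything under the data path still owned by a different user.
# Empty output means ownership is now consistent:
find /RSNO05 ! -user "$(id -un)" -ls
```

With ownership fixed, the container started with --user $(id -u):$(id -g) can read and write the data again.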