Container restarting | identity folder | ports

Dear all,

I have just finished installing my node on a Synology NAS 920+. The container keeps restarting, and I think I know why: I have messed up the “identity” folder. But maybe I am wrong, so here is what happened.

I decided to create the identity on a more powerful computer, since that should be quicker than generating it on the Synology 920+. Because that “powerful” computer already runs a node, the folder in which I created the new identity was called “storagenode2”.

Full path:

C:\Users\aseeg\AppData\Roaming\Storj\Identity\storagenode2

When I then copied the folder to the NAS, I decided to move the .cert files out of the “storagenode2” folder into the “identity” folder, with the idea of removing one level of nesting. Once moved over to the NAS, the final path was:

/volume1/docker/storj/identity

Now I would like to know whether the changes I made could somehow compromise the certificates. If so, I would also like to know how much freedom I have regarding the folder structure, folder names, etc.

And of course I would like to know what I should do now, since the container keeps restarting every few seconds:

  1. Copy the identity over once more without changing the folder structure or names?

  2. Try to rebuild the correct folder structure?

  3. Recreate the identity directly on the NAS?

Ports
As I already have a node on the same network and behind the same public IP address, I need to change the default port from 28967 to, for example, 28968. How should I configure the router, though? The router asks me to specify the external port and the internal port, and I am not entirely sure which of the two is which.

-p 28968:28967/tcp

Is 28968 the external one? In that case the router would forward the data it receives on port 28968 to port 28967 of the NAS. There should be no conflict with the existing node, because it runs on a different computer with a different internal IP. Is this correct?

Thank you very much for any explanations.

No, the certificates are not tied to a particular folder name or structure; moving or renaming the folder does not compromise them, as long as your run command points at the new path.
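
If you want to double-check that the copied files survived the move intact, the documentation’s identity check can be run against the path from your post (expected counts shown as trailing comments):

grep -c BEGIN /volume1/docker/storj/identity/ca.cert          # should output 2
grep -c BEGIN /volume1/docker/storj/identity/identity.cert    # should output 3
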
To see why your container is restarting, please provide the result of this command:

docker logs --tail 10 storagenode

You need to forward external port 28968 (both TCP and UDP) to port 28968 on the Synology’s internal IP. So yes, 28968 is the external port, and the internal port is also 28968, because that is the host side of the -p mapping; Docker then maps host port 28968 to the container’s 28967. In the docker run command you would specify:

-p 28968:28967/tcp -p 28968:28967/udp -e ADDRESS=your.external.address:28968
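
For reference, a full run command for this second node might look roughly like the sketch below. Only the identity path and the port mapping come from this thread; the data path, wallet, email, and storage size are placeholders to replace with your own values:

docker run -d --restart unless-stopped --stop-timeout 300 \
    -p 28968:28967/tcp \
    -p 28968:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0xPLACEHOLDER" \
    -e EMAIL="you@example.com" \
    -e ADDRESS="your.external.address:28968" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/volume1/docker/storj/identity,destination=/app/identity \
    --mount type=bind,source=/volume1/docker/storj/data,destination=/app/config \
    --name storagenode storj/storagenode:latest

Traffic then flows in two hops: the router forwards WAN port 28968 to the Synology’s LAN IP on port 28968, and Docker forwards host port 28968 to port 28967 inside the container.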


Hello @Alexey,

please find the log below:

2022-01-29T23:04:55.953Z INFO contact:service context cancelled {"Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2022-01-29T23:04:55.953Z ERROR contact:service ping satellite failed {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "attempts": 1, "error": "ping satellite: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:114\n\tstorj.io/common/rpc.TCPConnector.DialContext:78\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:220\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:110\n\tstorj.io/common/rpc/rpcpool.(*Pool).get:105\n\tstorj.io/common/rpc/rpcpool.(*Pool).Get:128\n\tstorj.io/common/rpc.Dialer.dialPool:186\n\tstorj.io/common/rpc.Dialer.DialNodeURL:109\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:124\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-01-29T23:04:55.953Z INFO contact:service context cancelled {"Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2022-01-29T23:04:55.953Z ERROR contact:service ping satellite failed {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB", "attempts": 1, "error": "ping satellite: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:114\n\tstorj.io/common/rpc.TCPConnector.DialContext:78\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:220\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:110\n\tstorj.io/common/rpc/rpcpool.(*Pool).get:105\n\tstorj.io/common/rpc/rpcpool.(*Pool).Get:128\n\tstorj.io/common/rpc.Dialer.dialPool:186\n\tstorj.io/common/rpc.Dialer.DialNodeURL:109\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:124\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-01-29T23:04:55.953Z ERROR contact:service ping satellite failed {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "attempts": 1, "error": "ping satellite: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:114\n\tstorj.io/common/rpc.TCPConnector.DialContext:78\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:220\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:110\n\tstorj.io/common/rpc/rpcpool.(*Pool).get:105\n\tstorj.io/common/rpc/rpcpool.(*Pool).Get:128\n\tstorj.io/common/rpc.Dialer.dialPool:186\n\tstorj.io/common/rpc.Dialer.DialNodeURL:109\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:124\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-01-29T23:04:55.953Z INFO contact:service context cancelled {"Satellite ID": "12rfG3sh9NCWiX3ivPjq2HtdLmbqCrvHVEzJubnzFzosMuawymB"}
2022-01-29T23:04:55.953Z INFO contact:service context cancelled {"Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2022-01-29T23:04:55.953Z ERROR contact:service ping satellite failed {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "attempts": 1, "error": "ping satellite: rpc: dial tcp: operation was canceled", "errorVerbose": "ping satellite: rpc: dial tcp: operation was canceled\n\tstorj.io/common/rpc.TCPConnector.DialContextUnencrypted:114\n\tstorj.io/common/rpc.TCPConnector.DialContext:78\n\tstorj.io/common/rpc.Dialer.dialEncryptedConn:220\n\tstorj.io/common/rpc.Dialer.DialNodeURL.func1:110\n\tstorj.io/common/rpc/rpcpool.(*Pool).get:105\n\tstorj.io/common/rpc/rpcpool.(*Pool).Get:128\n\tstorj.io/common/rpc.Dialer.dialPool:186\n\tstorj.io/common/rpc.Dialer.DialNodeURL:109\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatelliteOnce:124\n\tstorj.io/storj/storagenode/contact.(*Service).pingSatellite:95\n\tstorj.io/storj/storagenode/contact.(*Chore).updateCycles.func1:87\n\tstorj.io/common/sync2.(*Cycle).Run:92\n\tstorj.io/common/sync2.(*Cycle).Start.func1:71\n\tgolang.org/x/sync/errgroup.(*Group).Go.func1:57"}
2022-01-29T23:04:55.953Z INFO contact:service context cancelled {"Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
Error: piecestore monitor: error verifying location and/or readability of storage directory: open config/storage/storage-dir-verification: no such file or directory

I have also taken a look at the config.yaml file: it seems no configuration was ever written to it. It is completely standard, without any of the values I entered in the command that is supposed to start the node.

Ports
I understand that I need to open and forward port 28968 for both protocols, TCP and UDP.

Thanks.

You missed the setup step: Storage Node - Node Operator
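
In case it helps others who hit the same “storage-dir-verification: no such file or directory” error: the one-time setup run from the documentation looks roughly like this (the identity path is taken from this thread; the data path is a placeholder):

docker run --rm -e SETUP="true" \
    --mount type=bind,source=/volume1/docker/storj/identity,destination=/app/identity \
    --mount type=bind,source=/volume1/docker/storj/data,destination=/app/config \
    --name storagenode storj/storagenode:latest

This run creates config.yaml and the storage-dir-verification file in the data directory, and it must be executed exactly once per node, before the first regular run command.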

I did run it, but most probably did something else wrong. I removed the container from Docker, deleted the config.yaml file, re-ran the setup and then the run command, and now it works. Thanks!
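
For anyone following along, that cleanup corresponds roughly to the following (assuming the default container name; the config path is a placeholder):

docker stop -t 300 storagenode
docker rm storagenode
rm /volume1/docker/storj/data/config.yaml    # placeholder path; use your actual config location

followed by re-running the setup command and then the regular run command.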


I have the same problem. I can’t use the editor in Synology, probably due to missing the correct directory for the identity files. But running the docker command in the terminal works.