Node offline internal error

Hi all, OK, this has probably been answered before, but I can't find the solution for it.

I have set up my node and the firewall port is forwarding (I can use telnet to check that the port is open), however my logs are saying the below:

2019-10-07T14:12:48.414Z INFO Configuration loaded from: /app/config/config.yaml
2019-10-07T14:12:48.435Z INFO Operator email: simon.brownridge@gmail.com
2019-10-07T14:12:48.435Z INFO operator wallet: 0x138B29C26b84aCBf1529F0906E34757d8B1552c9
2019-10-07T14:12:53.978Z INFO version running on version v0.22.1
2019-10-07T14:12:53.990Z INFO db.migration Database Version {"version": 25}
2019-10-07T14:12:54.006Z INFO contact:chore Storagenode contact chore starting up
2019-10-07T14:12:54.006Z INFO Node 1YEm22ybp2ajD6jPF7kizbR8PdskSwYxDQ9s7262WpkV8sq9gp started
2019-10-07T14:12:54.006Z INFO Public server started on [::]:28967
2019-10-07T14:12:54.006Z INFO Private server started on 127.0.0.1:7778
2019-10-07T14:12:54.006Z INFO bandwidth Performing bandwidth usage rollups
2019-10-07T14:12:54.012Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 20000000000000}
2019-10-07T14:12:54.113Z INFO version running on version v0.22.1
2019-10-07T14:28:00.055Z INFO version running on version v0.22.1
2019-10-07T14:40:31.472Z ERROR contact:chore pingSatellites failed {"error": "rpc error: code = Internal desc = contact: couldn't connect to client at addr: nommiiss.duckdns.org:28967 due to internal error."}
2019-10-07T14:42:59.574Z INFO version running on version v0.22.1
2019-10-07T14:57:59.747Z INFO version running on version v0.22.1
2019-10-07T14:57:59.747Z INFO version running on version v0.22.1
2019-10-07T15:12:54.139Z INFO bandwidth Performing bandwidth usage rollups
2019-10-07T15:12:59.832Z INFO version running on version v0.22.1

Can anyone tell me how to fix the error so I can get my node online and working?

Thanks, kind regards,
Simon

If it's a new node the satellites don't know it yet. Let it run for a while and check.

It's been up for about 2 weeks.

I have just gotten around to looking into it due to work.

The vetting process takes ~1 month.


Have you double-checked the DNS? Check your run commands too.

John.A, can you be more descriptive please… My DDNS is working fine as I use it for other things, and my home DNS is working as it should, but I will double-check the name resolution.


Sorry, I really suck at this 🙂. Not a techy at all.
Do you set a port in DDNS, or is every port forwarded through it?
Just from what I read in your log, it feels like there's something wrong with the DDNS or with your run command. Are you sure it says -p 28967:28967 in your run script?

And as @nerdatwork said vetting can take a month or more depending on traffic.

Your node seems to have never been recorded as Online by storjnet.info…

Node ID 1YEm22ybp2ajD6jPF7kizbR8PdskSwYxDQ9s7262WpkV8sq9gp

So, it looks like there’s something wrong with your DDNS setup.

I’m not familiar with duckdns but I think this is their webpage

EDIT:

The posted “last error message” suggests you might have an identity issue.

  1. Did you sign your identity?
  2. Did you ensure that the CA key file is properly located?

Storj Identity Instructions
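If you have not signed it yet, the signing step uses the identity binary's authorize command (referenced again further down in this thread). Roughly, it looks like the line below, where the email:token string comes from your authorization email; check the linked instructions for the exact form:

identity authorize storagenode <email:token>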

Hello @nommiiss,
Welcome to the forum!

Please check your identity:

So this is where my Linux knowledge falls down.

I am using Unraid with a Storj V3 Docker container installed.

The startup indicates that /app is mounted, as the log says "Configuration loaded from: /app/config/config.yaml".

Is there another way of running that command?

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='StorjNode-V3' --net='host' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_28967'='28967' -e 'UDP_PORT_28967'='28967' -e 'WALLET'='0x138B29C26b84aCBf1529F0906E34757d8B1552c9' -e 'EMAIL'='simon.brownridge@gmail.com' -e 'ADDRESS'='nommiiss.duckdns.org:28967' -e 'BANDWIDTH'='20TB' -e 'STORAGE'='1TB' -e 'UDP_PORT_28967'='28967' -v '/mnt/user/storj/identity':'/app/identity':'rw' -v '/mnt/user/storj/':'/app/config':'rw' 'storjlabs/storagenode:alpha'

9f17c833b2a262464d496dde6ff5da9492366fa01196396386828fcd6afb23ed

The command finished successfully!

2019-10-08T10:49:42.071Z INFO Configuration loaded from: /app/config/config.yaml
2019-10-08T10:49:42.095Z INFO Operator email: simon.brownridge@gmail.com
2019-10-08T10:49:42.095Z INFO operator wallet: 0x138B29C26b84aCBf1529F0906E34757d8B1552c9
2019-10-08T10:49:47.673Z INFO version running on version v0.22.1
2019-10-08T10:49:47.686Z INFO db.migration Database Version {"version": 25}
2019-10-08T10:49:47.690Z INFO contact:chore Storagenode contact chore starting up
2019-10-08T10:49:47.690Z INFO Node 1YEm22ybp2ajD6jPF7kizbR8PdskSwYxDQ9s7262WpkV8sq9gp started
2019-10-08T10:49:47.690Z INFO Public server started on [::]:28967
2019-10-08T10:49:47.690Z INFO Private server started on 127.0.0.1:7778
2019-10-08T10:49:47.690Z INFO bandwidth Performing bandwidth usage rollups
2019-10-08T10:49:47.695Z INFO piecestore:monitor Remaining Bandwidth {"bytes": 20000000000000}
2019-10-08T10:49:47.804Z INFO version running on version v0.22.1

Look at the updated setup instructions again. You’re still using the old and unsafe -v option to mount the two directories.
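As a minimal sketch only (using the paths from your command above; the rest of your options stay as they are, and the official instructions show the full recommended run command), the two -v mounts would become something like:

--mount type=bind,source=/mnt/user/storj/identity,destination=/app/identity
--mount type=bind,source=/mnt/user/storj/,destination=/app/config

With --mount type=bind, Docker refuses to start the container if the source path does not exist, instead of silently creating an empty volume.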

From the context of your posts, it seems you are running on a GNU/Linux OS. If true, the default identity directory should be ~/.local/share/storj/identity/storagenode

The identity files are as follows:

  1. ca.cert : This is your node’s root CA cert. This is your official operating CA cert.
  2. ca.key : This is your node’s root CA private key.
  3. ca.<UNIX TIMESTAMP>.cert : This is a backup of the original ca.cert before signing the identity. The timestamp in the file’s name indicates when your identity was authorized via the identity authorize command.
  4. identity.cert : This is your node’s TLS cert. This is your official operating client TLS cert.
  5. identity.key : This is your node’s TLS client private key.
  6. identity.<UNIX TIMESTAMP>.cert : This is a backup of the original identity.cert before signing the identity. The timestamp in the file’s name indicates when your identity was authorized via the identity authorize command.

Since the identity file has been signed by your own CA, you can verify that the signatures are valid by issuing this command in the directory containing the above identity files:

openssl verify -CAfile ca.cert -no_check_time identity.cert

You can perform verification of the Timestamped certs in the same way:

openssl verify -CAfile ca.<UNIX TIMESTAMP>.cert -no_check_time identity.<UNIX TIMESTAMP>.cert

Strangely, the designers of the network security opted for default options which create certificates with no expiration date. Perhaps this was due to network clock sync issues… However, I see this as a potential problem.

If your identity directory does not contain the UNIX TIMESTAMP named files, the identity authorization process probably failed or did not occur.
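A quick way to check is to list the identity directory (this assumes the default path mentioned above; adjust it if your identity lives elsewhere):

ls -l ~/.local/share/storj/identity/storagenode/

A properly signed identity should show six files, including the two <UNIX TIMESTAMP> backups described above.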

See my next post for the expected certificate chain in the ca.cert and identity.cert files.

While you are mostly correct, I'd like to mention that the files with a timestamp in the name are simply the unsigned backups created while signing your identity with your token. They aren't actually the ones that your node uses.

@nommiiss Unraid has a habit of mounting the array late. This can result in the node starting before the array is available. Since you are also using the -v mounting options, your node will start, but it will instead use a Docker volume generated on the fly at the specified mount points. This will result in data loss and eventually disqualification. I'm not sure if that's what's causing your immediate issue, but it's a very risky setup.

I’ll need to look through the keys, certs, and process again to be sure…

But, there’s definitely a non-local public key in the cert store files…

So, perhaps the timestamp files are the backup files, and the ca.cert and identity.cert are replaced with officially signed certs in the authorization process…

openssl storeutl -noout -text identity.cert

Gives three public keys: two that correspond to locally generated private keys and one that does not match any local private key.

openssl pkey -in identity.key -text
openssl pkey -in ca.key -text
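To make the comparison easier, you can also print just the public halves of the local keys and match them against the public keys shown in the storeutl output (a sketch; -pubout writes only the public key corresponding to a private key):

openssl pkey -in identity.key -pubout
openssl pkey -in ca.key -pubout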

The certificate chain in the signed identity.cert is as follows:

  1. First cert signed by identity.key
  2. Second cert signed by ca.key
  3. Third cert signed by non-local key with a public key ending in: 8f:a2

The certificate chain in the signed ca.cert is as follows:

  1. First cert signed by ca.key
  2. Second cert signed by non-local key with a public key ending in 8f:a2

So…

Yes, it does indeed look like the Unix Timestamped files are the backup files. And the originally created files are replaced with cert stores which include the non-local signatures.

However, all this is very unclear from the documentation. And counting files in a directory is not that useful for actual troubleshooting of “Why is this not working?”

@BrightSilence I don't think it is causing the issue at the moment. The reason I think this is that the mounting has already happened, and to be honest the setup was done from a Docker app downloaded from the Unraid Community Apps.

@anon27637763 I ran the command you suggested:

root@Cassini:/mnt/user/storj/identity# openssl verify -CAfile ca.cert -no_check_time identity.cert
identity.cert: OK

That's what I got back. I don't want to spend too much money on setting up a Pi and buying an HDD for Storj, as this is not the idea… Maybe if there is a way of mounting a shared disk… I will look into this method as I do have a Pi at home.

I'm not counting files; take a close look at the commands:

docker exec -it storagenode grep -c BEGIN /app/identity/ca.cert
docker exec -it storagenode grep -c BEGIN /app/identity/identity.cert
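For reference (these are the counts I would expect based on the certificate chains described earlier in this thread, not something verified against your node), a correctly signed identity should report:

ca.cert: 2
identity.cert: 3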

Unfortunately the community-made app has a bug. It must use the --mount type=bind option instead of -v.
The second problem: Unraid mounts the disk array after Docker starts.
The third problem: after an update, Unraid has a bug which leads to corruption of the SQLite databases of most applications which use SQLite, including the storagenode:

Please consider moving your node off this platform if possible.


I noticed before that you were counting certificates. But others… including the documentation page, ask the user to count files.

documentation.storj.io says:

If your identity folder only contains 4 files after this step, you did not successfully complete the identity authorization. Sign the identity again until you see 6 files in the identity folder.

However, neither method answers the question:

Is my identity signed correctly?

The only method to check whether an x509 cert store is correct is to verify all the certificates in the cert store. Since every cert has the same issuer… even though the local node CA is not Storj, but the local operator CA… the only method I know of for figuring out what is what is to verify the public keys listed on the certs.

Since the identity program has many options, it may be possible for someone to accidentally create two local CA private keys and somehow create a cert chain that includes all locally generated keys. I don’t know, I haven’t tried… but there is an option for “managing certificate authority”…

It seems that this particular issue is not the OP's problem. However, it would be nice if the “authorization” email included a public key to check the cert chain against… or if some other, simpler mechanism checked the certs locally and produced an error with a meaningful explanation if there's a problem with the certs.