X509: certificate signed by unknown authority for https://tardigrade.io/trusted-satellites

We’re looking into this and appreciate the report. We have been discussing it rapidly with various folks in the company, but I just wanted to pop back here for a minute and give an update so you know we are investigating!


We’ve been investigating this issue and while we don’t completely understand all the details, we know enough that it shouldn’t be cause for alarm. Here is some useful insight from @Egon:

Based on my really-late-night look at it.
It is a netlify.com certificate. We use Netlify to manage our web pages. As an example https://www.ssllabs.com/ssltest/analyze.html?d=app.netlify.com&s= shows the exact same certificate.
When a server hosts multiple domains on the same IP, it can send all the certificates at once (rather than only the one for a particular domain). I’m not familiar with our Netlify setup, but I’m guessing our internal previews might be served from a domain under .netlify.com. However, the SNI extension can be used to distinguish which exact cert to use; in this case the SNI entry for *.netlify.com is missing. I’m not sure whether this is intentional or not. Either way, HTTP clients should be able to pick the correct cert based on the Subject.

You can verify that the certificate served by app.netlify.com (here) and the second certificate from these comments by @Odmin are the same by checking their SHA-256 hashes.
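One way to do that comparison (a minimal sketch; `cert_sha256_fingerprint` is a hypothetical helper, with the PEM text taken from `-showcerts` output like the examples further down) is to hash each certificate’s DER encoding:

```python
import hashlib
import ssl

def cert_sha256_fingerprint(pem: str) -> str:
    """Return the SHA-256 fingerprint of a single PEM-encoded certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem)  # strip the PEM armor and base64-decode
    return hashlib.sha256(der).hexdigest()

# Two certs are identical iff their fingerprints match:
# cert_sha256_fingerprint(pem_a) == cert_sha256_fingerprint(pem_b)
```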


And more great information, from the one and only @littleskunk:

There is an easy workaround
Affected storage nodes can add this to their config:
storage2.trust.sources: "/mnt/ssd/syncthing/Sync/Transfer/trustedsatellites.txt"
and save the content of the trusted satellite list in that file
That way you can keep your storage node running over the holidays and don’t need to worry about any errors.
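As a sketch of that workaround (`save_trust_list` is a hypothetical helper; the URL comes from the error above and the path from the config example, and the download should be done from a machine whose CA store is up to date):

```python
import urllib.request

def save_trust_list(url: str, path: str) -> None:
    """Download the trusted-satellites list and save it where
    storage2.trust.sources points."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(path, "wb") as f:
        f.write(data)

# e.g. save_trust_list("https://tardigrade.io/trusted-satellites",
#                      "/mnt/ssd/syncthing/Sync/Transfer/trustedsatellites.txt")
```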

File content:


Wouldn’t go this far. Clients will simply not trust this certificate as it doesn’t match the domain. It’s weird and might cause confusion for clients accessing this domain, but even if I somehow got my certificate for my domain to be issued by Storj’s server (to be clear, nothing like this seems to be the case here), it wouldn’t really allow me to do anything I’m not supposed to do.
It would be different if another certificate for the same domain were somehow there, as browsers and other clients would actually accept that one as valid.

It’s weird though and worth figuring out why this one isn’t filtered out by SNI.


Thanks a lot, @moby, for your quick reaction and quick resolution!

Sorry, that was my failure; I’m not fully familiar with cert chains and how they work… let me describe the problem in a more “human-readable” manner (add -h :slightly_smiling_face: )

Let’s look at the certificate chain from the storagenode side:
openssl s_client -host tardigrade.io -port 443 -prexit -showcerts > /tmp/certs.txt

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA
verify return:1
depth=0 C = US, ST = ca, L = San Francisco, O = "Netlify, Inc", CN = *.netlify.com
verify return:1

Let’s compare it with a simple Let’s Encrypt setup (my server):

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = xxxxx.xxxxxxx.com
verify return:1

Here we can see that during our request to https://tardigrade.io we got 3 certs (a certificate chain: the leaf cert, an intermediate cert, and the root cert). All of these certs should be verified on the OS side, and since we see verify return:1 everywhere, everything is OK on my side…
But why did I bring up this issue, you may ask :slightly_smiling_face: That’s the right question, and it needs another explanation:
Root (and intermediate) certs can be verified at two levels: against the local root cert database of the OS (on Debian, the ca-certificates package) and against the root certs provided by the server side.
I did not have a verification issue because I do regular OS updates (a patch management process) on all my servers, so the ca-certificates package always has fresh root certs. But many people don’t do this on a regular basis (on any OS), so verification against the local database will fail; in that scenario verification falls back to the root certs provided by the server side. That last scenario is our current problem.
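As a quick way to check what the local root database actually contains (an illustrative sketch, not from the original post), Python’s ssl module can report how many CA certs were loaded from the OS defaults:

```python
import ssl

# Load the OS default trust store (e.g. the ca-certificates bundle on Debian)
ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()
print(stats)  # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140} on a typical host
```

If `x509_ca` is suspiciously low, the local CA bundle is likely stale or missing.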

So, let’s get back to the investigation and NOT look into /tmp/certs.txt (I promised a “human-readable” way :slightly_smiling_face:). Let’s use a popular tool for checking certs and cert chains and look at the result:

So, the cert chain is broken, and people who don’t do OS updates on a regular basis will have an issue.

How can we solve this issue? We have two options (I don’t know your specific server configuration):

  1. Fix the broken certificate chain: add/replace the missing/wrong root/intermediate certificates (if you really need *.netlify.com in this chain).
  2. Ask your hosting provider why this cert (*.netlify.com) appears in your certificate chain (it may be a misconfiguration on the hosting panel side) and why SNI is not working.

Yes, you are right, but let’s look at this situation from a CISO’s point of view.
How was it possible for this wrong cert to end up on the tardigrade.io and storj.io domains?
Why is it not checked on a regular basis?
Who is responsible for regular external vulnerability scans?
All these questions are a small part of a big process, and if you tried to answer them right now, the result would be a “security breach”. The rules are very strict in the security world.

I’m not sure what you mean here. If a root cert isn’t in your trusted cert list, there is nothing the server can do about it. That’s kind of the point of having a trusted CA list to begin with.

In the case of Let’s Encrypt, they solved this by cross-signing their intermediate certificate with their own root as well as with an external, already-trusted root. This way their certs would be trusted while giving OSes the chance to update their trusted CA lists. Though they may have stopped doing that by now.

All I would say here is that it’s not part of the correct certificate path. It’s just an additional certificate offered by the server, with its own path to a different root cert. SSL Labs doesn’t really even outline this as a problem. And I would say rightfully so. Though I am a little surprised that openssl only shows this wrong cert and path. But I haven’t used openssl much. It’s possible openssl skips SNI entirely and just gets the default server cert as a result of that.

For those unfamiliar, SNI is a way for a single server to host multiple sites and display the right website based on which domain was used to connect to the server. It should then present the correct cert for the domain used. But in this case it seems to also present the default cert for that server.

Since I use multiple domains on my server as well I decided to look into how it presents certs and I’m actually seeing the exact same thing happen. The server presents the cert of the correct domain, but also my own self signed cert that isn’t trusted by anyone. It works perfectly fine. I think this may actually be expected behavior with SNI.


While looking into this, I did notice this though…

I’m not entirely sure why my browser isn’t flipping out about a cert being no longer valid. But there may be something up with the automated let’s encrypt cert renewal for the forum. It should have renewed this one a long time ago.

Edit: Never mind, after CTRL+F5 it showed a new cert that is now valid.


I also agree with you here, but we have a very specific problem (some SNOs have an issue, while some are working fine). I use OpenSSL in my examples because it exists by default. I think we have the same issue on the storagenode side, and some not-updated OSes have an issue with verification. Also, as we well know, modern browsers use SNI, so let’s look at the chain from the browser:

The chain is OK, with no aliens like *.netlify.com, and also note that the intermediate cert was changed on 07.10.2020.
So, from the storage node side, it’s an open question which chain it gets and whether it contains the *.netlify.com cert or not.

It’s possible, but if that’s the case, I think the problem is that no server name is specified. For openssl, for example, you get the correct certificate chain if you use this:

openssl s_client -host tardigrade.io -servername tardigrade.io -port 443 -prexit -showcerts
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = tardigrade.io
verify return:1

I’m fairly certain that if the storagenode omits this server name, it would just fail for everyone. So pretty sure there is something else going on.

I went back to look at SSL Labs’ test and just now noticed the mouse over.

So the server doesn’t provide this cert at all if you connect with SNI. SSL Labs just specifically goes out to retrieve the No SNI cert as well. So I think this really was a red herring. Everything is working as it should.


We are aware that our aarch64 image doesn’t have an updated CA list, because the upstream has not been updated for three years :confused:
The next version of that specific image for that specific platform has a fix (that platform isn’t supported anymore, by the way); we now use the arm64/v8 upstream image, which is updated and current.


I propose another check:

openssl s_client -showcerts -verify 5 -connect tardigrade.io:443 < /dev/null

and with “-servername tardigrade.io”:
openssl s_client -showcerts -verify 5 -servername tardigrade.io -connect tardigrade.io:443 < /dev/null

So, as you can see, using SNI is very important. Also, the last (correct) request returns two certs (look at the -----BEGIN CERTIFICATE----- sections), and one more sits on the OS side, which should be up to date for correct verification.

ADD: Also, as you can see, the full chain contains 3 certs:

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = tardigrade.io
verify return:1

but we have only two cert bodies from the server side.
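Counting the certificate bodies can be scripted; a minimal sketch (`count_cert_bodies` is a hypothetical helper) over output saved from `openssl s_client -showcerts`:

```python
import re

def count_cert_bodies(s_client_output: str) -> int:
    """Count how many certificate bodies `openssl s_client -showcerts` returned."""
    return len(re.findall(r"-----BEGIN CERTIFICATE-----", s_client_output))

# e.g. count_cert_bodies(open("/tmp/certs.txt").read())
```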

The server only delivers the cert for the domain and the intermediate cert. The root cert is supposed to be in the local root store and is never delivered by the server. So yes, this needs to be up to date.

You’re pointing to the i: line. This line shows info about the issuer of the certificate, not about the cert itself. So the certs you see are, as expected, the domain cert and the intermediate cert.
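To make the s:/i: distinction explicit, a small sketch (`chain_summary` is a hypothetical helper, assuming the `s:`/`i:` layout shown in the outputs above) that pairs each cert’s subject with its issuer:

```python
def chain_summary(s_client_output: str) -> list:
    """Pair up the s: (subject) and i: (issuer) lines that
    `openssl s_client` prints for each cert in the chain."""
    subjects, issuers = [], []
    for line in s_client_output.splitlines():
        line = line.strip()
        # chain entries look like: " 0 s:CN = example.com" / "   i:C = US, ..."
        if line[:1].isdigit() and " s:" in line:
            subjects.append(line.split("s:", 1)[1].strip())
        elif line.startswith("i:"):
            issuers.append(line.split("i:", 1)[1].strip())
    return list(zip(subjects, issuers))
```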

So, @Alexey has already provided information about the root cause during our dialog.
For the Docker environment, I have a suggestion: use OS-level root certs instead of embedding them in the container; just add: -v /etc/ssl/certs:/etc/ssl/certs
But maintaining the host OS would then be the responsibility of the storage node operator.

The idea of containerizing applications is to ensure all dependencies are taken care of. It seems like Alexey already has a fix, which is better than just shifting the issue to the host OS. Besides, your suggestion won’t be as simple on other OSes.

Anyway, I think there are no open issues then, since the CA list will be fixed with the next release.

Today I had the same problem with my Raspberry Pi 4 and 64-bit Raspbian. The old node on the same Pi runs with the :latest image, but has also been running for a long time. Today I set up another node, and it does not run with the :latest image.

docker pull storjlabs/storagenode:2413a9a0d-v1.18.1-go1.15.5-arm32v6

works fine.
I hope this will be fixed with the next version so that I can set up Watchtower for the node.

You’ve got it! There is no real problem here- just that the web pages for storj.io and tardigrade.io are hosted by a service that hosts multiple other websites. These sites are only expected to be accessed via modern web browsers (less than 10 years old), so use of SNI is expected.

The storagenodes never contact tardigrade.io or storj.io directly; they look up satellites with names like us-central-1.tardigrade.io. Satellites are entirely separate from the Storj and Tardigrade websites.


Please look into config.yaml


Oh right, that endpoint. I stand corrected!

In that particular case, the connection is made using a standard HTTP library, so it will use SNI in the request.