X509: certificate signed by unknown authority for https://tardigrade.io/trusted-satellites

Are you running your node on FreeNAS?
If so, the solution is

I run a Synology. The cert on the machine and from a lookup is OK. This message seems to indicate a cert problem outside my network, does it not?

@Alexey @stefanbenten it looks like the root cause is on the server side:

Certificate #1 is OK, but Certificate #2:

Please inform the responsible person about it, because it is definitely a big issue and a security breach.

The same issue on storj.io site:

Unless these were just fixed, I don't see the same errors.
Never mind, I didn't expand to see the second one. Now I see it.

No, it's not fixed; you can check it with SSL Labs and compare, for example, with github.com.

Certificate #2 is a big issue.

Yeah, you're right, but it doesn't even look related to Storj at all. Both sites are using the same one, though.


It's a wrong configuration on the server side, and Certificate #2 should NOT be there.

We're looking into this and appreciate the report. We have been discussing it with various folks in the company, but I just wanted to pop back here for a minute and give an update so you know we are investigating!


We’ve been investigating this issue and while we don’t completely understand all the details, we know enough that it shouldn’t be cause for alarm. Here is some useful insight from @Egon:

Based on my really-late-night look at it.
It is a netlify.com certificate. We use Netlify to manage our web pages. As an example https://www.ssllabs.com/ssltest/analyze.html?d=app.netlify.com&s= shows the exact same certificate.
When a server hosts multiple domains on the same IP, it can send all the certificates at once (rather than only the one for a particular domain). I'm not familiar with our Netlify setup, but I'm guessing our internal previews might be served from a domain that's under .netlify.com. However, the SNI extension can be used to distinguish which exact cert to use. In this case, the SNI for *.netlify.com is missing. I'm not sure whether this is intentional or not. Either way, HTTP clients should be able to pick the correct cert based on the Subject.

You can verify that the certificate served by app.netlify.com (here) and the second certificate from @Odmin's comments are the same by comparing their SHA-256 hashes.
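For example, one quick way to compare two PEM certificates is to hash their DER bytes, which is what SSL Labs shows and what `openssl x509 -noout -fingerprint -sha256` reports (up to formatting). A minimal stdlib sketch; the helper name is mine:

```python
import base64
import hashlib

def pem_fingerprint(pem: str) -> str:
    """Return the SHA-256 fingerprint (hex) of a PEM-encoded certificate.

    The hash is computed over the raw DER bytes, i.e. the base64 payload
    between the BEGIN/END CERTIFICATE markers.
    """
    lines = [line.strip() for line in pem.strip().splitlines()]
    body = "".join(line for line in lines if not line.startswith("-----"))
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()

# Two certificates are identical iff their fingerprints match:
# pem_fingerprint(cert_a) == pem_fingerprint(cert_b)
```

Feed it the PEM blocks printed by `openssl s_client -showcerts` and compare the two hex strings.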


And more great information, from the one and only @littleskunk:

There is an easy workaround
Affected storage nodes can add this to their config:
storage2.trust.sources: "/mnt/ssd/syncthing/Sync/Transfer/trustedsatellites.txt"
and save the content of the trusted satellite list in that file
That way you can keep your storage node running over the holidays and don’t need to worry about any errors.
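For reference, entries in such a trust list generally take the form `<satellite-node-id>@<host>:<port>`, one per line. The values below are placeholders only, not real satellite IDs; save the actual content from the trusted-satellites URL above instead:

```
# trustedsatellites.txt -- placeholder entries, NOT real satellite IDs
<node-id-1>@satellite1.example.com:7777
<node-id-2>@satellite2.example.com:7777
```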

File content:


I wouldn't go this far. Clients will simply not trust this certificate, as it doesn't match the domain. It's weird and might cause confusion for clients accessing this domain, but even if I somehow got my certificate for my domain to be issued by Storj's server (to be clear, nothing like this seems to be the case here), it wouldn't really allow me to do anything I'm not supposed to do.
It would be different if another certificate for the same domain was somehow there. As browsers and other clients would actually accept that one as valid.

It’s weird though and worth figuring out why this one isn’t filtered out by SNI.


Thanks a lot, @moby, for your quick reaction and quick resolution!

Sorry, it was my mistake; I'm not fully familiar with cert chains and how they work… let me describe the problem in a more “human-readable” manner (add -h :slightly_smiling_face:)

Let's look at the cert chain from the storage node side:
openssl s_client -host tardigrade.io -port 443 -prexit -showcerts > /tmp/certs.txt

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert SHA2 Secure Server CA
verify return:1
depth=0 C = US, ST = ca, L = San Francisco, O = "Netlify, Inc", CN = *.netlify.com
verify return:1

Let's compare it with a simple Let's Encrypt setup (my server):

depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = xxxxx.xxxxxxx.com
verify return:1

Here we can see that during our request to https://tardigrade.io we got 3 certs (a certificate chain: the main cert, an intermediate cert, and the root cert). All of these certs should be verified on the OS side; since we can see verify return:1, everything is OK on my side…
But why did I bring up this issue, you might ask :slightly_smiling_face: That is the right question, and it needs another explanation:
Root (and intermediate) certs can be verified on two levels: against the local root cert database of the OS (for Debian, that is the ca-certificates package), and against the root certs provided by the server side.
I did not have an issue with verification because I do regular OS updates (a patch-management process) on all my servers, so the ca-certificates package always has fresh root certs. But many people do not do this on a regular basis (on any OS), so verification against the local database will fail; in that scenario, verification falls back to the root certs provided by the server side. That last scenario is our current problem.
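To see what your local trust store looks like from OpenSSL's point of view, you can inspect the default SSL context with Python's stdlib (a small sketch; the counts and paths will differ per system, and a badly outdated system will show stale or missing CAs here):

```python
import ssl

# Build a client context that loads the OS default CA certificates --
# the same store used when verifying a server's chain.
ctx = ssl.create_default_context()

stats = ctx.cert_store_stats()          # counts of loaded certs and CRLs
paths = ssl.get_default_verify_paths()  # where OpenSSL looks for CA files

print("loaded CA certificates:", stats["x509_ca"])
print("default CA file:", paths.cafile)
print("default CA path:", paths.capath)
```

If `x509_ca` is zero or the CA file is missing, chain verification has nothing local to anchor to, which matches the "not updated OS" scenario described above.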

So, let's get back to the investigation and NOT look into /tmp/certs.txt (I promised a “human-readable” way :slightly_smiling_face:). Let's use a popular tool for checking certs and cert chains and look at the result:

So, the cert chain is broken, and people who do not do OS updates on a regular basis will have an issue.

How can we solve this issue? We have two options (I don't know your specific server configuration):

  1. Fix the broken certificate chain: add or replace the missing/wrong root/intermediate certificates (if you really need *.netlify.com in this chain).
  2. Ask your hosting provider why this cert (*.netlify.com) persists in your certificate chain (maybe it is a wrong configuration on the hosting panel side) and why SNI is not working.

Yes, you are right, but let's look at this situation from a CISO's point of view.
How was it possible to add this wrong cert to the tardigrade.io and storj.io domains?
Why is it not checked on a regular basis?
Who is responsible for regular external vulnerability scans?
All these questions are a small part of a big process, and if you try to answer them right now, the result will be a “security breach”. The rules are very strict in the security world.

I’m not sure what you mean here. If a root cert isn’t in your trusted cert list, there is nothing the server can do about it. That’s kind of the point of having a trusted CA list to begin with.

In the case of Let's Encrypt, they solved this by cross-signing their intermediate certificate with their own root as well as an external, already-trusted root. This way their certs would be trusted while giving OSes the chance to update their trusted CA lists. Though they may have stopped doing that by now.

All I would say here is that it’s not part of the correct certificate path. It’s just an additional certificate offered by the server, with its own path to a different root cert. SSL Labs doesn’t really even outline this as a problem. And I would say rightfully so. Though I am a little surprised that openssl only shows this wrong cert and path. But I haven’t used openssl much. It’s possible openssl skips SNI entirely and just gets the default server cert as a result of that.

For those unfamiliar, SNI is a way for a single server to host multiple sites and display the right website based on which domain was used to connect to the server. It should then present the correct cert for the domain used. But in this case it seems to also present the default cert for that server.

Since I use multiple domains on my server as well I decided to look into how it presents certs and I’m actually seeing the exact same thing happen. The server presents the cert of the correct domain, but also my own self signed cert that isn’t trusted by anyone. It works perfectly fine. I think this may actually be expected behavior with SNI.


While looking into this, I did notice this though…

I’m not entirely sure why my browser isn’t flipping out about a cert being no longer valid. But there may be something up with the automated let’s encrypt cert renewal for the forum. It should have renewed this one a long time ago.

Edit: Never mind, after CTRL+F5 it showed a new cert that is now valid.


I also agree with you here, but we have a very specific problem (some SNOs have an issue, but some are working fine). I use OpenSSL in my examples because it exists by default. I think we have the same issue on the storage node side, and some not-updated OSes have an issue with verification. Also, as we well know, modern browsers use SNI, so let's look at the chain from the browser:

The chain is OK, with no aliens like *.netlify.com; also note that the intermediate cert was changed on 07.10.2020.
So, from the storage node side, it is an open question what chain it gets and whether it contains a *.netlify.com cert or not.

It's possible, but if that's the case, I think the problem is that no server name is specified. With openssl, for example, you get the correct certificate chain if you use this:

openssl s_client -host tardigrade.io -servername tardigrade.io -port 443 -prexit -showcerts
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = tardigrade.io
verify return:1

I’m fairly certain that if the storagenode omits this server name, it would just fail for everyone. So pretty sure there is something else going on.

I went back to look at SSL Labs' test and just now noticed the mouseover.

So the server doesn’t provide this cert at all if you connect with SNI. SSL Labs just specifically goes out to retrieve the No SNI cert as well. So I think this really was a red herring. Everything is working as it should.


We are aware that our aarch64 image doesn't have an updated CA list, because the upstream image has not been updated for three years :confused:
The next version of that specific image for that specific platform has a fix (that platform isn't supported anymore, by the way): we now use the arm64/v8 upstream image, which is updated and current.
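For anyone building their own image in the meantime, refreshing the CA bundle at build time is straightforward. A hypothetical Dockerfile fragment, assuming a Debian-based base image (the base image tag is a placeholder, not the actual storagenode base):

```
FROM debian:stable-slim
# Refresh the root CA bundle so TLS chain verification uses current roots
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates && \
    update-ca-certificates && \
    rm -rf /var/lib/apt/lists/*
```

On Alpine-based images the equivalent is `apk add --no-cache ca-certificates && update-ca-certificates`.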