QUIC Misconfigured

My nodes on Docker updated 3 hours ago. The normal node does not report the QUIC configuration.

The test node shows the working configuration

In my case, replacing “-p 28967:28967” with “-p 28967:28967/tcp -p 28967:28967/udp” in the startup command line fixed the issue.
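For anyone applying the same fix, a full startup command might look like the sketch below. The wallet address, hostname, paths, and storage size are placeholders, and your own flags may differ; the point is only the separate /tcp and /udp mappings on port 28967.

```shell
# Sketch of a docker run command with explicit TCP and UDP mappings on 28967.
# WALLET, ADDRESS, STORAGE, and the mount paths below are placeholders.
docker run -d --restart unless-stopped \
    -p 28967:28967/tcp \
    -p 28967:28967/udp \
    -p 127.0.0.1:14002:14002 \
    -e WALLET="0xYourWalletAddress" \
    -e ADDRESS="your.node.example.com:28967" \
    -e STORAGE="2TB" \
    --mount type=bind,source=/path/to/identity,destination=/app/identity \
    --mount type=bind,source=/path/to/storage,destination=/app/config \
    --name storagenode storjlabs/storagenode:latest
```

Without the protocol suffix, `-p 28967:28967` publishes only TCP, which is why the dashboard reports QUIC as misconfigured even though the node otherwise works.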

14 Likes

Can I just add my experience with this issue (if it is an issue)? I’ve had one node running since the beginning of v3 which shows v1.47.3; everything looks OK, and the firewall on both the router and Windows shows TCP and UDP open. A second node showed UDP closed in the Windows firewall and “QUIC Misconfigured” on the dashboard. Now that the port is open and the storage node has been restarted, the QUIC status shows OK. But on the original node there is no QUIC status shown at all, even though it’s on the same revision. Is there any reason for this? Surely, for consistency, the status should show on both nodes.

Methinks this is an error on Storj’s part.
Pingdom shows everything OK.

QUIC with fallback to TCP would work fine… That’s how IPFS functions at the moment. My TCP IPFS nodes typically have about 1200 clients each and support a large amount of network bandwidth.

However, with the TCP fallback, TLS is still required to be configured and maintained. So, the need to create certificates will be the same…

According to various technical papers and blogs… like this one on Medium… QUIC has a few benefits but isn’t really a “one-size-fits-all” solution.

I hope Storj doesn’t go full bore on QUIC… Most ISPs have some level of UDP throttling and setting up UDP tunnels through TCP kind of removes the supposed benefits of QUIC. It would be a big mistake to deploy QUIC as the default for new data.

It seems like switching to QUIC is a bit like fixing something that isn’t broken and breaking things in the process.

1 Like

I’m not entirely sure I understand what point you are making. The use of TLS is something both setups have in common. The difference is that QUIC integrates the negotiation of supported protocols and the key exchange into the initial handshake, saving more than half of the round trips a TCP+TLS handshake would cost. So the use of certificates was never a factor; saving round trips, and thus improving latency, is.

They’ve stated in the recent town hall that improving latency is one of the things they will be working on after having already improved throughput a lot. An average car may not be broken, but wouldn’t it be better to build a faster one?
This is exactly the kind of thing where QUIC can help a lot. Especially for Storj, where every upload requires the uplink to connect to 110 different nodes per segment. That’s a lot more handshakes that need to happen compared to most other QUIC use cases. QUIC was initially designed for use on the web, where you wouldn’t usually expect to make nearly as many connections, and yet it was still considered beneficial.
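As a rough back-of-envelope sketch of what that saving looks like: assume (these numbers are illustrative, not measured) a 30 ms round trip per node, 110 nodes per segment, two round trips to set up TCP plus TLS 1.3, and one combined round trip for QUIC.

```shell
# Back-of-envelope: aggregate handshake round-trip time for one segment upload.
# Assumed numbers: 30 ms RTT per node, 110 nodes per segment,
# 2 RTTs for TCP + TLS 1.3 vs 1 RTT for QUIC's combined handshake.
rtt_ms=30
nodes=110
tcp_tls=$(( nodes * 2 * rtt_ms ))
quic=$(( nodes * 1 * rtt_ms ))
echo "TCP+TLS 1.3: ${tcp_tls} ms of aggregate handshake time"
echo "QUIC:        ${quic} ms of aggregate handshake time"
```

Since the uplink dials nodes in parallel, the wall-clock saving per segment is closer to a single round trip than to the aggregate figure, but every saved round trip still matters in the long-tail race.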

As far as I’m aware testing of this has been going on for a while now and I doubt Storj would implement it if it doesn’t provide a significant benefit in testing. I think if the majority of nodes implement UDP forwarding, they won’t have to do anything else. Long tail cancellation will take care of it and simply lead to TCP only nodes losing the race more often. However, if not enough nodes ensure UDP forwarding, they may have to resort to allowing customers to choose to only upload to nodes with UDP support to ensure best performance. Especially for smaller files.

Either way, TCP only nodes will be at a disadvantage. Whether it’s just losing more races or actually being selected for upload less frequently.

2 Likes

Except this:

https://rule11.tech/is-quic-really-quicker/

the authors tried running QUIC and TCP over the same network in different configurations, including single QUIC and TCP sessions, a single QUIC session with multiple TCP sessions, etc. In each case, they discovered that QUIC consumed about 50% of the bandwidth; if there were multiple TCP sessions, they would be starved for bandwidth when running in parallel with the QUIC session. For network folk, this means an application implemented using QUIC could well cause performance issues for other applications on the network—something to be aware of. This might mean it is best, if possible, to push QUIC-based applications into a separate virtual or physical topology with strict bandwidth controls if it causes other applications to perform poorly.

This is from the client side.

There are trade-offs everywhere. It will do no good to have clients connecting to 110 nodes which then become flooded with QUIC traffic resulting in SNOs on RPis consuming 50% of their available bandwidth…

I understand it seems I’m a bit negative on the topic of QUIC…

But I have personal experience in running a p2p storage system using QUIC through my ISP’s connection… it wasn’t good.

There isn’t much detail in that article to go on. But it sure feels like they were looking at scenarios where the connection is saturated. Storj uses only a fraction of my (and probably most SNOs) bandwidth. So that’s not really something that would concern me at this point.

Either way, if this is going to be a problem, I’m sure it will pop up in testing. So far I’ve had UDP forwarded ever since it was first mentioned and never seen any problems.

Here’s the 14-page research paper…

https://cnitarot.github.io/papers/quic_imc2017.pdf

I hope the goal of Storj is to grow its customer base and therefore its traffic and therefore the number of connections per SNO node.

Right now, I have several thousand connections with IPFS nodes worldwide.

Here’s a screenshot of my bandwidth usage on one of my IPFS gateways…

[Screenshot: Screenshot_2022-01-30_16-04-51, bandwidth usage on an IPFS gateway]

If my storj nodes got that amount of sustained traffic, my crypto wallet would be quite happy every month. This is my hope for Storj…

1 Like

Try clearing the browser cache; the web dashboard site might be cached and thus not loading the current version with the QUIC status.

Sometimes, after payments were made, the data on the node would not update until the node was stopped and restarted. This might be similar.

1 Like

Is your IPFS profitable?

I’m wondering if they have pulled it already. I stopped my node, deleted the docker image, and re-ran the startup command to pull the latest one, and I am still on 1.46.3 even after a refresh of the browser.

It depends on what you call profitable.

I run most things for the fun of it and to learn how things function. Profitability for me is measured in knowledge growth. Eventually my wallet grows as my knowledge level increases. My waist line tends to increase a little when I’m concentrating on increasing my knowledge level… However, my kids have been helping to keep my waist line smaller and my knowledge growth slower.

But…

My experience with QUIC is that it mangled my LAN and my ISP throttled my traffic.

I understand that the current QUIC traffic is uptime checks only. However, a few packets every hour is not a good test of a full roll out of QUIC for customer traffic to and from SNOs.

Hi guys, in case this helps anyone: I’ve been running a node since the beginning of v3 (Linux on Hyper-V) and didn’t change anything. After updating to the new version, the node is showing “QUIC Misconfigured”. For comparison, a second node on a Raspberry Pi is not showing that error on the newest version.

Thank you lex-1. I just stopped the node and replaced “-p 28967:28967” with “-p 28967:28967/tcp -p 28967:28967/udp”; that fixed the issue.

1 Like
  • started
  • TCP: dialed node in 69ms
  • TCP: pinged node in 34ms
  • TCP: total: 103ms
  • QUIC: couldn’t connect to node: rpc: quic: context deadline exceeded
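If you see the same QUIC timeout, one way to check whether UDP actually gets through your router and firewall is a manual netcat test. This is a sketch: the hostname is a placeholder, it needs netcat on both ends, and flag syntax varies between netcat variants (the form below is the OpenBSD one common on Linux).

```shell
# Rough UDP reachability check with netcat. Stop the storagenode first so
# port 28967 is free. The hostname below is a placeholder.

# On the node host: listen for UDP datagrams on 28967.
nc -u -l 28967

# From a machine OUTSIDE your LAN: send a test datagram.
echo ping | nc -u your.node.example.com 28967
```

If “ping” shows up on the listener, UDP forwarding works and the QUIC failure lies elsewhere; if nothing arrives, the problem is the port mapping, router, or firewall rather than the node itself.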

beast

Great attitude, knowledge is priceless.

ISP policies vary wildly from country to country. Storj runs worldwide, so they may need to add a parameter like --NoQuic for users with limiting ISPs.

In your case I would suggest running your tests on a rented server in a data center.

My system just updated to 1.47.3

Got the QUIC Misconfigured error…

Not fixing this one.

Sorry Storj - if it’s mandatory, I’m out.

I have 3 nodes here across two WAN links. One node updated on its own, but the other two won’t, even when I stop and remove the existing docker container and pull the latest one.

Very weird.

On the automated updates and updates in general, I’ve never had a problem. Watchtower updates have never failed on me in over 2 years.

I really like Storj, and am sorry to seemingly be near the exit due to QUIC. If it’s not mandatory, and there’s a fallback to TCP or whatever, then I’ll continue limping along whatever that means…

I’ll keep running until I don’t … and will set up the Polygon exchange pool as promised for as long as I have the resources to do so - even if I’m no longer running on the storj network.

2 Likes