QUIC Misconfigured

Hi, I configured my node today. I am online, but QUIC stays 'misconfigured'. The port is open, I added rules to my firewall, etc. How long does it take for a brand new node to lose this status if everything is configured correctly? Also, how do I restart the node from PowerShell? What's the command?

It takes 1 node restart and 1 browser refresh. Your node will still work with misconfigured QUIC, so don't worry too much and take your time to fix it.

In an elevated PowerShell:

stop-service storagenode
start-service storagenode
2 Likes

AFAIK, Storj's QUIC is currently only implemented for satellite check-ins with the storage node, so the traffic is nearly zero.

First:

Any discussion of QUIC implemented for the p2p data traffic needs to include a discussion of consumer-level ISPs and how many of them throttle UDP traffic to prevent a UDP flood attack. Since QUIC encrypts everything, including the headers, there's no ability for third parties to do QoS. This means that any high-traffic WAN data network is going to trigger consumer-level ISPs' DDoS sensors… and throttle the UDP packets coming through to an SNO.

To implement QUIC without taking into account the very real threat of UDP throttling by ISPs is simply irresponsible as a project.

Second:

IPFS has a list of providers of content, called a DHT. Earlier versions of Storj had something similar, but v3 went with the satellite model instead. The files in IPFS may be stored anywhere on the network and may be of any size. I'm no longer sure of the file sizes in Storj, but I'm quite sure retrieving files from IPFS will include a sizable sampling of Storj-sized files… Whew! That's more than a byte :slight_smile:

Third:

I personally, quite literally, experienced my ISP throttling UDP traffic to my WAN-accessible IPFS nodes. Disabling QUIC on WAN-accessible nodes did not reduce my traffic to those nodes; it simply stopped triggering the ISP sensors yelling at my IP address due to UDP flooding.

Furthermore, my standard Internet traffic slowed considerably when running WAN accessible IPFS nodes with QUIC enabled.

So…

One could make all sorts of claims about the advantages of QUIC from one “side” or the “other”… but ISPs aren’t going to know that the incoming UDP packet Flood contains QUIC traffic… they are simply going to pull the plug on your connection.

1 Like

If you're running Storj in Docker, it seems that the configure command has changed.

docker run -d --restart unless-stopped --stop-timeout 300 -p 28967:28967/tcp -p 28967:28967/udp -p 127.0.0.1:14002:14002 -e WALLET="0xXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" -e EMAIL="user@example.com" -e ADDRESS="domain.ddns.net:28967" -e STORAGE="2TB" --mount type=bind,source="",destination=/app/identity --mount type=bind,source="",destination=/app/config --name storagenode storjlabs/storagenode:latest

So to fix it I would imagine you have to rebuild the container. Which is what I’m currently trying to remember how to do.

EDIT: Well, I did it, and I updated the ports on the router directly and in the firewall options on my computer, and it's still borked, so I dunno.

EDIT2: Nailed it. Okay, so if you are using a Synology NAS, you need to go into the port settings there and set up a UDP condition for it to work, along with all the other conditions.

1 Like

With regard to other possible problems with the new QUIC status in the dashboard: I have noticed that both my nodes report the same UDP port as open, though in the configuration I entered 28967:28967/udp and 28968:28967/udp respectively. They both report 28967 as the open port…

I’m not sure why you think they would just implement it for the hell of it. Obviously if this is going to cause issues they would notice in testing and would not implement this for customers.

I've been looking for information on UDP throttling by ISPs, but I'm finding nothing but forum posts of people being suspicious about it. No really in-depth info on the common practices around this. Now of course ISPs are probably not that open about it, but the number of results I'm finding doesn't really suggest this is a very widespread problem at all. In fact, my search has many more results about throttling specific types of traffic, like video streaming. The only more specific UDP throttling results I have found are about Comcast (are you by chance a Comcast customer?).

Either way “I quit because you use UDP” isn’t going to make Storj change their mind. Showing what UDP did to your node almost certainly will though.

The big difference is that every file uploaded to Storj is split into segments of at most 64MB; each segment is then split into 110 pieces and uploaded to individual nodes (only 80 have to finish). So any 640MB file on IPFS would use a single connection, but on Storj it would use 1100. Even for single-segment files you have a minimum of 110 connection attempts for uploads and 39 for downloads. This is clearly a massive difference between Storj and IPFS. As a result, the largest pieces you will ever receive on your node are around 2.3MB.
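As a rough back-of-the-envelope check, the numbers above line up. This sketch assumes the commonly cited erasure-coding parameter of 29 pieces being enough to reconstruct a segment, which is not stated in this thread:

```python
# Back-of-the-envelope math for Storj piece sizes and connection counts.
# Assumption (not from this thread): k=29 pieces reconstruct a segment.
SEGMENT_MAX = 64 * 1024 * 1024  # 64 MiB max segment size, in bytes
K = 29                          # pieces needed to reconstruct a segment
PIECES_PER_SEGMENT = 110        # pieces uploaded per segment

piece_size = SEGMENT_MAX // K   # largest piece a single node receives
segments = 640 // 64            # a 640 MB file splits into 10 segments
upload_connections = segments * PIECES_PER_SEGMENT

print(piece_size)               # ~2.3 MiB, matching the figure above
print(upload_connections)       # 1100
```

So the ~2.3MB maximum piece size and the 1100 connections for a 640MB file both fall out of the same two parameters.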

So yeah, I don’t discount your experience at all. But as I previously said, not all QUIC or UDP implementations are made equal. And what if Storj’s traffic doesn’t rise to the level that triggers your ISPs flood protection? I would want to know before jumping to conclusions.

It reports the port the node software is listening on inside the container, which for Docker setups is always 28967 unless you've deviated from the setup instructions. That is quite confusing though… perhaps they could use the port from the external address instead.

1 Like

I noticed this as well, so I opened an issue on Github.

If the tooltip is going to report anything, it should report the external UDP port since the internal container port is irrelevant in this context.

4 Likes

Just checked. You are right. For the Docker setup it shows only the available port (from the config.yaml), so it will always be 28967.

On my Windows node, where I have changed the listening port, it's shown correctly.

1 Like

I didn’t write that…

I’ve posted several links in various threads to scientific papers showing:

  1. QUIC might not be quick.
  2. QUIC does not play nice on the same network as TCP.
  3. QUIC specifications change rapidly and are poorly documented.

I’ve also posted a link to Cloudflare’s policy on UDP traffic in their network. And what a UDP Flood Attack does…

I’ve posted my own experiences running a p2p server application that has numerous QUIC connections and what that did to my ISP and LAN.

I would like a reasonable and detailed explanation in written form which is easily consumed (preferably in PDF that I can print out and read while not connected to any device) that shows:

  1. Why QUIC might be more suitable for Storj.
  2. The test scenarios used to determine that a given SNO will not be negatively affected vs. TCP.
  3. How Storj plans on addressing UDP throttling such as described by Cloudflare in their UDP policy.
  4. How Storj plans on avoiding any sudden code changes in QUIC that may cause unknown problems in the future.

TCP is an industry standard. The days of proprietary Microsoft implementations are long gone… However, QUIC is purely a Google product. There is no guarantee whatsoever of a stable code base, or even a suitable code base, into even the immediate future.

I haven’t watched the townhall… and really don’t have time to sit through a video presentation.

I do have plenty of time to read documentation. So, I would greatly appreciate if someone can point me to the documentation that addresses my concerns.

1 Like

I'm not going to quote the many times you said you would quit if they made it mandatory.
You never said you would quit if Storj's implementation caused problems. You said mandatory UDP would be your exit.

I’ve seen all the things you’ve posted. I also know QUIC is being quickly adopted all over the place. That can’t be happening if there is no benefit and lots of problems. So apparently it’s working for lots of implementations. All I’m saying is I don’t understand the blanket assumption that the implementation will be problematic.

You're showing your cards a bit here… I'm guessing you're not a fan of Google. But it really doesn't matter. This isn't Internet Explorer, and QUIC hasn't been just a Google thing for a long time. It was adopted as a standard by the IETF, first in RFC 8999 and later RFC 9000. Wide adoption is happening far outside Google's scope. 7.5% of websites use it at this point, and adoption is speeding up, not slowing down. Usage Statistics of QUIC for Websites, January 2024

Edit: Whoops, should have included HTTP/3 there, as that runs over QUIC as well and is used by 24.7% of websites. Usage Statistics of HTTP/3 for Websites, January 2024

If you hate QUIC this much, you’re going to have a bad time…

I wrote…

  1. QUIC caused problems with my ISP and my LAN.
  2. I will not run QUIC on my network.

Then I detailed why these two are the case with me.

If someone reads through the technical papers and can address the concerns both I and those papers raise, then I’d be happy to stop asking for more information.

Vendor lock-in is a real thing, whether it's Microsoft, Google, or some random vendor for a component in a physical system. Once upon a time, one of my jobs was to ensure that the products deployed in my line of work were not susceptible to vendor lock-in. And if a product was only produced by a single vendor, I needed to ensure that all stakeholders understood the situation and agreed with the decision… before deployment of the solution.

My feelings about Google as a company have nothing whatsoever to do with my statements on QUIC.

If QUIC had a non-Google implementation which was standardized and supported by some non-profit or something like that… that would address vendor lock-in. However, as it is, Google could very easily decide to drop QUIC as a product. It’s happened with plenty of Google products over the years.

With ANY project, it's a terrible idea to replace a standardized, widely implemented, multi-vendor, mission-critical component with a non-standardized single-vendor product.

If you read the paper I linked to several posts above, you’ll see that number in there as well… It’s almost entirely Google traffic.

There are many such standards that either have never been implemented or were implemented for a short while by a single vendor. The important point is avoiding vendor lock-in…

So, if Google drops QUIC as a product, what happens to the Storj network when a 0-day is found in that last implementation that Storj uses in 2025…

1 Like

Yeah, it is. However, it doesn't remotely apply here. Just because it was initially designed by Google doesn't mean there is vendor lock-in. This is now an IETF standard and is being developed by them.

Google QUIC (gQUIC)
The protocol that was created by Google and taken to the IETF under the name QUIC (already in 2012 around QUIC version 20) is quite different from the QUIC that has continued to evolve and be refined within the IETF. The original Google QUIC was designed to be a general purpose protocol, though it was initially deployed as a protocol to support HTTP(S) in Chromium. The current evolution of the IETF QUIC protocol is a general purpose transport protocol. Chromium developers continued to track the evolution of IETF QUIC’s standardization efforts to adopt and fully comply with the most recent internet standards for QUIC in Chromium.

Just have a glance at all the different implementations listed on Wikipedia which are all based on the IETF QUIC protocol.

How is there vendor lock-in when it has been defined as a standard, has many different open-source implementations, and is already adopted by pretty much all major browsers and many servers?

Like I said, this isn’t Internet Explorer. Google handed it over to a standards body to implement it. And they decided to make it the new standard for the web with HTTP/3. It’s pretty much the furthest from vendor lock in you could get.

It’s not a Google product, if they drop it, nothing changes for anyone else using it.

Well, one of the lovely maintainers of the Go implementation that Storj uses would probably take something like that quite seriously. Pretty sure they would be on top of fixing that. They're also not Google employees, so why would they care what Google does? GitHub - quic-go/quic-go: A QUIC implementation in pure go

And otherwise, there's always one of the more than 800 forks that can take over. That's kind of the point of open source. And if not that, Storj could switch to another implementation altogether. That's kind of the point of open standards… So… Yeah… There's all that.

As I’ve written in posts above, I’ve been looking at QUIC starting a couple of years ago…

But now, I see Microsoft has MSQUIC…

https://techcommunity.microsoft.com/t5/networking-blog/what-s-quic/ba-p/2683367

So, vendor lock-in satisfied.

It’d be nice to have someone point to MSQUIC and say, “Here’s MSQUIC” rather than attempt to side-step the concern raised.

However, if you read the new Microsoft MSQUIC info, you will indeed find EXACTLY what I’m concerned about and what I found myself… and reported on in this forum.

It’s also very useful to remember that QUIC is typically considered in a very large server to many clients model. That’s not the Storj Network model.

As I’ve written much earlier…

QUIC with a fallback to TCP would be fine. I even made a voting topic on that very premise.

I literally pointed you to the list of implementations on Wikipedia, which includes msquic among many others. It didn't really make sense to list them all here. But if you want me to point to another, LiteSpeed QUIC has also seen wide adoption and is among the most popular implementations.

I do understand the concerns for traffic shaping by ISPs. But these things don’t happen in a vacuum and the excerpt you quoted already mentions this is likely to change in the future.

I agree, especially with nodes running on Raspberry Pi hardware. But with a bit of luck we'll see a return of the massive amounts of test traffic Storj pushed a while ago to really test the impact on nodes. (I believe I made over $400 on a node that stored less than 4TB of data in one month at the time, so yes please :slight_smile: )

At the time we found out that SMR nodes weren't up to the task. Maybe with QUIC load testing we'll find that some nodes or networks won't be able to keep up with the load. But we all learn; Storj can implement mitigations, and the node community can find recommendations for nodes or networks that can't keep up. And if it really does overload a significant number of nodes, there is no way Storj can even make it mandatory. So if they ever do, I'm pretty sure it won't be a problem for the vast majority of node operators.

1 Like

Actually… on second thought…

Many individual implementations may not be desirable either unless there is some guaranteed common feature set.

I’ll have to read through the specifications.

However, none of the implementations meet the requirements of my ISP, which has already rate-limited my connection when I attempted to field heavy UDP traffic through QUIC-enabled IPFS servers.

Why not simply have a fallback to TCP? If QUIC is really quicker, then the QUIC-enabled nodes will gain market share over the data pieces anyway. If QUIC isn't really quicker, the TCP nodes will still function fine.

IPFS comes with QUIC enabled out of the box. Disabling QUIC is something one has to figure out… That’s a good model in my opinion.

That would be the IETF QUIC standard as described in RFC 9000. As long as the standard is followed they should all interoperate.

I think that is by far the most likely scenario to play out. But I can imagine a scenario where QUIC does help, but let's say only 30% of node operators enable it (keep in mind existing node operators need to take action to make that happen). In that case, customers would attempt 110 QUIC connections, find out that 77 don't support it, and then have to fall back to TCP. The 33 QUIC connections won't be enough to upload the 80 required pieces, and you end up waiting on the TCP connections anyway. Except now those take even longer, because a QUIC connection was attempted first as well.
Perhaps you could work around that last part by having the satellite track whether a node supports QUIC and pass this info to the uplink. But either way, you have to wait for TCP.
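The shortfall in that hypothetical can be sketched in a few lines (the 30% adoption figure is just the made-up number from the scenario, not a measurement):

```python
# Hypothetical scenario: only 30% of nodes have QUIC enabled.
ATTEMPTED = 110        # connections attempted per segment upload
REQUIRED = 80          # pieces that must finish for the upload to succeed
ADOPTION_PCT = 30      # hypothetical share of QUIC-enabled nodes

quic_ok = ATTEMPTED * ADOPTION_PCT // 100  # nodes reachable over QUIC
tcp_fallback = ATTEMPTED - quic_ok         # nodes needing TCP fallback
needs_tcp = quic_ok < REQUIRED             # QUIC alone can't finish

print(quic_ok, tcp_fallback, needs_tcp)    # 33 77 True
```

Since 33 < 80, the upload is gated on the slower TCP path regardless of how fast the QUIC connections are.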

Now if you allowed customers to select QUIC-only nodes, they could have the full speed advantage without having to deal with TCP. That would most likely be plenty to convince most node operators to adopt it.

But simply not selecting nodes that don’t support QUIC anymore for uploads is a much simpler implementation and a much stronger incentive. Kind of like how they deal with outdated versions.

Who knows what they will end up doing; I doubt that last scenario is the most preferred on anyone's list, but I couldn't rule it out.

1 Like

Yes, I tried to use 2 rules, but it forced me onto another port with the warning "For this port, a different external port is assigned than the one you wanted…". IPv6 is no problem. And I also don't have an option to select TCP & UDP at the same time.

It seems that we agree then…

Odd. It didn’t seem that way before.

:beer:

3 Likes

I think we latched on to the small parts we disagreed on. :laughing:
Anyway, it was an interesting conversation.
:beers:

4 Likes

This was a long conversation! Seems like it wrapped but I just wanted to offer some additional assurance that I think the QUIC thing is a good plan. I’ll start by reiterating some things @BrightSilence said:

  • As we roll out QUIC, we intend to leave support for TCP. To add on to that, we have no current plans to remove support for TCP. That of course may change based on how the QUIC rollout goes, but before we can even investigate if QUIC seems reasonable, we need nodes to enable it, hence the new and improved indicator.
  • We want to do these tests to understand exactly how widespread ISP throttling of UDP, etc. is. If ISPs are horrible and by and large break QUIC, obviously we won’t continue! That said, the existing adoption of QUIC in web browsers gives me some comfort that the situation might not be so bad these days in practice.
  • We are not going to switch to something that seems worse for SNOs or clients.

Thanks @BrightSilence!

To add a bit more color, in the last conversation about QUIC with @anon27637763, I discussed congestion control. I want to bring that up again because it’s worth pointing out that our explicit interest in QUIC is to be able to deviate from what’s out of the box. We do not intend to use the standard QUIC congestion controller Google provides in the long term (though we do in the short term). The reason we’re interested in QUIC is precisely because the application code will own details about flow control and we want to tune it. We simply cannot do that with TCP, as the operating system owns that logic. As @BrightSilence said, QUIC will help us reduce round trip latency by eliminating some TCP+SSL handshake back and forth, but we could probably get pretty far by tuning TCP and SSL if that’s all we cared about. What we’re interested in with QUIC is that it (or any other session oriented protocol on top of UDP) allows us to break away from what QUIC or TCP do by default. We get to change the default behavior!

If this went over anyone’s head, here’s a quick background summary. The internet functions on “IP packets”, which are basically post cards that have a source IP and a destination IP on it. When you deposit an IP packet onto your network card, the network card sends it towards the next hop in its routing table and then forgets about it. That’s the end of its job! Maybe that packet will get there or not, who knows in what order it will arrive with other IP packets.

That’s why TCP was developed. TCP adds packet acknowledgement, ordering, port numbers to identify which process, and so on, on top of IP packets. TCP is a pretty cool protocol that changes IP’s post card-like behavior into more reliable telephone conversations. Unfortunately, it is baked into your operating system, and so if there are any behaviors you want to change about it in any way (as a multiplatform software developer for users who don’t want to install kernel modules), you’re out of luck.

Understanding not every process might want TCP forced on it, UDP was added to the TCP/IP suite [1], which, to be honest, isn’t really anything more than an escape hatch. UDP is just IP packets with a port number. UDP is a way to give a specific program, instead of your operating system, the ability to directly send IP packets (but with a port number).

You could absolutely build TCP on top of UDP… which is what QUIC is. QUIC is a little more complex in that it interweaves TLS/SSL as well and chooses different default behaviors than TCP, but that's the high-level picture. QUIC is like an attempt to build TCP again, based on things we've learned since the 1970s, with the application implementing the TCP-style algorithm/protocol instead of your operating system.
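To make the "build TCP on top of UDP" idea concrete, here is a toy stop-and-wait reliability layer over plain UDP sockets. This is a minimal sketch only, with none of QUIC's streams, TLS, or congestion control:

```python
import socket

# Toy stop-and-wait reliability layer over plain UDP datagrams.
# Illustrative only: real QUIC adds streams, TLS, loss detection,
# and congestion control on top of ideas like this.

def make_socket(port=0):
    """Bind a UDP socket on loopback (port 0 lets the OS pick one)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.settimeout(1.0)
    return s

def send_reliable(sock, dest, seq, payload, max_retries=5):
    """Send one numbered datagram and retransmit until it is ACKed."""
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(max_retries):
        sock.sendto(packet, dest)
        try:
            data, _ = sock.recvfrom(64)
            if data == b"ACK" + seq.to_bytes(4, "big"):
                return True  # delivery confirmed
        except socket.timeout:
            continue  # lost packet or lost ACK: retransmit, like TCP would
    return False

def recv_reliable(sock):
    """Receive one numbered datagram and acknowledge it to the sender."""
    data, addr = sock.recvfrom(65535)
    seq = int.from_bytes(data[:4], "big")
    sock.sendto(b"ACK" + seq.to_bytes(4, "big"), addr)
    return seq, data[4:]
```

The retransmit-on-timeout loop is exactly the kind of logic that lives in the kernel for TCP but sits in application code here, which is the flexibility the post describes.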

Once we at Storj get QUIC working at all, I believe our intention is to evaluate swapping QUIC’s congestion controller out for one based on LEDBAT (what Bittorrent uses) or something similar.
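As a toy illustration of what "swapping the congestion controller" means (this is not Storj's or quic-go's code, just the classic AIMD behavior that TCP-style controllers implement):

```python
class AIMDController:
    """Toy additive-increase/multiplicative-decrease (AIMD) controller.

    TCP and default QUIC congestion control behave roughly like this;
    a LEDBAT-style controller would instead back off based on growing
    queuing delay, before packets are actually lost.
    """

    def __init__(self, cwnd=10.0):
        self.cwnd = cwnd  # congestion window, in packets

    def on_ack(self):
        # Additive increase: grow by about one packet per round trip.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window when loss is detected.
        self.cwnd = max(1.0, self.cwnd / 2.0)
```

Because this logic lives in the application with QUIC, replacing `on_ack`/`on_loss` with delay-based rules is an ordinary code change rather than a kernel patch, which is exactly the flexibility described above.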

Ultimately, Storj is in a relatively unique position. TCP (and default-QUIC) are optimized for a handful of streams competing for bandwidth at most (and I agree with @anon27637763 that they don’t always compete fairly), whereas that’s not even the position we’re in. We commonly multiplex thousands concurrently from a single client. There are performance and efficiency gains we are leaving on the table by using congestion controllers that aren’t aware of this. The standard QUIC implementations won’t solve this problem for us, but by using an application-layer flow control such as QUIC, it opens the door to allow us to consider making and tuning these application-side optimizations down the road. Yes, we do intend to make fundamental changes to what you get out of the box with QUIC.

For SNOs, I expect the initial QUIC rollout won't do much. For clients, I hope it cuts down on first-byte latency. I believe our initial tests will have clients simply try dialing QUIC and TCP concurrently and just choose whichever is faster. If QUIC does start doing well there, SNOs that enable QUIC might win more uploads in the upload long-tail cancellation race. But the real gains, I expect, will come once we get to the new opportunities enabled by being able to tune more things.

Ultimately we’ll most certainly always need to support TCP. But, despite points raised in this thread, we do continue to believe that adding support for something besides TCP (maybe that starts with QUIC) will bring a better network experience to everyone. Because of industry backing, QUIC is an easy next step, and likely to be better treated by ISPs than if we rolled our own thing. :slight_smile:

@anon27637763 - in the previous thread you turned down a request to help debug why you were concerned about UDP. That’s fine and you’re welcome to take that position of course. Thanks for everything else you do to be a dedicated SNO! That said, I don’t want to discourage others from helping us make sure this works well (if it’s possible for this to work well). It will require testing, and so we appreciate everyone’s willingness to enable QUIC!

[1] does anyone here remember fumbling with IPX/SPX or NetBIOS? That’s my alternate history short story I would write. Everything is the same except IPX/SPX won and TCP/IP is the thing you fondly remember trying to figure out for LAN parties. And maybe we’re still crimping coax.

16 Likes