Why QUIC is not a replacement for TCP • The Register

Systems Approach Some might say that QUIC will start replacing TCP. This week, I want to argue that QUIC actually solves a different problem than TCP does, and should therefore be viewed as something other than a TCP replacement.

It may well be that QUIC will become the default transport for some (or even most) applications, but I believe that’s because TCP has been pushed into roles it wasn’t originally intended for. Let’s take a step back to see why I’m making this claim.

In 1995, Larry Peterson and I were working on the first edition of Computer Networks: A Systems Approach, and we had reached the point of writing the chapter on transport protocols, which we titled "End-to-End Protocols."

Back then, there were only two notable transport protocols on the Internet, UDP and TCP, so we gave each one its own section. Since our book aims to teach network principles and not just the content of RFCs, we framed the two sections as two different communication paradigms: a simple demultiplexing service (exemplified by UDP) and a reliable byte stream (TCP).
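The "simple demultiplexing service" paradigm can be seen in just a few lines of socket code. The sketch below (my illustration, not from the book) uses loopback UDP sockets; the port numbers 50007 and 50008 are arbitrary choices, and the destination port is the only thing steering each datagram to its receiver.

```python
import socket

# Sketch of UDP as a pure demultiplexing service: the destination port alone
# decides which receiver gets each datagram. Loopback addresses and the port
# numbers 50007/50008 are arbitrary choices for this illustration.

def make_receiver(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    s.settimeout(2.0)                     # don't block forever if a datagram is lost
    return s

rx_a = make_receiver(50007)               # stands in for application A
rx_b = make_receiver(50008)               # stands in for application B

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"for A", ("127.0.0.1", 50007))
tx.sendto(b"for B", ("127.0.0.1", 50008))

data_a, _ = rx_a.recvfrom(1024)
data_b, _ = rx_b.recvfrom(1024)
print(data_a, data_b)                     # each datagram reaches its own socket

for s in (rx_a, rx_b, tx):
    s.close()
```

Note that UDP adds essentially nothing beyond this port-based dispatch: no reliability, no ordering, no connection state. Everything else in this story is about what applications need on top of that.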

But there was a third paradigm that Larry said we needed to cover that didn't really have a well-known Internet protocol example: Remote Procedure Call (RPC). The examples we used to illustrate RPC in 1995 seem curious now: SunRPC and a home-grown protocol from Larry's x-kernel research at the time. Today there are many options for RPC implementations running over IP, with gRPC being one of the most well-known examples.


Why did we feel the need to write an entire section on RPC when most other networking books covered only TCP and UDP? For one thing, RPC was one of the most important research areas in the distributed systems community at the time, with the 1984 paper by Birrell and Nelson spurring a generation of RPC-related projects. And in our view, a reliable byte stream is not the right abstraction for RPC.

At the heart of RPC is a request/reply paradigm: the client sends a set of arguments to the server, the server performs some computation on those arguments, and the results of that computation are returned. Yes, a reliable byte stream could get the arguments and results across the network correctly, but RPC is more than that.

Aside from the problem of serializing arguments for transmission over a network (which we also cover later in the book), RPC isn't really about transmitting a stream of bytes; it's about sending a message and receiving a response to it. So it is somewhat like a datagram service (as provided by UDP or IP), but it requires more than unreliable datagram delivery.

RPC has to deal with lost, out-of-order, and duplicate messages; an identifier space is required to match requests with responses; and message fragmentation/reassembly must be supported, to name just a few requirements. Out-of-order delivery is also desirable for RPC, and that is something a reliable byte stream rules out. It is probably no coincidence that so many RPC frameworks emerged in the 1980s and 1990s: people building distributed systems needed an RPC mechanism, and nothing in the standard TCP/IP protocol suite was readily available. (RFC 1045 does define an experimental RPC-oriented transport, VMTP, but it never seems to have caught on.) Nor was it obvious then that TCP/IP would become as dominant as it is today, so some RPC frameworks (e.g., DCE) were designed to be independent of the underlying network protocols.
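To make those requirements concrete, here is a toy sketch of the bookkeeping an RPC layer must build on top of an unreliable datagram service. All the names here (UnreliableChannel, rpc_call, and so on) are invented for this illustration; real frameworks such as SunRPC or gRPC are far more elaborate, and the reply path is simplified to keep the example short.

```python
import random

# Illustrative sketch only: request-id matching, retransmission on loss, and
# duplicate suppression, layered over a simulated unreliable datagram channel.
# Drop/duplicate probabilities and the seed are arbitrary.

class UnreliableChannel:
    """Simulated datagram channel that can drop or duplicate messages."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.inbox = []

    def send(self, msg):
        if self.rng.random() < 0.3:        # datagram lost in transit
            return
        self.inbox.append(msg)
        if self.rng.random() < 0.2:        # datagram duplicated in transit
            self.inbox.append(msg)

def server_step(channel, handler, seen):
    """Drain queued requests, executing each request id at most once."""
    replies = []
    while channel.inbox:
        req_id, args = channel.inbox.pop(0)
        if req_id not in seen:             # duplicate suppression
            seen[req_id] = handler(args)
        replies.append((req_id, seen[req_id]))
    return replies                         # reply path assumed reliable for brevity

def rpc_call(channel, handler, seen, req_id, args, max_tries=10):
    """Retransmit the request until a reply carrying our id comes back."""
    for _ in range(max_tries):
        channel.send((req_id, args))       # (re)send on every attempt
        for rid, value in server_step(channel, handler, seen):
            if rid == req_id:              # match reply to outstanding request
                return value
    raise TimeoutError("no reply after %d tries" % max_tries)

seen = {}
channel = UnreliableChannel(seed=42)
result = rpc_call(channel, lambda x: x * 2, seen, req_id=1, args=21)
print(result)  # 42
```

Even this toy version needs an identifier space, a retransmission loop, and a table of already-executed requests; none of that comes for free from a byte stream or a raw datagram service.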

The lack of support for RPC in the TCP/IP stack prepared the ground for QUIC.

When HTTP arrived in the early 1990s, it was solving an information-sharing problem more than an RPC problem, but it implemented request/response semantics. HTTP's designers, presumably for lack of a better option, chose to run HTTP over TCP, with notoriously poor performance in the early versions, since a new connection was established for every GET.

A variety of improvements to HTTP such as pipelining, persistent connections, and the use of parallel connections were introduced to improve performance, but TCP’s reliable byte stream model was never perfectly suited to HTTP.

With the advent of Transport Layer Security (TLS), which added yet another round-trip exchange of cryptographic information, the disconnect between what HTTP needs and what TCP provides became increasingly apparent. This was well explained in Jim Roskind's 2012 QUIC design document: head-of-line blocking, poor congestion response, and the extra RTT(s) introduced by TLS were all identified as problems inherent in running HTTP over TCP.

One way to describe what happened here is this: the "narrow waist" of the Internet was originally just the Internet Protocol, designed to support a variety of protocols above it. But somehow the "waist" grew to include TCP and UDP as well, since they were the only transports available. If you wanted only a datagram service, you used UDP. If you needed any sort of reliable delivery, TCP was the answer. If you needed something that mapped onto neither unreliable datagrams nor reliable byte streams, you were out of luck. It was asking a lot of TCP to be everything to so many upper-layer protocols.

There is a lot going on in QUIC: its definition spans three RFCs, covering the basic protocol (RFC 9000), its use of TLS (9001), and its congestion control mechanisms (9002). But at its core, it is an implementation of the missing third paradigm for the Internet: RPC.

If what you really want is a reliable byte stream, for example when downloading that multi-gigabyte OS update, then TCP is well designed for the task. But HTTP(S) is much more like RPC than like a reliable byte stream, and QUIC can be viewed as finally providing the RPC paradigm for the Internet protocol suite.

This will certainly benefit applications running over HTTP(S), especially gRPC and all the RESTful APIs we’ve come to rely on.

When we previously wrote about QUIC, we found it to be a good case study in how the layering of a system may need to be revisited as requirements become clearer. The point here is that TCP satisfies one set of requirements, that of a reliable byte stream, and its congestion control algorithms evolved in the service of those requirements.

QUIC meets a different set of requirements. And since HTTP is so central to today's Internet (indeed, it has been argued, here and here, that it is becoming the new "narrow waist"), it may be that QUIC becomes the dominant transport protocol, not because it replaces TCP exactly, but because it meets the requirements of the dominant applications running above it. ®

https://www.theregister.com/2022/10/07/quic_tcp_replacement/


