
Re: Prioritizing QUIC DATAGRAMs (was: Re: [Masque] Prioritizing HTTP DATAGRAMs)

From: Patrick Meenan <patmeenan@gmail.com>
Date: Wed, 23 Jun 2021 09:10:58 -0400
Message-ID: <CAJV+MGz=sszxnUn-oSrGbd_az7QPATLB_3VeaHmC4R1Gj0ua8g@mail.gmail.com>
To: Lucas Pardue <lucaspardue.24.7@gmail.com>
Cc: Samuel Hurst <samuelh@rd.bbc.co.uk>, Spencer Dawkins at IETF <spencerdawkins.ietf@gmail.com>, QUIC WG <quic@ietf.org>, David Schinazi <dschinazi.ietf@gmail.com>, Kazuho Oku <kazuhooku@gmail.com>, HTTP Working Group <ietf-http-wg@w3.org>, MASQUE <masque@ietf.org>
Using multiple connections should be strictly worse than streams sharing a
connection; otherwise we probably have a gap that needs to be filled.
Multiple connections kick the can down to the congestion control
algorithms of the separate connections, which have to play nicely with
each other and hopefully produce a decent experience. With a single
connection we have better knowledge of the overall transport and can make
better decisions.

That said, I'd hate for us to go down the same road we did for HTTP/2 and
design a complex scheme without practical deployment experience first. Is
there enough flexibility in the extensions to allow an entity that
controls both ends of the tunnel to experiment before evolving a standard
(on the tunnel side of things)?

There are a lot of moving parts that likely need specs, and I'm not sure
this is the right place to sort through all of them, but this is roughly
the mental model I have of the problem space:

Client Application (A) <===> LAN Aggregation (B) <------------------------>
WAN Aggregation (C) <======> Remote Application (D)

A & B can be:
- within the same application (browser using data reduction proxy or IP
anonymization).
- on the same device (software VPN, Cloudflare Warp, etc)
- on the same LAN (hardware VPN, ISP internal tunneling, eero->some Amazon
service, etc)

I'm assuming C <=> D is plain IP on the Internet, with no assumptions
about the software at D and no expectation that D will be able to
influence the prioritization from its side directly (maybe indirectly, by
application-specific info sent to A to adjust from the client side).

I think most of this discussion is around the B <-> C connection, where
the traffic is tunneled over QUIC (and that's usually the constrained
link anyway). It assumes B has some knowledge of how to prioritize the
tunneled traffic. In a lot of early cases the same company likely controls
B & C and should be able to experiment. There are cases where the B/C
tunnel should use a standard (allowing Chrome to select IP anonymization
services from multiple providers, for example), but if we have
prioritization as an optional extension that interops well, that case
could fall back to unprioritized until we have enough field experience to
see what works well before standardizing it.
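To make the "optional extension with graceful fallback" idea concrete, here is a minimal sketch. The settings key, class name, and urgency semantics (lower value = more urgent, in the spirit of the Extensible Priorities scheme) are all hypothetical, not from any spec:

```python
# Illustrative sketch (not from any spec): prioritization as an optional
# extension on the B<->C tunnel. If the C endpoint has not advertised
# support, B falls back to plain FIFO, so the two still interoperate.
from collections import deque
import heapq
import itertools

class TunnelScheduler:
    """Chooses the next tunneled payload for B to send toward C."""

    def __init__(self, peer_settings):
        # Only prioritize if the peer opted in (hypothetical settings key).
        self.prioritized = peer_settings.get("enable_priorities", False)
        self._fifo = deque()
        self._heap = []
        self._seq = itertools.count()  # tie-breaker: FIFO within a level

    def enqueue(self, payload, urgency=3):
        # Lower urgency value = more urgent.
        if self.prioritized:
            heapq.heappush(self._heap, (urgency, next(self._seq), payload))
        else:
            self._fifo.append(payload)

    def dequeue(self):
        if self.prioritized:
            return heapq.heappop(self._heap)[2] if self._heap else None
        return self._fifo.popleft() if self._fifo else None
```

The point is that the scheduling policy lives entirely at B, so two parties controlling B & C can swap it out freely while an unextended C just sees ordinary tunneled traffic.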

Where it feels like we have a gap is between A & B. Within an application
we can build proprietary interfaces, but once you're at a machine or
network level I don't think there's a SOCKS-level equivalent for QoS, so
it mostly comes down to inferring priorities, which isn't great.

"Prioritization" might be a bit overloaded when we move beyond simple
request/response streams. Game data, RTC, video streaming have different
characteristics where latency/jitter can be critical up to a minimum
threshold but you might not want to give the stream highest priority for it
to do whatever it wants with (a 720p RTC video stream while allowing a
parallel 4K video stream and software download is more useful than a 4K RTC
video stream with the video rebuffering for example). I don't know if that
level of specificity would help more than basic prioritization schemes
though (except starvation is more of a problem in the tunneld case).
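One way to read "critical up to a minimum threshold" is priority with a ceiling: the latency-sensitive traffic goes first, but only up to a byte budget per transmission round, so it can't starve everything else. A hypothetical sketch (all names and numbers illustrative):

```python
# Hypothetical sketch of "priority up to a threshold": latency-sensitive
# RTC packets are served first, but only up to a per-round byte cap, so
# an RTC stream cannot consume the whole link the way an uncapped
# highest-priority stream could.

class CappedPriorityLink:
    def __init__(self, rtc_cap_bytes):
        self.rtc_cap = rtc_cap_bytes  # per-round budget for RTC traffic
        self.rtc_q = []               # latency-sensitive packets
        self.bulk_q = []              # everything else (downloads, etc.)

    def send_round(self, budget_bytes):
        """Pick packets for one transmission round: RTC first, but capped."""
        sent, rtc_used, total = [], 0, 0
        while self.rtc_q:
            pkt = self.rtc_q[0]
            if (rtc_used + len(pkt) > self.rtc_cap
                    or total + len(pkt) > budget_bytes):
                break  # RTC has used its share this round
            self.rtc_q.pop(0)
            rtc_used += len(pkt)
            total += len(pkt)
            sent.append(pkt)
        # Remaining budget goes to bulk traffic.
        while self.bulk_q and total + len(self.bulk_q[0]) <= budget_bytes:
            pkt = self.bulk_q.pop(0)
            total += len(pkt)
            sent.append(pkt)
        return sent
```

With a 100-byte RTC cap and a 300-byte round, a second queued RTC packet waits for the next round while bulk traffic fills the rest of the budget, which is the 720p-RTC-plus-download trade described above.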

To me, it feels like where we are today is roughly:
- make sure MASQUE allows for extensions for someone who controls A, B & C
to experiment with prioritization
- make sure it is interoperable with a C endpoint that has not implemented
QoS/prioritization
- wait for experience and field data from experiments with MASQUE
QoS/prioritization before standardizing something

On Wed, Jun 23, 2021 at 7:05 AM Lucas Pardue <lucaspardue.24.7@gmail.com>
wrote:

> Hey Sam,
>
> On Wed, 23 Jun 2021, 09:36 Samuel Hurst, <samuelh@rd.bbc.co.uk> wrote:
>
>>
>> The primary issue that I can see with this is that it potentially leads
>> to the inability to use DATAGRAMs for multiple application protocols in
>> an interoperable way. While it's fully possible for one application to
>> de/multiplex its own traffic on the DATAGRAM channel, multiple
>> applications sharing the same tunnel might have different (and
>> incompatible) ideas on how to use an identifier that multiplexes
>> traffic, or may use different mechanisms entirely.
>>
>
> Right. But having an identifier in the transport doesn't help much here.
> Streams have such an identifier (split into 4 types) and don't solve the
> problem you describe IIUC.
>
> To give an example, an application mapping like HTTP/3 makes use of 3 of
> those types, using up to the entirety of the permitted values. There isn't
> currently a way for multiple completely independent HTTP/3 connections to
> share a QUIC connection. HTTP/3 connection coalescing achieves sharing to
> some degree, but it relies on the demultiplexing occurring in the HTTP
> messages. Similarly, the CONNECT and CONNECT-UDP methods use HTTP to signal
> the intent of streams or HTTP/3 datagrams, and applications can build
> context state around this to decide how to dispatch received data.
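[The context-state dispatch described here can be sketched roughly as follows. This is purely illustrative: a single leading byte stands in for a real varint context ID, and all names are hypothetical.]

```python
# Illustrative sketch of dispatching received datagrams by a per-request
# context registered at CONNECT / CONNECT-UDP time. A single leading byte
# stands in for a real varint context ID.

class DatagramDemuxer:
    def __init__(self):
        self._handlers = {}  # context_id -> callable(payload)

    def register(self, context_id, handler):
        # Called when the HTTP exchange establishes a new context.
        self._handlers[context_id] = handler

    def dispatch(self, datagram):
        context_id, payload = datagram[0], datagram[1:]
        handler = self._handlers.get(context_id)
        if handler is None:
            return False  # unknown context: drop it
        handler(payload)
        return True
```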
>
>
>> It's this danger of lack of interoperability that I don't like. I don't
>> like the idea of having to write lots of application notes saying "you
>> can do A but not with B, but B and C can coexist", which leads to
>> applications exploding when someone does something unexpected with
>> option D down the line, that I didn't foresee.
>>
>> Of course, I don't know exactly how much call there is for doing this.
>> For example, with regards to my RTP Tunnelling draft [1] that Spencer
>> linked to above I haven't encountered a need to run something alongside
>> RTP/RTCP in DATAGRAMs yet, but that's not to say that it isn't a
>> possibility.
>>
>
> The challenge is that if you want to mix usage intents for a shared
> transport resource, you probably need some way to signal that. Since QUIC
> delegates the use of streams and DATAGRAMs to upper-layer applications, it's
> very tricky to design anything at the lower layer to accommodate this.
>
> One approach to allow general purpose indiscriminate resource sharing
> would be to design a virtualization layer that sits just below actual
> mappings. Such a thing would fool upper layers into thinking they had
> access to native QUIC. The layer of indirection would probably require a
> "hypervisor" to manage things properly. MASQUE is vaguely an instantion of
> this design that relies on HTTP. WebTransport is vaguely another example,
> we used to have a QuicTransport protocol but pivoted away from it back to
> HTTP.
>
> The potential solution space is large. However, I'm not sure the problem
> space you describe is broad enough to motivate building something else. It
> would be interesting to hear otherwise. At the end of the day, there needs
> to be strong motivation for building complexity and/or runtime overhead in
> order to share QUIC connection resources. Otherwise, isn't it just easier
> to use multiple separate connections?
>
>
>>
>> I'm certainly interested in some form of prioritisation for my RTP
>> Tunnelling draft [1], as protocols like RTP run in real-time and other
>> things getting in the way can easily cause poor quality of service. This
>> could manifest as the application needing longer receive buffers than
>> necessary to ensure smooth audio and video playback at the receiver, or
>> as random pauses and glitches in the stream.
>>
>
> Makes sense. Would you design that into your RTP tunneling mapping?
>
> Cheers
> Lucas
>
Received on Wednesday, 23 June 2021 13:11:31 UTC
