Re: Final DATA frames

Let's look at the efficiency cost in a bit more detail.  The 0.2%
figure corresponds to the 3 bytes of DATA frame header per packet.
This is the worst-case scenario, in which a separate DATA frame is
emitted in every packet.  Since we have a *long* dynamically
generated response, one can simply choose to make the frames
longer.  If anything, the longer the response, the more the
framing penalty is amortized.  (A 1-byte response, in contrast,
would incur a 200% framing penalty.)  Of course, at some point the
packets must actually be sent out -- and this is what dictates
just how long a DATA frame can be.  Hence, unless one sends one
packet at a time, the 3-byte cost is spread over several packets,
and the 0.2% becomes that many times smaller.
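
To make the arithmetic concrete, here is a quick back-of-envelope
sketch in Python.  The 1350-byte packet payload and the 1-byte
type / 2-byte length split of the 3-byte header are illustrative
assumptions on my part, not numbers taken from the draft:

    # DATA frame header overhead as a function of how many packets'
    # worth of response body each DATA frame covers.
    FRAME_HEADER = 3        # bytes per DATA frame header (assumed)
    PACKET_PAYLOAD = 1350   # bytes of stream data per packet (assumed)

    def framing_overhead(packets_per_frame):
        """Fraction of sent bytes spent on the DATA frame header."""
        payload = packets_per_frame * PACKET_PAYLOAD
        return FRAME_HEADER / (FRAME_HEADER + payload)

    for n in (1, 2, 4, 8):
        print("%2d packet(s) per DATA frame: %.3f%% overhead"
              % (n, framing_overhead(n) * 100))

    # 1 packet per frame  -> ~0.222% (the worst case above)
    # 8 packets per frame -> ~0.028%
    # (Frames longer than 16383 bytes need a 4-byte length varint,
    # but that does not change the overall picture.)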

The framing penalty is already small.  I think one would have
trouble even measuring the impact -- throughput improvement -- of
the proposed change.

The benefit should be weighed against the drawbacks of the proposal.
They have been presented in this thread already so I won't repeat
them.

We are to choose between

    1. reducing the risk of interoperability problems; and

    2. reducing data framing penalty.

If the magnitudes of (1) and (2) are about the same (this is, of
course, subjective), then I think we should not change anything.

  - Dmitri.

On Tue, Dec 11, 2018 at 08:03:56AM -0800, Ryan Hamilton wrote:
> That's a fair question. One of the features of QUIC which is so exciting is
> the performance improvements it offers. This comes from things like 0-RTT,
> no HoL blocking, better loss recovery, etc. The former affects initial
> latency, and the others come into play in the face of packet loss. But on
> low loss links with long lived flows, different dynamics come into play.
> Comparing TCP + TLS to QUIC, for example, we see that QUIC's per-packet
> encryption overhead compares unfavorably to TLS's per-record encryption
> overhead (assuming the sender is using large records). This is true with
> Google QUIC's 12 byte AEAD hashes, and is even worse with IETF QUIC's 16
> byte overhead (but of course, the 16 byte overhead provides desirable
> security properties). All of this is to say that for applications with
> low-loss links and long-lived flows, increasing QUIC overhead eats into the
> performance and threatens to tip the balance in favor of TCP. In my
> experience with 12 byte auth hashes and no DATA frame overhead, QUIC
> performance is not 10x better than TCP or 2x better than TCP. It's small
> percentages better than TCP. So a .2% increase in overhead because of DATA
> frame headers is like 10% of the gap between QUIC and TCP. This is
> significant. And when there is an extremely simple feature to avoid this
> overhead, I have a hard time thinking that overhead is justified.
> 
> On Mon, Dec 10, 2018 at 6:41 PM Dmitri Tikhonov <dtikhonov@litespeedtech.com>
> wrote:
> 
> > This change is intended to fix a purported shortcoming in the
> > HTTP/3 framing mechanism.  The claim is that
> >
> >   " DATA frame encoding is inefficient for long dynamically
> >   " generated bodies. [1]
> >
> > As shown in this thread and elsewhere, the framing overhead is
> > small.  What is the definition of "inefficient" that is used to
> > make this claim?
> >
> >   - Dmitri.
> >
> > 1. https://github.com/quicwg/base-drafts/issues/1885
> >
> > On Mon, Dec 10, 2018 at 05:55:58PM -0800, Ryan Hamilton wrote:
> > > Not all endpoints are proxies :) And some of those endpoints which are
> > > proxies are able to communicate with their backends in order to establish
> > > such knowledge. The endpoints I work with are both able to establish such
> > > knowledge, fwiw.
> > >
> > > On Mon, Dec 10, 2018 at 5:47 PM Roberto Peon <fenix@fb.com> wrote:
> > >
> > > > The simplicity of the design here is also a trap. Sure, the extension
> > > > is simple, but it also makes it simple to fail to interoperate!
> > > > To be clear, only those proxies which have established some prior
> > > > knowledge that there will be no PUSH_PROMISE can safely use this
> > > > optimization with the protocol as designed today.
> > > > -=R
> > > >
> > > >
> > > >
> > > > *From: *Ryan Hamilton <rch@google.com>
> > > > *Date: *Monday, December 10, 2018 at 5:35 PM
> > > > *To: *Martin Thomson <martin.thomson@gmail.com>
> > > > *Cc: *Mike Bishop <mbishop@evequefou.be>, Dmitri Tikhonov
> > > > <dtikhonov@litespeedtech.com>, Kazuho Oku <kazuhooku@gmail.com>,
> > > > Jana Iyengar <jri.ietf@gmail.com>, Roberto Peon <fenix@fb.com>,
> > > > HTTP Working Group <ietf-http-wg@w3.org>, Lucas Pardue
> > > > <lucaspardue.24.7@gmail.com>, IETF QUIC WG <quic@ietf.org>
> > > > *Subject: *Re: Final DATA frames
> > > >
> > > >
> > > >
> > > > As you say, there's a valid use case here. There's nothing about this
> > > > particular design which would prevent any sort of "send the body on a
> > > > unidirectional stream" extension from being worked on or implemented.
> > > > That some future extension might (or might not) be a better solution
> > > > to this use case does not seem to me to be a terribly compelling
> > > > argument for reverting this, particularly given the simplicity of
> > > > this design.
> > > >
> > > >
> > > >
> > > > (I'll also point out that delivering the body on a unidirectional
> > > > stream doesn't necessarily work well with server push, as the
> > > > PUSH_PROMISE is required to arrive before the reference to the
> > > > promised resource. So if the PUSH_PROMISE happens on the main stream
> > > > and the body goes on a different stream, that ordering is not
> > > > guaranteed without some additional properties.)
> > > >
> > > >
> > > >
> > > > On Mon, Dec 10, 2018 at 4:19 PM Martin Thomson
> > > > <martin.thomson@gmail.com> wrote:
> > > >
> > > > FWIW, there is a valid use case here, but I'm not happy with the
> > > > specific design.
> > > >
> > > > For instance, an extension that used a unidirectional stream for the
> > > > body of a request might be a better option.
> > > >
> > > > On that basis, I would revert the change.
> > > >
> > > >
> >
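
P.S.  As a rough sanity check on the per-packet vs. per-record
encryption overhead comparison quoted above, a small arithmetic
sketch in the same vein; the 1350-byte packet payload and the
16 KB TLS record size are illustrative assumptions, not measured
values:

    # Authentication tag overhead per unit of payload.  QUIC carries
    # a tag on every packet, TLS on every record.
    PACKET_PAYLOAD = 1350     # bytes of payload per QUIC packet (assumed)
    TLS_RECORD = 16 * 1024    # bytes per large TLS record (assumed)

    cases = [
        ("gQUIC, 12-byte tag per packet",     12, PACKET_PAYLOAD),
        ("IETF QUIC, 16-byte tag per packet", 16, PACKET_PAYLOAD),
        ("TLS, 16-byte tag per 16 KB record", 16, TLS_RECORD),
    ]

    for name, tag, unit in cases:
        print("%-35s %.3f%% overhead" % (name + ":", tag / unit * 100))

    # Roughly 0.9% to 1.2% per-packet cost for QUIC vs. roughly 0.1%
    # per-record cost for TLS -- these are the magnitudes against which
    # the quoted message weighs the 0.2% DATA frame figure.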

Received on Tuesday, 11 December 2018 17:14:11 UTC