Re: Cost analysis: (was: Getting to Consensus: CONTINUATION-related issues)

On Fri, Jul 18, 2014 at 8:10 PM, Amos Jeffries <squid3@treenet.co.nz> wrote:

> On 19/07/2014 7:37 a.m., Poul-Henning Kamp wrote:
> > In message <CABkgnnWmBUNKFDH8JKz8GKRgZDaS=1f6yQ0C6CdF_zv=QnPR8A@mail.gmail.com>,
> > Martin Thomson writes:
> >
> >> I find that selectively trivializing various aspects of the space
> >> isn't particularly constructive.
> >
> > I agree.  Misinformation is also bad.
> >
> >> On the one side:
> >>
> >> CONTINUATION has a cost in code complexity.  It defers the discovery
> >> of what might be a surprisingly large amount of state.
> >
> > And even though CONTINUATION frames in themselves do not imply or cause
> > any limit to exist, all implementations are going to have limits,
> > one way or another.  What these limits might be is anyone's guess
> > at this point, but HTTP/1 deployments might be indicative.
> >
> > Reception of CONTINUATION frames carries a cost in complexity for
> > memory and buffer management, independent of there being limits or
> > not.
> >
> > CONTINUATIONS are significantly more complex to describe in the
> > draft (compare red/green in the Greg et al. draft).
> >
> >> On the other:
> >>
> >> A hard cap on size (i.e., option A) has a cost in code complexity.
> >
> > I presume you mean ... over option B)?
> >
> > If so, it will be quite the contrary:  Both senders and receivers
> > will have much simpler memory management and far less state to keep
> > track of with complete header-sets in a single frame.
> >
> >> It requires that encoders be prepared to double their state commitment so
> >> that they can roll back their header table when the cap is hit.
> >
> > No, it doesn't: the encoders can just sacrifice the connection and
> > open another, which will be an incredibly common implementation
> > choice, because header-sets larger than the other side is willing to
> > accept are going to be incredibly rare, and primarily caused by attacks.
>
> That connection behaviour is severe overkill. The rollback argument is a
> strawman.


> HPACK compressed size can be calculated as frame_length < sum(2 +
> header_size), where header_size is the length of an individual header in
> its HTTP/1.1 form, and all header values in the static table reduce to '1'.
> Under option A senders are required to buffer up to N bytes of header
> un-compressed (their choice of N), plus a buffer of the same size for
> compressing into.
>
>
A 'compressed' header could be larger than the original (4 times larger for
some values), or it could be much smaller.
I don't follow your calculation.
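
As a point of reference, here is a minimal sketch (all names hypothetical,
not from any draft) of a conservative upper bound an encoder could compute
for a header block, assuming it falls back to literal, non-Huffman HPACK
representations (a choice an encoder can always make). HPACK's Huffman code
can expand rare octets to roughly four times their size, which is presumably
the worst case alluded to above.

def hpack_literal_upper_bound(headers):
    """Conservative size bound for an HPACK-encoded header block, assuming
    every field is sent as a literal without Huffman coding. `headers` is an
    iterable of (name, value) byte strings. Illustrative only; a real encoder
    would also consult the static and dynamic tables."""

    def int_octets(n, prefix_bits=7):
        # HPACK integer encoding: one prefix octet, then 7-bit continuation octets.
        limit = (1 << prefix_bits) - 1
        if n < limit:
            return 1
        n -= limit
        octets = 2  # the prefix octet plus the final continuation octet
        while n >= 128:
            n >>= 7
            octets += 1
        return octets

    total = 0
    for name, value in headers:
        total += 1                                    # representation prefix octet
        total += int_octets(len(name)) + len(name)    # length-prefixed literal name
        total += int_octets(len(value)) + len(value)  # length-prefixed literal value
    return total

For short fields that works out to roughly length plus a few octets of
overhead per header; the "reduces to '1' for static-table values" step in the
formula above is the part I don't follow.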


> If the un-compressed version exceeds what the local end is willing to
> buffer OR the above formula output exceeds what the remote end is
> willing to receive - then the frame does not need to be compressed at
> all. Ergo, no rollback required.
>
>
That implies a rollback is not required only if one is willing to forgo
compression in a large number of cases where compression would have been
highly effective.
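
To make the tradeoff concrete, here is a sketch (hypothetical names and
limits, not anything from the drafts) of the "decide up front, never roll
back" approach: when the size checks fail, the encoder falls back to
representations that never touch the dynamic table, so there is nothing to
undo; the price is giving up compression in exactly those cases.

def choose_encoding(headers, local_buffer_limit, peer_frame_limit):
    """One way to avoid header-table rollback under a hard frame-size cap:
    decide before encoding whether table-modifying representations are safe.
    All names and limits here are hypothetical."""
    # Rough HTTP/1.x-style wire size of the uncompressed header set
    # ("name: value\r\n" per field).
    uncompressed = sum(len(n) + len(v) + 4 for n, v in headers)
    # Crude worst-case bound for a literal, non-Huffman HPACK encoding:
    # name and value octets plus a few octets of prefix overhead per field.
    worst_case = sum(len(n) + len(v) + 8 for n, v in headers)

    if uncompressed > local_buffer_limit or worst_case > peer_frame_limit:
        # Literal-without-indexing representations never modify the dynamic
        # table, so nothing has to be rolled back; compression for this
        # header set is simply forfeited.
        return "literal-without-indexing"
    return "indexed-where-possible"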


> What *is* required is buffer space to hold the un-compressed
> headers AND the incompletely compressed headers simultaneously during
> compression.
>
>
And, done as suggested here, it requires that we destroy the value
proposition of compression, since a large percentage of the time we would
decline to compress when compressing would have been successful.



> The solution to this appears to be allowing senders to output an estimated
> frame size as the frame length value, then pad with suffix octets up to that
> length if the content somehow compresses smaller than estimated. Such frames
> could be streamed directly to the recipient with a specific size up front and
> no additional buffer requirements.
>

The whole point of compression is to send fewer bytes, and with this
proposal we're almost always guaranteed not to do that.
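
For illustration, a sketch (hypothetical helpers; nothing like this exists
in HTTP/2 or HPACK today) of the "declare an estimate, then pad" scheme
quoted above. Whatever the encoder actually achieves, the bytes on the wire
always equal the declared estimate, which is why the savings evaporate.

def encode_with_padded_estimate(headers, estimate, encode_block):
    """Sketch of the estimate-then-pad scheme. `estimate` is assumed to be a
    safe upper bound (e.g. the literal bound sketched earlier) declared as
    the frame length before encoding starts; `encode_block` stands in for a
    real HPACK encoder. How padding octets would be represented is left to
    the hypothetical proposal."""
    block = encode_block(headers)                # actual compressed output
    padding = b"\x00" * (estimate - len(block))  # filler up to the declared length
    wire_bytes = block + padding
    assert len(wire_bytes) == estimate           # wire size never drops below the estimate
    return wire_bytes

If the estimate has to be a safe upper bound, i.e. roughly the uncompressed
size, then the receiver always sees roughly that many bytes regardless of how
well the headers actually compressed.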

-=R


>
> Amos
>
>
