Re: Cost analysis: (was: Getting to Consensus: CONTINUATION-related issues)

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Sat, 19 Jul 2014 15:10:30 +1200
Message-ID: <53C9E1A6.4020507@treenet.co.nz>
To: ietf-http-wg@w3.org
On 19/07/2014 7:37 a.m., Poul-Henning Kamp wrote:
> In message <CABkgnnWmBUNKFDH8JKz8GKRgZDaS=1f6yQ0C6CdF_zv=QnPR8A@mail.gmail.com>
> , Martin Thomson writes:
> 
>> I find that selectively trivializing various aspects of the space
>> isn't particularly constructive.
> 
> I agree.  Misinformation is also bad.
> 
>> On the one side:
>>
>> CONTINUATION has a cost in code complexity.  It defers the discovery
>> of what might be a surprisingly large amount of state.
> 
> And even though CONTINUATIONs in themselves do not imply or cause
> any limit to exist, all implementations are going to have limits,
> one way or another.  What these limits might be is anyone's guess
> at this point, but HTTP/1 deployments might be indicative.
> 
> Reception of CONTINUATION frames carries a cost in complexity for
> memory and buffer management, independent of whether limits exist.
> 
> CONTINUATIONs are significantly more complex to describe in the
> draft (compare red/green in the Greg et al. draft).
> 
>> On the other:
>>
>> A hard cap on size (i.e., option A) has a cost in code complexity.
> 
> I presume you mean ... over option B) ?
> 
> If so, it will be quite the contrary:  Both senders and receivers
> will have much simpler memory management and far less state to keep
> track of with complete header-sets in a single frame.
> 
>> It requires that encoders be prepared to double their state commitment so
>> that they can roll back their header table when the cap is hit.
> 
> No, it doesn't: the encoders can just sacrifice the connection and
> open another.  That will be a very common implementation choice,
> because header-sets larger than the other side is willing to accept
> are going to be incredibly rare, and primarily caused by attacks.

That connection behaviour is severe overkill. The rollback argument is a
strawman.

HPACK compressed size can be bounded as frame_length < sum(2 +
header_size), where header_size is the length of an individual header in
its HTTP/1.1 form, and all header values found in the static table
reduce to '1'. Under option A senders are required to buffer up to N
bytes of header un-compressed (their choice of N), plus a buffer of the
same size for compressing into.
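
In code, that bound works out to roughly the following C sketch (the
header struct and the static-table flag are hypothetical stand-ins, and
"HTTP/1.1 size" is read here as name + ": " + value):

    #include <stddef.h>
    #include <string.h>

    struct header {
        const char *name;
        const char *value;
        int value_in_static_table;  /* hypothetical: value matches a static table entry */
    };

    /* Upper bound on HPACK output: sum(2 + header_size), where
     * header_size is the header's HTTP/1.1 length and any value found
     * in the static table counts as a single octet. */
    static size_t hpack_size_bound(const struct header *h, size_t n)
    {
        size_t bound = 0;
        for (size_t i = 0; i < n; i++) {
            size_t vlen = h[i].value_in_static_table ? 1 : strlen(h[i].value);
            bound += 2 + strlen(h[i].name) + 2 + vlen;  /* inner 2 is ": " */
        }
        return bound;
    }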

If the un-compressed version exceeds what the local end is willing to
buffer OR the above formula's output exceeds what the remote end is
willing to receive - then the frame does not need to be compressed at
all: HPACK literal representations without indexing leave the dynamic
table untouched. Ergo, no rollback required.
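
As a sketch of that branch (the connection type, limits, and send
functions are hypothetical names, not anything from the draft):

    struct conn;  /* hypothetical opaque connection type */
    int send_headers_as_literals(struct conn *, const struct header *, size_t);
    int compress_and_send(struct conn *, const struct header *, size_t);

    /* Decide before compressing, so no rollback is ever needed. */
    static int send_header_set(struct conn *c, const struct header *h,
                               size_t n, size_t uncompressed_len,
                               size_t local_limit, size_t peer_limit)
    {
        if (uncompressed_len > local_limit ||
            hpack_size_bound(h, n) > peer_limit) {
            /* Literals without indexing never touch the dynamic table. */
            return send_headers_as_literals(c, h, n);
        }
        return compress_and_send(c, h, n);
    }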

What *is* required is buffer space to hold the un-compressed headers
AND the incompletely compressed headers simultaneously during
compression.
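
That is, something like (N and the struct are illustrative only):

    enum { N = 16384 };  /* sender's choice of N; 16 KiB is only an example */

    /* Both buffers live at once while compressing. */
    struct encode_buffers {
        unsigned char raw[N];    /* un-compressed header-set */
        unsigned char hpack[N];  /* same-sized buffer to compress into */
        size_t raw_len;
        size_t hpack_len;
    };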

The solution to this appears to be allowing senders to output an
estimated frame size as the frame length value, then pad with suffix
octets up to that length if the header-set somehow compresses smaller
than estimated. Such frames could be streamed directly to the recipient
with a specific size up front and no additional buffer requirements.
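
Roughly (the writers here are hypothetical, and the suffix-padding
convention is the one proposed above, not part of the current draft):

    struct conn;  /* hypothetical opaque connection type */
    void write_frame_header(struct conn *, int type, size_t len, int id);
    size_t hpack_encode_streaming(struct conn *, const struct header *, size_t);
    void write_octet(struct conn *, unsigned char);

    /* Declare the size bound up front, stream the HPACK output, then
     * pad with suffix octets to the declared length. */
    static void send_streamed(struct conn *c, int stream_id,
                              const struct header *h, size_t n)
    {
        size_t declared = hpack_size_bound(h, n);
        write_frame_header(c, 0x1 /* HEADERS */, declared, stream_id);
        size_t actual = hpack_encode_streaming(c, h, n);
        while (actual < declared) {
            write_octet(c, 0x00);  /* suffix padding, as proposed above */
            actual++;
        }
    }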

Amos