Re: Header Size? Was: Our Schedule

Note that the shared-state problem is an issue with any compressor that
shares state (it is almost definitional! :) ). The majority (if not all)
of compressors that would act on a per-frame basis either offer limited
benefit (e.g. Huffman, which one can do today) or have potentially poor
security properties.

We have made many design concessions on behalf of proxies. I've driven
many, if not most, of them, as I've acutely felt the pain of operating
proxies at large scale for years. This particular design point (that of
sharing one context per connection) was chosen because the cost/complexity
of having multiple contexts was not assured to be outweighed by the
benefit. The original design, discarded for that reason, had stream-groups
which defined the compression and flow-control context into which a stream
would be binned.

Put another way, the issue is less a matter of HPACK specifically and
more a matter of how many compression contexts we wish to manage. The
optimal arrangement for proxies is generally one context per server
endpoint (e.g. per origin), or one context per client endpoint, but the
complexity cost of that is higher.
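
To make the cost/complexity point concrete, here is a rough sketch
(hypothetical names, Python, not from any draft) of the shipped
one-context-per-connection shape next to the per-origin alternative:

# Rough sketch only: "CompressionContext" stands in for HPACK-like
# encoder/decoder state; it is not an API from any draft.

class CompressionContext:
    def __init__(self, max_table_size=4096):
        self.max_table_size = max_table_size
        self.dynamic_table = []   # the shared, mutable state

# What we shipped: one context per connection, shared by every stream.
class Connection:
    def __init__(self):
        self.ctx = CompressionContext()

# The discarded stream-group / per-endpoint shape: one context per origin.
class ProxyUpstreamConnection:
    def __init__(self):
        self.ctx_by_origin = {}   # origin -> CompressionContext

    def context_for(self, origin):
        # Better locality (each origin's header set stays together), but
        # more state for the peer to bound and evict, and the grouping
        # has to be signalled on the wire somehow.
        return self.ctx_by_origin.setdefault(origin, CompressionContext())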

-=R




On Fri, May 30, 2014 at 1:51 PM, Greg Wilkins <gregw@intalio.com> wrote:

>
> But proxies will already work badly with the shared HPACK table.  If the
> different streams have significantly different header sets, then they will
> be forever resetting the table or, worse, replacing each header one by one
> (as I doubt many encoders will have the look-ahead logic to guess whether
> reset or replace is best).
>
> So a single table on channel 0 will probably not be that different.   At
> least a single static table on 0 would allow streams to be processed in
> parallel and/or out of order.
>
> But I do think the proxy case is important and we should support
> aggregating streams well.  HPACK does not do that now, but a channel 0
> mechanism could be augmented with header-set versions, so each stream
> would simply indicate which version it refers to.  This would support
> proxies well and also allow arbitrary reordering and parallel execution
> of streams.
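>
> Roughly what I have in mind (a hypothetical sketch, not HPACK and not a
> concrete proposal): the table is only mutated by frames on channel 0,
> each mutation bumps a version number, and per-stream decoding just reads
> whichever version the stream names:
>
> class SharedHeaderTable:
>     def __init__(self):
>         self.versions = {0: {}}      # version -> {name: value}
>         self.current = 0
>
>     def apply_channel0_update(self, headers):
>         # Only channel 0 frames mutate the table; each update yields a
>         # new immutable version (old ones would need garbage collection).
>         new = dict(self.versions[self.current])
>         new.update(headers)
>         self.current += 1
>         self.versions[self.current] = new
>
>     def decode_stream_headers(self, version, stream_headers):
>         # Read-only per stream: look up the named version and overlay
>         # the stream-specific headers.  Any order, or in parallel.
>         merged = dict(self.versions[version])
>         merged.update(stream_headers)
>         return merged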
>
> In short, I think that considering proxies is an argument against HPACK
> rather than for it.
>
> cheers
>
> On 30 May 2014 20:13, Roberto Peon <grmocg@gmail.com> wrote:
>
>> What he said :)
>>
>>
>> On Fri, May 30, 2014 at 9:43 AM, Michael Sweet <msweet@apple.com> wrote:
>>
>>> Richard,
>>>
>>> I'm thinking from the proxy to the upstream server.  If you reuse the
>>> same upstream connection for multiple downstream clients, it is quite
>>> likely that they will not have the same User-Agent or other headers...
>>>
>>>
>>> On May 30, 2014, at 12:06 PM, Richard Wheeldon (rwheeldo) <
>>> rwheeldo@cisco.com> wrote:
>>>
>>> Speaking as a proxy developer, I like the idea of putting common
>>> header stuff onto frame 0. Common identity-specific stuff (user-agent)
>>> can easily be shared. We already do similar things (re-using state from
>>> earlier requests on a keep-alive connection). It's a big win over
>>> re-sending.
>>>
>>>
>>>
>>> Richard
>>>
>>>
>>>
>>> *From:* Michael Sweet [mailto:msweet@apple.com]
>>> *Sent:* 30 May 2014 09:14
>>> *To:* Greg Wilkins
>>> *Cc:* Matthew Kerwin; "Martin J. Dürst"; David Krauss; Martin Thomson;
>>> Richard Wheeldon (rwheeldo); HTTP Working Group
>>> *Subject:* Re: Header Size? Was: Our Schedule
>>>
>>>
>>>
>>> Greg,
>>>
>>>
>>>
>>> I don't see shared state like that working for proxies.
>>>
>>>
>>>
>>>
>>>
>>> On May 30, 2014, at 4:22 AM, Greg Wilkins <gregw@intalio.com> wrote:
>>>
>>>
>>>
>>>    Matthew,
>>>
>>> Firstly, I'm sure there are forms of header compression that can have a
>>> shared state table without being so highly order dependent.  The problem
>>> with HPACK is that every field in every frame of every stream can mutate
>>> the shared state table.  This gives us a really hard serialisation
>>> problem: setting the table size to zero does not help, as you still have
>>> to prevent interleaving so you can decode in order and watch for an
>>> increase in the table size.
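>>>
>>> To illustrate the ordering constraint with a toy sketch (hypothetical,
>>> nothing like HPACK's actual wire format): every block may mutate the
>>> shared table, so no block can be decoded before the blocks encoded
>>> ahead of it have been applied.
>>>
>>> shared_table = []
>>>
>>> def decode_block(block):
>>>     headers = []
>>>     for op, arg in block:
>>>         if op == "index":          # read the shared state
>>>             headers.append(shared_table[arg])
>>>         elif op == "literal_add":  # mutate the shared state
>>>             shared_table.insert(0, arg)
>>>             headers.append(arg)
>>>     return headers
>>>
>>> # Block 2 may index an entry that block 1 inserted, so a decoder cannot
>>> # process block 2 (or any interleaved fragment of it) until block 1 has
>>> # been fully applied.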
>>>
>>> I think we could get a lot of benefit from a compression scheme that
>>> uses header frames transmitted on channel 0 to set the shared state.  All
>>> the user-agent guff could then be sent once and only once, and all the
>>> stream header decoding would then be read-only (and thus could happen in
>>> any order).  If you wanted to put cookies into the shared table, then
>>> there are still some ordering issues, but not as hard as the current ones
>>> and with lots of potential solutions (e.g. table versions or multiple
>>> table ids, etc.).
>>>
>>> As to Martin's idea: that would help in some respects if the subsequent
>>> frames are excluded from HPACK.  This would let us allow interleaving.
>>> However, it does not prevent the server from needing to hold onto large
>>> header tables during request handling.  So we should still include headers
>>> in the flow control, so the receiver can say "stop already!" when large
>>> headers are being sent.
>>>
>>> cheers
>>>
>>>
>>>
>>>
>>>
>>> On 30 May 2014 09:34, Matthew Kerwin <matthew@kerwin.net.au> wrote:
>>>
>>> On 30 May 2014 16:51, "Martin J. Dürst" <duerst@it.aoyama.ac.jp> wrote:
>>>
>>> This is just a thought:
>>>
>>> Would it be possible to allow arbitrarily large amounts of header data
>>> (either via continuations or via multiple header frames), but to limit
>>> compression to a single header frame?
>>>
>>> While in general, there is a stronger need to compress larger stuff,
>>> such a solution could come with various benefits:
>>> - Simplified compression (less/no state)
>>> - Keep the main benefit (quick start)
>>> - Penalty against large amounts of header data
>>>   (because that's not the way to do things anyway)
>>>
>>> Regards,   Martin.
>>>
>>>
>>>
>>>
>>>
>>> If you send SETTINGS_HEADER_TABLE_SIZE=0 and a HEADERS with [0x30,0x20]
>>> in the first block fragment you effectively disable the context, and are
>>> left with only Huffman coding (which has a per-frame context).
>>>
>>>
>>>
>>> As Roberto reminded me yesterday, the thing about a header block is that
>>> when it ends, you get everything else in the reference set (carried over
>>> from the previous header block). The biggest gain in HPACK compression
>>> comes from not actually sending identical headers again and again, which
>>> means not only sharing context between multiple frames, but between frames
>>> from multiple streams. I don't know if, in practice, any per-frame
>>> compression scheme would come close to HPACK's connection-based delta
>>> compression, and that would be a big hit to the protocol's appeal.
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>>   Matthew Kerwin
>>>   http://matthew.kerwin.net.au/
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Greg Wilkins <gregw@intalio.com>
>>> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
>>> scales
>>> http://www.webtide.com  advice and support for jetty and cometd.
>>>
>>>
>>>
>>> _________________________________________________________
>>> Michael Sweet, Senior Printing System Engineer, PWG Chair
>>>
>>>
>>>
>>>
>>>  _________________________________________________________
>>> Michael Sweet, Senior Printing System Engineer, PWG Chair
>>>
>>>
>>
>
>
> --
> Greg Wilkins <gregw@intalio.com>
> http://eclipse.org/jetty HTTP, SPDY, Websocket server and client that
> scales
> http://www.webtide.com  advice and support for jetty and cometd.
>

Received on Friday, 30 May 2014 22:07:15 UTC