
Re: disabling header compression

From: Peter Lepeska <bizzbyster@gmail.com>
Date: Fri, 13 Dec 2013 15:46:11 -0500
Message-ID: <CANmPAYFpSqAP53yJRNZcXCQPHHtw7GeOpZeiL4z-ZYnNnuMMzw@mail.gmail.com>
To: James M Snell <jasnell@gmail.com>
Cc: Martin Thomson <martin.thomson@gmail.com>, Patrick McManus <pmcmanus@mozilla.com>, HTTP Working Group <ietf-http-wg@w3.org>
Correction: Domain sharding specifically increased the impact of DNS
lookups on page load times.

On Fri, Dec 13, 2013 at 3:44 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
> Patrick said: "Now clients send a burst of 30 parallel requests in the
> same cwnd on the first rtt (the real value of the compression)"
>
> I agree. This is actually the most compelling reason to do header
> compression -- to send as many GET requests as possible in a single
> cwnd. My job is to make the web faster, so I'm all for performance
> gains, which this certainly is; but since we're designing a protocol,
> it does feel a little hackish. It reminds me of domain sharding,
> which was a great idea until everyone realized that it increased the
> impact of TCP slow start. I wonder if instead we should be pushing
> for increasing initcwnd on more than just server operating systems,
> or using multiple TCP connections for our HTTP/2 sessions, or making
> the protocol itself leaner in the case where multiple GETs are issued
> to the same web server. Anyway, adding header compression primarily
> for what is essentially one narrow (and possibly short-term) case of
> a bad fit between HTTP and TCP feels like a potential detour,
> especially since trading CPU for lower bandwidth usage makes less
> and less sense as bandwidth gets cheaper and more abundant.
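>
> To make the cwnd point concrete, here's a rough back-of-the-envelope
> sketch in Python (all of the numbers below are my own assumptions,
> not measurements from anyone's implementation):
>
>   # How many GETs fit in the first congestion window, with and
>   # without header compression? Illustrative numbers only.
>   INITCWND_SEGMENTS = 10   # assumed initial window (RFC 6928 value)
>   MSS_BYTES = 1460         # assumed path MSS
>   UNCOMPRESSED_REQ = 700   # assumed GET with cookies/UA, uncompressed
>   COMPRESSED_REQ = 60      # assumed size after header compression
>
>   first_window = INITCWND_SEGMENTS * MSS_BYTES
>   print(first_window // UNCOMPRESSED_REQ)  # ~20 requests fit
>   print(first_window // COMPRESSED_REQ)    # ~240 requests fit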
>
> On an unrelated note, I appreciate that adding an enable/disable
> switch to SETTINGS is complicated by the fact that the browser sends
> GETs before receiving the server's SETTINGS frame, and I would not
> want the browser to wait. After thinking about it, it seems that what
> Roberto suggested is basically the best we can do if this is not part
> of protocol negotiation and we don't want the browser to wait:
>
> "If you don't want to handle compression you:
> 1) Send a SETTINGS frame requesting that the receive-side context be
> sized to zero.
> 2) RST all streams that would store state in the compression context
> until you receive the acknowledgement that this has occurred from the
> remote side.
> 3) Proceed normally to handle stuff with zero context size."
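>
> For my own understanding, step 1 would look roughly like this on the
> wire. This is only a sketch in Python; the frame type code, setting
> identifier, and field widths below are my reading of the framing
> layout and may not match every draft revision:
>
>   import struct
>
>   SETTINGS_FRAME_TYPE = 0x4         # assumed SETTINGS type code
>   SETTINGS_HEADER_TABLE_SIZE = 0x1  # assumed setting identifier
>
>   # 6-byte setting entry: identifier = header table size, value = 0
>   payload = struct.pack("!HI", SETTINGS_HEADER_TABLE_SIZE, 0)
>   # frame header: 24-bit length, type, flags = 0, stream id = 0
>   length24 = struct.pack("!I", len(payload))[1:]
>   header = length24 + struct.pack("!BBI", SETTINGS_FRAME_TYPE, 0, 0)
>   frame = header + payload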
>
> I'm fine with this approach, though it's a little obscure. It'd be
> nice if there were simply a "don't compress" button, but I get why
> things are the way they are now.
>
> Thanks,
>
> Peter
>
> On Fri, Dec 13, 2013 at 1:48 PM, James M Snell <jasnell@gmail.com> wrote:
>> Personally, I feel that if we have header compression in the protocol,
>> it shouldn't be optional. However, it does need to be recognized that
>> hpack is a brand new mechanism that *does* add significant new
>> complexity to the protocol, *does* drive up implementation cost, *is*
>> still largely unproven (saying "trust us our security guys say it's
>> ok" doesn't really count as "proof"), and has so far only been
>> implemented by a handful of people who appear to view the use of HTTP
>> through very narrow, Web-browser-centric glasses. You cannot sweep
>> these issues under the rug by shrugging and saying "trust us". Greater
>> care ought to be taken when adopting such significant new features
>> (and requirements). It would be interesting to get a gauge of just how
>> much consensus there is in the WG for adopting hpack as *the* header
>> compression mechanism for http/2.
>>
>> On the question of adoption, let me pose this: what benefits,
>> if any, does adoption of HTTP/2 offer to developers of HTTP-based
>> RESTful APIs? What significant problems does HTTP/2 address that would
>> justify the implementation costs? (Or is this another, "well they can
>> just keep using HTTP/1" answer?)
>>
>> - James
>>
>>
>> On Fri, Dec 13, 2013 at 10:28 AM, Martin Thomson
>> <martin.thomson@gmail.com> wrote:
>>> On 13 December 2013 08:45, Patrick McManus <pmcmanus@mozilla.com> wrote:
>>>> this is all well trodden territory. HTTP/1 has taught us the folly of
>>>> optional features. (gzip requests, trailers, etc..)
>>>>
>>>> Figure out what is important, define it, and then use it widely. HTTP/2 has
>>>> done a pretty decent job of that so far with really just one significant
>>>> exception (and that has a decent reason (flow control)).
>>>
>>> I think that there's a simple out for someone who is somehow unwilling
>>> to implement a decompressor: set the context to zero and send
>>> RST_STREAM on anything that relies on the header table.  That will
>>> work perfectly well at the cost of a round trip.
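>>>
>>> Roughly, in code (a sketch only; the type and error-code values are
>>> from memory and might not match the latest draft):
>>>
>>>   import struct
>>>
>>>   RST_STREAM_TYPE = 0x3  # assumed RST_STREAM type code
>>>   REFUSED_STREAM = 0x7   # assumed error code
>>>
>>>   # Until the zero-size context is acknowledged, refuse any stream
>>>   # whose headers would have touched the header table.
>>>   def refuse(stream_id):
>>>       payload = struct.pack("!I", REFUSED_STREAM)
>>>       head = struct.pack("!BBI", RST_STREAM_TYPE, 0, stream_id)
>>>       return struct.pack("!I", len(payload))[1:] + head + payload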
>>>
>>> I'd rather that those implementations take that hit than every
>>> implementation.  As Patrick says: game it out and you will see that
>>> making it optional creates some perverse incentives.
>>>
>>> (We made push optional, but that's because we don't have the same
>>> clear indication that this is valuable.  Even there, the decision
>>> wasn't certain.)
>>>
Received on Friday, 13 December 2013 20:46:38 UTC
