Re: HTTP router point-of-view concerns

A fair bit of this quantitative analysis was published with the SPDY
whitepapers.

Yes, packets matter.
Yes, RTT matters most.
The number of packets is highly correlated with the number of bytes on the wire.
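
As a rough back-of-the-envelope sketch (the 1460-byte segment payload,
the initial congestion window of 10 segments, and the ~2 KB of headers
per request below are all assumptions, not measurements), here is how
header bytes translate into packets and, once the first flight is
exhausted, into extra round trips during slow start:

    import math

    MSS = 1460          # assumed TCP segment payload, in bytes
    INIT_CWND = 10      # assumed initial congestion window, in segments

    def packets(nbytes):
        # Packets needed to carry nbytes of header data.
        return math.ceil(nbytes / MSS)

    def extra_rtts(nbytes):
        # Lower bound on round trips beyond the first flight, assuming the
        # window doubles each RTT and ignoring losses and ACK pacing.
        need, sent, window, rtts = packets(nbytes), 0, INIT_CWND, 0
        while sent + window < need:
            sent += window
            window *= 2
            rtts += 1
        return rtts

    for nbytes in (2_000, 20_000, 60_000):  # ~2 KB of headers x 1, 10, 30 requests
        print(nbytes, "bytes ->", packets(nbytes), "packets,",
              extra_rtts(nbytes), "extra RTT(s)")

Cutting bytes cuts the packet count, and once a burst of requests no
longer fits in the initial window it also cuts round trips, so the two
metrics point in the same direction.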

The encoders/compressors here were developed because:
1) any stream compressor is subject to the CRIME attack,
2) gzip uses more memory/CPU than these more specific schemes, and
3) they give intermediaries much more control over the size and cost of
the compression state (see the sketch below).
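
To illustrate 1) and 3), here is a minimal toy sketch of an indexing-style
header encoder. This is only an illustration, not the draft encoding: each
field is either a reference into a bounded table or a literal, there is no
shared compression window across fields (so a secret cookie value is never
compressed against attacker-chosen bytes), and the receiver can cap the
memory via max_table_bytes (a made-up parameter name here).

    # Toy illustration only; not the actual draft encoding.
    class ToyHeaderEncoder:
        def __init__(self, max_table_bytes=4096):
            self.max_table_bytes = max_table_bytes   # receiver-advertised cap
            self.table = []                          # list of (name, value)

        def _table_size(self):
            return sum(len(n) + len(v) for n, v in self.table)

        def encode(self, headers):
            ops = []
            for name, value in headers:
                if (name, value) in self.table:
                    # Repeated field costs only a small table reference.
                    ops.append(("indexed", self.table.index((name, value))))
                else:
                    # New field is sent literally and may be added to the table.
                    add = len(name) + len(value) <= self.max_table_bytes
                    ops.append(("literal", name, value, add))
                    if add:
                        self.table.append((name, value))
                        while self._table_size() > self.max_table_bytes:
                            self.table.pop(0)        # evict oldest entry
            return ops

    enc = ToyHeaderEncoder(max_table_bytes=4096)
    req = [(":method", "GET"), ("user-agent", "example-ua/1.0"),
           ("cookie", "session=...")]
    first = enc.encode(req)    # mostly literals
    second = enc.encode(req)   # entirely ("indexed", i) references

With max_table_bytes forced to 0, every field stays a literal, which is
the "compression context size = 0" case discussed further down the thread.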

The upstream path is often very limited.
If we want server push or similar mechanisms to be competitive with
inlining, the cost of that metadata needs to be low.
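
For a sense of scale (all numbers below are assumptions for illustration,
not measurements), uncompressed request metadata alone can eat seconds of
a slow uplink:

    HEADER_BYTES = 1500       # assumed uncompressed request headers per request
    REQUESTS = 80             # assumed requests for one page load
    UPLINK_BPS = 256_000      # assumed 256 kbit/s upstream link

    upstream_bytes = HEADER_BYTES * REQUESTS
    seconds = upstream_bytes * 8 / UPLINK_BPS
    print(f"{upstream_bytes} bytes of request metadata ~ {seconds:.2f} s of uplink")
    # -> 120000 bytes of request metadata ~ 3.75 s of uplink

If repeated headers compress down to a few bytes each, that cost shrinks
to a small fraction of a second.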
-=R


On Thu, Jul 11, 2013 at 12:51 PM, Sam Pullara <spullara@gmail.com> wrote:

> It would be great to have a quantitative analysis of the benefit we can
> expect to get on various types of links and header sets so we could compare
> various implementations. I'm unconvinced these solutions are much better
> for real requests than gzip with an initial dictionary. Also, isn't bytes
> on the wire the wrong metric? Aren't these slow links much more sensitive
> to the number of packets / round trips?
>
> Sam
>
> On Jul 11, 2013, at 12:37 PM, Roberto Peon <grmocg@gmail.com> wrote:
>
> If one doesn't care about the number of bytes on the wire, or about
> user-perceived latency, then obviously compression is a waste.
> If one does care, then, especially on slower links, header compression
> does a great deal to reduce latency as the HTTP metadata eats up a
> significant fraction of available bandwidth on those links.
>
> -=R
>
>
> On Thu, Jul 11, 2013 at 10:21 AM, Sam Pullara <spullara@gmail.com> wrote:
>
>> How sure are we that the entire idea of header compression isn't a bad
>> idea? I implemented something similar in the WebLogic T3 protocol
>> (BubblingAbbrevTable, probably still in there) and it was mostly just a
>> pain. If I were to go back I would just use gzip with some agreed upon seed
>> dictionary. Thought I would bring this up since it seems like it is a very
>> controversial feature to begin with.
>>
>> Sam
>>
>> On Jul 11, 2013, at 10:14 AM, James M Snell <jasnell@gmail.com> wrote:
>>
>> > Yes, the ability to set compression context size to 0 is very useful.
>> > My fears around this area are:
>> >
>> > 1. In order to achieve maximum throughput, Intermediaries may opt to
>> > *always* set compression context to 0, forcing the headers to always
>> > be passed as Literals, killing the utility of having the header
>> > compression mechanism there in the first place.
>> >
>> > 2. The assumption of a non-zero default compression context size when
>> > the connection is established opens a race condition that a malicious
>> > sender could exploit in a denial of service attack. Yes, the receiver
>> > could opt to terminate the connection once it detects bad behavior,
>> > but there is still a potential window of time there where the receiver
>> > could be forced to do significant additional work.
>> >
>> >  (This is particularly bad given that header continuations are
>> unbounded.)
>> >
>> > 3. Setting the compression context size to 0 does not stop the sender
>> > from sending the Indexed Literal instructions anyway. The receiving
>> > endpoint would still be required to process those instructions even if
>> > the data is not actually being indexed, causing CPU cycles to be
>> > consumed. For any individual block of headers it may not be a
>> > significant load, but it's something that needs to be addressed.
>> >
>> >  (This can be fixed in the spec by stating that any attempt to Index
>> > any individual (name,value) whose size is greater than the available
>> > header table size results in a Compression Error. Making this change
>> > would mean that when Compression Context size is 0, the only operation
>> > that would not result in an error is Literal without Indexing. This
>> > was discussed on the list but as far as I can tell it's not yet
>> > captured in the spec).
>> >
>> > 4. The fact that header continuations can be unbounded is deeply
>> > troubling, especially given that the endpoint is required to buffer
>> > and process the complete header block (well.. that's only half true,
>> > the encoding does allow for incremental processing of the HEADERS
>> > frame payloads but the spec requires that the complete header block is
>> > always processed). Sure, the recipient is free to terminate the
>> > connection as soon as it detects bad behavior, but the sender could
>> > end up forcing the recipient to do a significant amount of extra
>> > processing with a never ending sequence of HEADERS frames. Smart
>> > implementations will know how to deal with this, yes, but overall it
>> > adds to the already growing list of "New Complex Things" that an
>> > HTTP/2 implementer needs to know about.
>> >
>> >  (In the implementation I've done, I provide a configuration
>> > parameter that allows a developer to cap the number of continuations
>> > and the total size of the header block.)
>> >
>> > I know that we're in "implementation" phase right now and that
>> > everyone is busy getting their code ready for testing in August, but
>> > after updating my implementation to the latest version of the draft,
>> > my concerns with regards to stateful header compression definitely
>> > remain.
>> >
>> > On Thu, Jul 11, 2013 at 9:36 AM, Martin Thomson
>> > <martin.thomson@gmail.com> wrote:
>> >> On 10 July 2013 21:20, Amos Jeffries <squid3@treenet.co.nz> wrote:
>> >>> It seems not to be negotiable from the recipient's side.
>> >>
>> >> Compression context size = 0 is entirely negotiable from the recipient
>> >> end, with a small wrinkle that I know some folks are working on:
>> >> a client can start using a default compression context size
>> >> prior to learning that a server has no space (substitute intermediary
>> >> as appropriate there).
>> >>
>> >
>>
>>
>>
>
>

Received on Thursday, 11 July 2013 19:57:23 UTC