
Re: multiplexing -- don't do it

From: Mike Belshe <mike@belshe.com>
Date: Wed, 4 Apr 2012 15:17:23 -0700
Message-ID: <CABaLYCv40zWbgmBf3wnrgza9cE5Fhspgyu=26h7BWLTgZod9Jw@mail.gmail.com>
To: Peter Lepeska <bizzbyster@gmail.com>
Cc: Patrick McManus <pmcmanus@mozilla.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Willy Tarreau <w@1wt.eu>, "William Chan (陈智昌)" <willchan@chromium.org>, "Roy T. Fielding" <fielding@gbiv.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On Wed, Apr 4, 2012 at 2:09 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:

> Might as well list all my complaints about SPDY in one spot:
>
>    1. Inter-object dependency due to using a history-based compression
>    scheme, where you need to receive every object in the order it was sent
>    to decode the last header. This guarantees HOL blocking even if the
>    transport supports out-of-order messages.
>
What benefit are you getting by removing it which justifies a slower
mobile experience for users?  All of the proposals for HTTP/2.0 used
stateful, order-dependent compressors.
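
To make the order dependency concrete, here is a minimal Python sketch
assuming a single shared zlib context for all header blocks on a
connection, in the style of SPDY's gzip header compression (the framing
is illustrative, not SPDY's wire format):

    import zlib

    # One shared, stateful compression context for every header block
    # sent on a connection.
    compressor = zlib.compressobj()

    def compress_headers(headers: bytes) -> bytes:
        # Z_SYNC_FLUSH emits a decodable frame but keeps the history window.
        return compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)

    frame1 = compress_headers(b"GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n")
    frame2 = compress_headers(b"GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n")

    # Decoding frame2 on its own fails: the zlib stream (header plus
    # compression history) begins with frame1, so frame2 is meaningless
    # by itself.
    d = zlib.decompressobj()
    try:
        d.decompress(frame2)
    except zlib.error as e:
        print("frame2 alone:", e)

    # Feeding the frames in the order they were sent works fine.
    d = zlib.decompressobj()
    print(d.decompress(frame1))
    print(d.decompress(frame2))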


>
>    2. SPDY needs to allow headers to be readable by intermediaries. I
>    don't mind the mostly binary approach proposed in Willy's document, as
>    parsers can be developed for Wireshark and similar tools.
>
SPDY does not prevent intermediaries from reading headers, but I don't
know why you think this is a requirement.  I think you're making proxy
scalability more important than user latency, and that is not the tradeoff
I would make.  Speed affects all users; proxies affect a small fraction of
users.   The argument for proxy scalability is also weak and unproven --
compression and decompression is trivial, and not even on the radar for the
Google servers as a problem.  The CPU savings of the 6x reduction in the
number of connections completely eclipses the tiny CPU use of the compressor.
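
For a sense of scale, a rough benchmark sketch of what compressing one
plausible header block costs; the header contents and loop count are my
assumptions, not measurements from anyone's servers:

    import time
    import zlib

    # A plausible request header block; contents are illustrative.
    headers = (b"GET /index.html HTTP/1.1\r\n"
               b"Host: www.example.com\r\n"
               b"User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.19\r\n"
               b"Accept: text/html,application/xhtml+xml\r\n"
               b"Accept-Encoding: gzip,deflate\r\n"
               b"Cookie: session=0123456789abcdef\r\n\r\n")

    n = 100000
    c = zlib.compressobj()          # one context per connection, reused
    start = time.perf_counter()
    for _ in range(n):
        c.compress(headers)
        c.flush(zlib.Z_SYNC_FLUSH)
    elapsed = time.perf_counter() - start
    print("%.1f microseconds per header block" % (elapsed / n * 1e6))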



>
>    3. HTTPS required -- to me that's a showstopper. Have to do this in
>    two steps: deploy muxing, which removes the main performance penalty of
>    HTTPS, and then go for security.
>
>
SPDY doesn't require HTTPS as specified.  So you should be happy with its
current form.  However, I think it should require HTTPS, as there is
tangible user benefit to securing the web today.   But I've ranted on this
topic enough elsewhere :-)

Mike





>
> Other than those, I think it's great.
>
> Peter
>
> On Wed, Apr 4, 2012 at 4:58 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> As long as SPDY is sent over TCP, it also suffers from HOL problems, just
>> not as bad as pipelining.
>>
>> I think SPDY (or whatever the HTTP 2.0 muxing protocol is) should be
>> framed in such a way that, if running over a protocol like SCTP that
>> solves the HOL problems, we can take advantage of it. Due to gzip
>> compression of headers, even if the transport allowed me to grab messages
>> out of order, I'd still have to wait for all prior packets in order to
>> decode the HTTP headers.
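
One way to get the property Peter wants, sketched under the assumption of
a zlib context per stream rather than per connection (which is not what
SPDY specifies):

    import zlib

    # Hypothetical per-stream contexts instead of one per connection.
    def make_stream():
        return zlib.compressobj(), zlib.decompressobj()

    c1, d1 = make_stream()
    c2, d2 = make_stream()

    f1 = c1.compress(b"GET /a HTTP/1.1\r\n\r\n") + c1.flush(zlib.Z_SYNC_FLUSH)
    f2 = c2.compress(b"GET /b HTTP/1.1\r\n\r\n") + c2.flush(zlib.Z_SYNC_FLUSH)

    # Stream 2's frame arrives first (as SCTP would allow) and decodes
    # without waiting for any of stream 1's bytes.
    print(d2.decompress(f2))
    print(d1.decompress(f1))

The cost is that each stream starts with an empty history window, so the
compression ratio drops relative to a single shared context.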
>>
>> Peter
>>
>>
>> On Wed, Apr 4, 2012 at 9:07 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
>>
>>> On Wed, 2012-04-04 at 07:02 +0000, Poul-Henning Kamp wrote:
>>> > In message <20120404054903.GA13883@1wt.eu>, Willy Tarreau writes:
>>> >
>>> > >> I'm starting to get data back, but not in a state that I'd reliably
>>> > >> release. That said, there are very clear indicators of
>>> intermediaries
>>> > >> causing problems, especially when the pipeline depth exceeds 3
>>> requests.
>>> >
>>> > I always thought that the problem in HTTP/1.x is that you can never
>>> > quite be sure if there is an unwarranted entity coming after a GET,
>>>
>>> It's not uncommon to have the consumer RST the whole TCP session when
>>> asked to recv too far beyond the current request it is processing. For
>>> some devices "too far" appears to be defined as "any new packet". I
>>> presume some variation of this is where Willy's data point comes from.
>>> (Often 3 uncompressed requests fit in 1 packet.)
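
The back-of-the-envelope arithmetic behind that parenthetical, with an
assumed request size:

    # Why ~3 uncompressed pipelined requests can share one TCP packet.
    # Sizes are assumptions, not measurements from the thread.
    mss = 1460            # typical TCP payload with a 1500-byte MTU
    request_bytes = 450   # a realistic GET with UA, Accept*, Referer, cookies
    print(mss // request_bytes, "requests per packet")  # -> 3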
>>>
>>> That class of bug sounds absurd, but it's really a pretty common pattern.
>>> As an example: hosts that fail TLS False Start (for which I understand
>>> secondhand that Chrome needs to keep a blacklist) react badly because
>>> there is TCP data queued when they are in a state where they expect their
>>> peer to be quiet. Same pattern.
>>>
>>> The lesson to me is that you want to define a tight set of functionality
>>> that is reasonably testable up front - and that's what you can depend on
>>> widely later. Using anything beyond that demands excessive levels of
>>> pain, complexity, and cleverness.
>>>
>>> (And all this pipelining talk as if it were equivalent to SPDY mux is
>>> kind of silly. Pipelining's intrinsic HOL problems are at least as bad
>>> an issue as the interop bugs.)
>>>
>>> -Patrick
>>>
>>>
>>>
>>
>
Received on Wednesday, 4 April 2012 22:17:52 GMT
