
Re: multiplexing -- don't do it

From: William Chan (陈智昌) <willchan@chromium.org>
Date: Thu, 5 Apr 2012 10:42:04 +0200
Message-ID: <CAA4WUYgWcAV1kZnY-ooSJCc7ehcNG8oLknep1au-dbhGOBN+2w@mail.gmail.com>
To: Peter L <bizzbyster@gmail.com>
Cc: Mike Belshe <mike@belshe.com>, Patrick McManus <pmcmanus@mozilla.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Willy Tarreau <w@1wt.eu>, "Roy T. Fielding" <fielding@gbiv.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On Thu, Apr 5, 2012 at 10:07 AM, Peter L <bizzbyster@gmail.com> wrote:

> Full disclosure: I work on a web accelerator for satellite that is a
> network intermediary that dynamically prefetches the objects a page will
> need prior to the browser requesting them. With this technology, satellite
> page load times go from 20+ seconds to 6 on average so it's the difference
> between usable and unusable. If the web goes primarily HTTPS or in general
> becomes opaque to intermediaries, we'll have to come up with a MITM scheme
> (for HTTPS) acceptable to our users or invent other ways to achieve
> prefetching.
>
> On Apr 4, 2012, at 6:17 PM, Mike Belshe <mike@belshe.com> wrote:
>
>
>
> On Wed, Apr 4, 2012 at 2:09 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> Might as well list all my complaints about SPDY in one spot:
>>
>>    1. Inter-object dependency due to using a history-based compression
>>    scheme, where you need to receive every object in the order it was sent
>>    to decode the last header. This guarantees HOL blocking even if the
>>    transport supports out-of-order messages.
>>
> What benefit are you getting by removing it that justifies a slower
> mobile experience for users?  All of the proposals for HTTP/2.0 used
> stateful, order-dependent compressors.
>
>
> In the downstream direction, there is little that is dynamic in HTTP
> response headers, so a simple binary scheme would get most of the benefit
> without the order dependence.
>

> Requests have cookies and URLs, which are relatively big and super
> friendly to stateful compression.
>

My understanding is that most clients, especially mobile, have less upload
bandwidth than download.
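For intuition, here is a rough sketch of why repeated cookies and URLs are so friendly to a shared, stateful compressor. The header strings, cookie value, and request count below are invented for illustration; they are not from SPDY's spec or its compression dictionary.

```python
import zlib

# Hypothetical requests for one page: same cookie, similar URLs, as is
# typical when a browser fetches many resources from one origin.
requests = [
    ("GET /static/img/icon%d.png HTTP/1.1\r\n"
     "Host: example.com\r\n"
     "User-Agent: Mozilla/5.0 (illustrative)\r\n"
     "Cookie: session=abcdef0123456789; prefs=dark\r\n\r\n") % i
    for i in range(20)
]

# Stateless: each request is compressed independently.
stateless = sum(len(zlib.compress(r.encode())) for r in requests)

# Stateful: one shared compressor, so the repeated cookie and header
# names in later requests become short back-references.
comp = zlib.compressobj()
stateful = sum(len(comp.compress(r.encode())) +
               len(comp.flush(zlib.Z_SYNC_FLUSH)) for r in requests)

print(stateless, stateful)  # stateful total is much smaller
```

The tradeoff is exactly the one under discussion: the shared history that makes the stateful total small is also what forces in-order decoding.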

>

>
> That said, have you measured the performance benefit of header compression
> alone? I mean, how much slower is SPDY over mobile networks without header
> compression?
>

http://dev.chromium.org/spdy/spdy-whitepaper says "Header compression
resulted in an ~88% reduction in the size of request headers and an ~85%
reduction in the size of response headers. On the lower-bandwidth DSL link,
in which the upload link is only 375 Kbps, request header compression in
particular led to significant page load time improvements for certain
sites (i.e. those that issued a large number of resource requests). We
found a reduction of 45 - 1142 ms in page load time simply due to header
compression."

>
>
>>
>>    2. SPDY needs to allow headers to be readable by intermediaries. I
>>    don't mind the mostly binary approach proposed in Willy's document, as
>>    parsers can be developed for Wireshark and similar tools.
>>
> SPDY does not prevent intermediaries from reading headers, but I don't
> know why you think this is a requirement.  I think you're making proxy
> scalability more important than user latency, and that is not the tradeoff
> I would make.  Speed affects all users; proxies affect a small fraction of
> users.  The argument for proxy scalability is also weak and unproven --
> compression and decompression is trivial, and not even on the radar for the
> Google servers as a problem.  The CPU savings of the 6x reduction in number
> of connections completely eclipses the tiny CPU use of the compressor.
>
>
> As per above, intermediaries do more than just improve scalability,
> including caching and acceleration.
>

As previously discussed, make them explicit then. Why do they need to be
transparent?


>
>
>
>
>>
>>    3. HTTPS required -- to me that's a showstopper. We have to do this
>>    in two steps: deploy muxing, which removes the main performance penalty
>>    of HTTPS, and then go for security.
>>
>>
> SPDY doesn't require HTTPS as specified.  So you should be happy with its
> current form.  However, I think it should require HTTPS, as there is
> tangible user benefit to securing the web today.   But I've ranted on this
> topic enough elsewhere :-)
>
>
+1 on big user benefit. What's the showstopper here even if TLS is
required? The handshake or the lack of transparency to intermediaries (use
explicit intermediaries then)?


>
> Mike
>>
>> Other than those, I think it's great.
>>
>> Peter
>>
>> On Wed, Apr 4, 2012 at 4:58 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>
>>> As long as SPDY is sent over TCP, it also suffers from HOL problems,
>>> just not as badly as pipelining.
>>>
>>> I think SPDY (or whatever the HTTP 2.0 muxing protocol is) should be
>>> framed in such a way that, if it runs over a protocol like SCTP that
>>> solves the HOL problems, we can take advantage of that. Due to gzip
>>> compression of headers, even if the transport allowed me to grab messages
>>> out of order, I'd still have to wait for all prior packets in order to
>>> decode the HTTP headers.
>>>
>>> Peter
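[The ordering constraint Peter describes can be sketched directly with zlib; the header strings below are illustrative, and this is not SPDY's actual framing or dictionary.]

```python
import zlib

# Two "header frames" share one stateful compression context, as
# SPDY's gzip header stream does.
comp = zlib.compressobj()
frame1 = comp.compress(b"GET /a HTTP/1.1\r\nCookie: s=0123456789abcdef\r\n\r\n")
frame1 += comp.flush(zlib.Z_SYNC_FLUSH)
frame2 = comp.compress(b"GET /b HTTP/1.1\r\nCookie: s=0123456789abcdef\r\n\r\n")
frame2 += comp.flush(zlib.Z_SYNC_FLUSH)

# In order, both frames decode fine.
dec = zlib.decompressobj()
first = dec.decompress(frame1)
second = dec.decompress(frame2)
assert b"GET /a" in first and b"GET /b" in second

# Out of order, frame2 alone is undecodable: its back-references point
# into history (frame1) this fresh decompressor has never seen.
try:
    zlib.decompressobj().decompress(frame2)
    alone_ok = True   # would only "succeed" by producing garbage
except zlib.error:
    alone_ok = False
print(alone_ok)
```

So even on an out-of-order transport, a receiver must buffer and apply header frames in stream order, which is the HOL dependency being objected to.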
>>>
>>>
>>> On Wed, Apr 4, 2012 at 9:07 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
>>>
>>>> On Wed, 2012-04-04 at 07:02 +0000, Poul-Henning Kamp wrote:
>>>> > In message <20120404054903.GA13883@1wt.eu>, Willy Tarreau writes:
>>>> >
>>>> > >> I'm starting to get data back, but not in a state that I'd reliably
>>>> > >> release. That said, there are very clear indicators of intermediaries
>>>> > >> causing problems, especially when the pipeline depth exceeds 3 requests.
>>>> >
>>>> > I always thought that the problem in HTTP/1.x is that you can never
>>>> > quite be sure if there is an un-warranted entity coming after a GET,
>>>>
>>>> It's not uncommon to have the consumer RST the whole TCP session when
>>>> asked to recv too far beyond the current request it is processing. For
>>>> some devices, "too far" appears to be defined as "any new packet". I
>>>> presume some variation of this is where Willy's data point comes from.
>>>> (Often 3 uncompressed requests fit in 1 packet.)
>>>>
>>>> That class of bug sounds absurd, but it's really a pretty common pattern.
>>>> As an example: hosts that fail TLS False Start (for which I understand
>>>> second hand that Chrome needs to keep a blacklist) react badly because
>>>> there is TCP data queued when they are in a state where they expect their
>>>> peer to be quiet. Same pattern.
>>>>
>>>> The lesson to me is that you want to define a tight set of functionality
>>>> that is reasonably testable up front - and that's what you can depend
>>>> widely on later. Using anything beyond that demands excessive levels of
>>>> pain, complexity, and cleverness.
>>>>
>>>> (And all this pipelining talk as if it were equivalent to SPDY mux is
>>>> kind of silly. Pipelining's intrinsic HOL problems are at least as bad
>>>> an issue as the interop bugs.)
>>>>
>>>> -Patrick
>>>>
>>>>
>>>>
>>>
>>
>
Received on Thursday, 5 April 2012 08:42:39 GMT
