
Re: multiplexing -- don't do it

From: Roberto Peon <grmocg@gmail.com>
Date: Thu, 5 Apr 2012 01:30:08 -0700
Message-ID: <CAP+FsNfRagOhv0NH3vuUX-OB2GeRVMhX+8m9zFork+kwJz_XpA@mail.gmail.com>
To: Peter L <bizzbyster@gmail.com>
Cc: Mike Belshe <mike@belshe.com>, Patrick McManus <pmcmanus@mozilla.com>, Poul-Henning Kamp <phk@phk.freebsd.dk>, Willy Tarreau <w@1wt.eu>, "William Chan (陈智昌)" <willchan@chromium.org>, "Roy T. Fielding" <fielding@gbiv.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
On Thu, Apr 5, 2012 at 1:07 AM, Peter L <bizzbyster@gmail.com> wrote:

> Full disclosure: I work on a web accelerator for satellite that is a
> network intermediary that dynamically prefetches the objects a page will
> need prior to the browser requesting them. With this technology, satellite
> page load times go from 20+ seconds to 6 on average, so it's the difference
> between usable and unusable. If the web goes primarily HTTPS or in general
> becomes opaque to intermediaries, we'll have to come up with a MITM scheme
> (for HTTPS) acceptable to our users or invent other ways to achieve
> prefetching.
>


The (very reasonable) goal is reduced and reasonable latency for your
users, correct?

I'd propose that you could install an explicit proxy, which would enable
you to do any and all of those modifications, subject to the policy demands
of the user and/or site.

Additionally, if server push or an equivalent were available, the user should
be able to have the page served to them in the connection setup time plus
the SSL handshake, plus bytes/bandwidth modulo slow start (which can also be
mitigated with a SPDY-like SETTINGS frame indicating to the server the last
CWND that it used to you).
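For concreteness, SPDY/2 defines a SETTINGS_CURRENT_CWND setting (id 5) and a per-entry persist flag for exactly this purpose. A minimal sketch of such an entry follows; the byte layout here is illustrative, not byte-exact to the SPDY wire format:

```python
import struct

# SPDY/2 setting id for the server's last congestion window toward this
# client; the persist flag asks the client to store the value and replay
# it on the next connection, letting the server skip part of slow start.
SETTINGS_CURRENT_CWND = 5
FLAG_PERSIST_VALUE = 0x1

def settings_entry(setting_id, flags, value):
    # Illustrative layout: 8-bit flags + 24-bit id in one word, then a
    # 32-bit value, both network byte order.
    return struct.pack("!II", (flags << 24) | setting_id, value)

entry = settings_entry(SETTINGS_CURRENT_CWND, FLAG_PERSIST_VALUE, 32)
assert len(entry) == 8
```

On the next connection the client would replay this entry, so the server can open its window near the previous CWND instead of starting from the initial window.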

So... let's make it so that one can utilize a proxy to do whatever
interesting stuff a proxy would like to do to aid the user (but
explicitly), and let's also attempt to make it so that these aren't
necessary to achieve optimal, or close to optimal, results! These all seem
doable...

-=R



>
> On Apr 4, 2012, at 6:17 PM, Mike Belshe <mike@belshe.com> wrote:
>
>
>
> On Wed, Apr 4, 2012 at 2:09 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>
>> Might as well list all my complaints about SPDY in one spot:
>>
>>    1. Inter-object dependency due to using a history-based compression
>>    scheme, where you need to receive every object in the order it was sent
>>    to decode the last header. This guarantees HOL blocking even if the
>>    transport supports out-of-order messages.
>>
>> What benefit are you getting by removing it which justifies a slower
> mobile experience for users?  All of the proposals for HTTP/2.0 used
> stateful, order dependent compressors.
>
>
> In the downstream direction, there is little that is dynamic in HTTP
> response headers, so a simple binary scheme would get most of the benefit
> without the order dependence.
>
> Requests have cookies and URLs, which are relatively big and super
> friendly to stateful compression.
>
> That said, have you measured the performance benefit of header compression
> alone? I mean, how much slower is SPDY over mobile networks without header
> compression?
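The order dependence at issue is easy to demonstrate with zlib's streaming API, the same DEFLATE machinery behind SPDY's header compression. A minimal sketch (header strings are made up for illustration):

```python
import zlib

req1 = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\nCookie: sid=abc\r\n\r\n"
req2 = b"GET /style.css HTTP/1.1\r\nHost: example.com\r\nCookie: sid=abc\r\n\r\n"

# One shared compression context for the whole session:
c = zlib.compressobj()
frame1 = c.compress(req1) + c.flush(zlib.Z_SYNC_FLUSH)
frame2 = c.compress(req2) + c.flush(zlib.Z_SYNC_FLUSH)

# In-order decoding works: frame2's back-references resolve against
# frame1's bytes, which are already in the decompressor's history window.
d = zlib.decompressobj()
assert d.decompress(frame1) == req1
assert d.decompress(frame2) == req2

# Out-of-order decoding does not: frame2 alone is missing the stream
# header and the history its back-references point into.
d2 = zlib.decompressobj()
try:
    alone = d2.decompress(frame2)
except zlib.error:
    alone = None
assert alone != req2
```

So even a transport that delivered frame2 first would have to hold it until frame1 arrived, which is the HOL-blocking complaint in item 1.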
>
>
>
>>
>>    2. SPDY needs to allow headers to be readable by intermediaries. I
>>    don't mind the mostly binary approach proposed in Willy's document, as
>>    parsers can be developed for Wireshark and similar tools.
>>
>> SPDY does not prevent intermediaries from reading headers, but I don't
> know why you think this is a requirement.  I think you're making proxy
> scalability more important than user latency, and that is not the tradeoff
> I would make.  Speed affects all users; proxies affect a small fraction of
> users.   The argument for proxy scalability is also weak and unproven -
> compression and decompression are trivial, and not even on the radar for the
> Google servers as a problem.  The CPU savings of the 6x reduction in the
> number of connections completely eclipses the tiny CPU use of the compressor.
>
>
> As per above, intermediaries do more than just improve scalability,
> including caching and acceleration.
>
>
>
>
>>
>>    3. HTTPS required -- to me that's a showstopper. Have to do this in
>>    two steps: deploy muxing, which removes the main performance penalty of
>>    HTTPS, and then go for security.
>>
>>
> SPDY doesn't require HTTPS as specified.  So you should be happy with its
> current form.  However, I think it should require HTTPS, as there is
> tangible user benefit to securing the web today.   But I've ranted on this
> topic enough elsewhere :-)
>
> Mike
>
>
>
>
>
>>
>> Other than those, I think it's great.
>>
>> Peter
>>
>> On Wed, Apr 4, 2012 at 4:58 PM, Peter Lepeska <bizzbyster@gmail.com> wrote:
>>
>>> As long as SPDY is sent over TCP, it also suffers from HOL problems,
>>> just not as badly as pipelining does.
>>>
>>> I think SPDY (or whatever the HTTP 2.0 muxing protocol is) should be
>>> framed in such a way that, if it runs over a protocol like SCTP that
>>> solves the HOL problems, we can take advantage of it. Due to gzip
>>> compression of headers, even if the transport allowed me to grab messages
>>> out of order, I'd still have to wait for all prior packets in order to
>>> decode the HTTP headers.
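One way a framing layer could sidestep this (not what SPDY specifies) is a compression context per stream, so frames from different streams decode independently. A hypothetical sketch, with a made-up frame layout of a 4-byte stream id plus a 4-byte length:

```python
import struct
import zlib

# One compression context *per stream* instead of per session, so a
# frame's back-references never cross into another stream's bytes.
encoders = {}
decoders = {}

def encode(stream_id, headers):
    c = encoders.setdefault(stream_id, zlib.compressobj())
    body = c.compress(headers) + c.flush(zlib.Z_SYNC_FLUSH)
    return struct.pack("!II", stream_id, len(body)) + body

def decode(frame):
    stream_id, length = struct.unpack("!II", frame[:8])
    d = decoders.setdefault(stream_id, zlib.decompressobj())
    return stream_id, d.decompress(frame[8:8 + length])

f1 = encode(1, b"GET /a HTTP/1.1\r\n\r\n")
f2 = encode(3, b"GET /b HTTP/1.1\r\n\r\n")

# Frames from different streams decode in either order:
assert decode(f2) == (3, b"GET /b HTTP/1.1\r\n\r\n")
assert decode(f1) == (1, b"GET /a HTTP/1.1\r\n\r\n")
```

The tradeoff is a smaller shared dictionary: repeated cookies and URL prefixes compress well across requests on one stream but no longer help across streams.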
>>>
>>> Peter
>>>
>>>
>>> On Wed, Apr 4, 2012 at 9:07 AM, Patrick McManus <pmcmanus@mozilla.com> wrote:
>>>
>>>> On Wed, 2012-04-04 at 07:02 +0000, Poul-Henning Kamp wrote:
>>>> > In message <20120404054903.GA13883@1wt.eu>, Willy Tarreau writes:
>>>> >
>>>> > >> I'm starting to get data back, but not in a state that I'd reliably
>>>> > >> release. That said, there are very clear indicators of
>>>> intermediaries
>>>> > >> causing problems, especially when the pipeline depth exceeds 3
>>>> requests.
>>>> >
>>>> > I always thought that the problem in HTTP/1.x is that you can never
>>>> > quite be sure if there is an unwarranted entity coming after a GET,
>>>>
>>>> It's not uncommon to have the consumer RST the whole TCP session when
>>>> asked to recv too far beyond the current request it is processing. For
>>>> some devices "too far" appears to be defined as "any new packet". I
>>>> presume some variation of this is where Willy's data point comes from.
>>>> (Often 3 uncompressed requests fit in 1 packet.)
>>>>
>>>> That class of bug sounds absurd, but it's really a pretty common
>>>> pattern. As an example: hosts that fail TLS False Start (for which I
>>>> understand, second hand, that Chrome needs to keep a blacklist) react
>>>> badly because there is TCP data queued when they are in a state where
>>>> they expect their peer to be quiet. Same pattern.
>>>>
>>>> The lesson to me is that you want to define a tight set of functionality
>>>> that is reasonably testable up front - and that's what you can depend on
>>>> widely later. Using anything beyond that demands excessive levels of
>>>> pain, complexity, and cleverness.
>>>>
>>>> (And all this pipelining talk as if it were equivalent to SPDY mux is
>>>> kind of silly. Pipelining's intrinsic HOL problems are at least as bad
>>>> an issue as the interop bugs.)
>>>>
>>>> -Patrick
>>>>
>>>>
>>>>
>>>
>>
>
Received on Thursday, 5 April 2012 08:30:42 GMT
