
Re: Backwards compatibility

From: Mike Belshe <mike@belshe.com>
Date: Sat, 31 Mar 2012 01:46:15 +0200
Message-ID: <CABaLYCvitA0Hm82Vto6nCAON32O61AH3PCxnwgL1wTsYgb94Qg@mail.gmail.com>
To: Mark Watson <watsonm@netflix.com>
Cc: William Chan (陈智昌) <willchan@chromium.org>, "<ietf-http-wg@w3.org>" <ietf-http-wg@w3.org>
On Fri, Mar 30, 2012 at 6:53 PM, Mark Watson <watsonm@netflix.com> wrote:

>
>  On Mar 30, 2012, at 9:29 AM, William Chan (陈智昌) wrote:
>
> On Fri, Mar 30, 2012 at 6:13 PM, Mark Watson <watsonm@netflix.com> wrote:
>
>> All,
>>
>> I'd like to make a plea/request/suggestion that wherever possible new
>> features be added incrementally to HTTP1.1, in a backwards compatible way,
>> in preference to a "new protocol" approach. A "new protocol" is required
>> only if it is not technically possible (or especially awkward) to add the
>> feature in a backwards compatible way.
>>
>> The object should be to enable incremental implementation and deployment
>> on a feature-by-feature basis, rather than all-or-nothing. HTTP1.1 has been
>> rather successful and there is an immense quantity of code and systems -
>> including intermediaries of various sorts - that work well with HTTP1.1. It
>> should be possible to add features to that code and those systems without
>> forklifting substantial amounts of it. It is better if intermediaries that
>> do not support the new features cause fallback to HTTP1.1 vs the
>> alternative of just blocking the new protocol. In particular, it should not
>> cost a round trip to fall back to HTTP1.1. It is often lamented that the
>> Internet is now the "port-80 network", but at least it is that.
>>
>
>  Don't forget port 443. And I agree, it should not cost a round trip to
> fall back to HTTP/1.1.
>
>
>>
>> Many of the features contemplated as solutions to the problems of HTTP1.1
>> can be implemented this way: avoiding head-of-line blocking of responses
>> just requires a request id that is dropped by intermediaries that don't
>> support it and echoed on responses. Request and response header compression
>> can be negotiated - again with a request flag that is just dropped by
>> non-supporting intermediaries. Pipelined requests could be canceled with a
>> new method. These things are responsible for most of the speed improvements
>> of SPDY, I believe.
>>
>
>  It's unclear to me how this would work. Are you suggesting waiting for an
> HTTP request/response pair to figure out if the id gets echoed, before
> trying to multiplex requests? Or would you rely on HTTP pipelining as a
> fallback if the ids don't get echoed?
>
>
>  Send the requests (yes, pipelined). If they come back without ids, then
> they are coming back in the order they were sent. If they come back with
> ids, then that tells you which response is which.
>
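
For concreteness, here's a rough sketch of the demultiplexing that scheme
implies (Python; the X-Request-Id header name is made up for illustration,
not a concrete proposal):

    # Tag each pipelined request with an id. If the id comes back, use it
    # to demultiplex; if it was dropped along the path, fall back to
    # HTTP/1.1's in-order response guarantee.
    from collections import OrderedDict

    class ResponseDemuxer:
        def __init__(self):
            self.next_id = 0
            self.pending = OrderedDict()   # id -> callback, in send order

        def register(self, callback):
            rid = str(self.next_id)        # sent as X-Request-Id (hypothetical)
            self.next_id += 1
            self.pending[rid] = callback
            return rid

        def on_response(self, headers, body):
            rid = headers.get("x-request-id")
            if rid in self.pending:
                callback = self.pending.pop(rid)    # id echoed: any order
            else:
                # id dropped: HTTP/1.1 says responses come back in order
                _, callback = self.pending.popitem(last=False)
            callback(body)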

You can't do this until you've got confirmation that the server is going to
give you an HTTP/1.1 response.  It could come back HTTP/1.0.

So do we first have to do a 1.1 request successfully (with 1.1 response)
before we can ever attempt to do a pipelined upgrade?
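
In other words, the gate looks something like this (sketch):

    # Only enable pipelining once the first response on the connection has
    # proven an HTTP/1.1 origin; an HTTP/1.0 server may close the
    # connection or mishandle pipelined requests entirely.
    def may_pipeline(first_status_line):
        return first_status_line.startswith("HTTP/1.1")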


>
>  The former incurs a large latency cost. The latter depends very much on
> how deployable you consider pipelining to be across the overall internet.
>
>
>  It's certainly widely deployed in servers and non-transparent proxies.
> Non-supporting non-transparent proxies are easily detected. Yes, broken
> transparent proxies are a (small) problem, but you can also detect these.
>
>  I am skeptical it is sufficiently deployable and we on Chromium are
> gathering numbers to answer this question (http://crbug.com/110794).
>
>
>  Our internal figures suggest that more than 95% of users can
> successfully use pipelining. That's an average. On some ISPs the figure is
> much lower.
>

Do you have a research result to cite here?  Sounds interesting.  A 5%
failure rate is pretty high.



>
>  Also, pipelining is clearly inferior to multiplexing.
>
>
>  Yes, but perhaps in practice not by much. To render a page you need all
> the objects, so from a time-to-page-load perspective it makes no difference
> how you multiplex them, as long as the link remains fully utilized. To see
> some difference you need some notion of object importance and some metric
> for 'page loaded except for the unimportant bits'. You send the most
> important requests first. Even then it's not clear that multiplexing within
> objects will perform significantly better than object-by-object sending.
>


Don't forget that pipelining does *not* apply to all resources.  Even when
pipelining works end-to-end, browsers need to take great care not to
accidentally pipeline a critical resource behind a slow one (like a hanging
GET).  This leads to browsers doing tricks like "only pipeline images
together" or other subsets of pipelining.

But when we consider pipelining as a fallback for SPDY, this all falls
apart.  SPDY did not have these restrictions.  So now, SPDY would need to
run in some sort of degraded mode, limiting which types of requests are
pipelined, just so it can fall back to an HTTP/1.1 protocol that the server
might not support (because it could be HTTP/1.0), or that the user might
not support because he's one of the unlucky 5% (according to Mark's data)
for whom pipelining just breaks altogether.

All in all, we've now stacked three separate restrictions on the initial
set of requests, in order to work around past bugs and to support use of
the Upgrade header.

Realistically, you're going to get one request on the upgrade, and you'll
have to wait before opening up the parallel requests.  This is a
significant restriction of the Upgrade process - it requires a round trip
before the real protocol can kick into full gear.
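
Concretely, the flow is something like this (sketch over a hypothetical
conn interface; "newproto" is a placeholder token):

    # Why Upgrade costs a round trip: only the first request can go out
    # until the server's 101 (or ordinary HTTP/1.x response) comes back.
    def fetch(conn, first_url, other_urls):
        conn.send(first_url, headers={"Connection": "Upgrade",
                                      "Upgrade": "newproto"})
        status = conn.read_status()    # one full round trip
        if status == 101:
            conn.switch_protocol("newproto")
        # Only now is it safe to issue the parallel requests.
        for url in other_urls:
            conn.send(url)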

This is highly annoying, but for initial web page loads it probably won't
be a significant burden, because the browser initially only has one URL.
For page reloads, or validations, or subsequent pages on reconnect, it
will be a performance hit.




>
>
>
>> Interleaving within responses does require some kind of framing layer,
>> but I'd like to learn why anything more complex than interleaving the
>> existing chunked-transfer chunks is needed (this is also especially easy to
>> undo).
>>
>
>  Sorry, I'm not sure I understand what you mean by interleaving existing
> chunked-transfer chunks. Are these being interleaved across different
> responses? (That requires framing, right?)
>
>
>  Interleaving data from multiple responses requires some kind of framing,
> yes. Chunked transfer encoding is a kind of framing that is already
> supported by HTTP. Allowing chunks to be associated with different
> responses would be a simple change. Maybe it feels like a hack? That was
> my question: why isn't a small enhancement to the existing framing
> sufficient?
>
>
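
For what it's worth, one conceivable encoding piggybacks on the
chunk-extension syntax HTTP/1.1 already defines (a ";name=value" after the
chunk size); the "id" token here is made up:

    # Tag each chunk with the response it belongs to via a chunk
    # extension, e.g. b"5;id=3\r\nhello\r\n".
    def encode_chunk(stream_id, data):
        return b"%x;id=%d\r\n%s\r\n" % (len(data), stream_id, data)
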
Even if you could hack it into a chunk, that's a real jumbled mess.  Why
do you want to do this?  It doesn't give you backward compatibility in any
way (existing browsers won't know what to do with these nonstandard chunks
anyway); it's just a mess for the sake of a mess.

>
>
>>
>> Putting my question another way, what is the desired new feature that
>> really *requires* that we break backwards compatibility with the extremely
>> successful HTTP1.1?
>>
>
>  Multiplexing,
>
>
>  See my question above
>
>  header compression,
>
>
>  Easily negotiated: an indicator in the first request indicates that the
> client supports it. If that indicator survives to the server, the server
> can start compressing response headers right away. If the client receives a
> compressed response it can start compressing future requests on that
> connection. It's important that this indicator be one which is dropped by
> intermediaries that don't support compression.
>
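
A sketch of the negotiation you describe (the header name is made up for
illustration):

    # Hop-safe negotiation: advertise support with a header that unaware
    # intermediaries drop; start compressing our own request headers only
    # after the server proves the path by compressing a response.
    class HeaderCompression:
        def __init__(self):
            self.send_compressed = False

        def decorate_request(self, headers):
            headers["Compressed-Headers-OK"] = "1"   # hypothetical name
            return self.send_compressed

        def on_response(self, response_was_compressed):
            if response_was_compressed:
                self.send_compressed = True
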
>  prioritization.
>
>
>  I think you mean "re-prioritization". I can send requests in priority
> order - what I can't do is change that order in response to user actions.
> How big a deal is this, vs. closing the connection and re-issuing
> outstanding requests in the new order?
>

It's the difference between web pages rendering faster or slower.  Load up
100 image requests on your Twitter page, and then fetch the images before
the JS.  The page loads more slowly unless you lower the priority of the
images.  But you still don't want to add the serialization delays that
HTTP has.
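
An illustrative client-side assignment (the numeric levels are made up;
SPDY used a small fixed range of priority values):

    # Scripts and stylesheets before images, so 100 image requests don't
    # starve the JS the page needs. All requests still go out immediately;
    # priority only tells the server whose response bytes to send first.
    PRIORITY = {"document": 0, "script": 1, "stylesheet": 1, "image": 3}

    def priority_for(resource_type):
        return PRIORITY.get(resource_type, 2)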

BTW - the effects of priorities have been measured, and you're welcome to
use the existing benchmarking harness to verify for yourself that these
things are true in real code rather than just theory (see
dev.chromium.org/spdy).  I wish I had published the tests when I did this
long ago - I spent a lot of time on it.

Mike



>
>  …Mark
>
>
>
>>
>> …Mark
>>
>>
>>
>>
>
>
Received on Friday, 30 March 2012 23:46:44 GMT
