Re: Backwards compatibility

On Fri, Mar 30, 2012 at 6:13 PM, Mark Watson <watsonm@netflix.com> wrote:

> All,
>
> I'd like to make a plea/request/suggestion that wherever possible new
> features be added incrementally to HTTP1.1, in a backwards compatible way,
> in preference to a "new protocol" approach. A "new protocol" is required
> only if it is not technically possible (or especially awkward) to add the
> feature in a backwards compatible way.
>
> The object should be to enable incremental implementation and deployment
> on a feature by feature basis, rather than all-or-nothing. HTTP1.1 has been
> rather successful and there is an immense quantity of code and systems -
> including intermediaries of various sorts - that work well with HTTP1.1. It
> should be possible to add features to that code and those systems without
> forklifting substantial amounts of it. It is better if intermediaries that
> do not support the new features cause fallback to HTTP1.1 vs the
> alternative of just blocking the new protocol. In particular, it should not
> cost a round trip to fall back to HTTP1.1. It is often lamented that the
> Internet is now the "port-80 network", but at least it is that.
>

Don't forget port 443. And I agree, it should not cost a round trip to
fall back to HTTP/1.1.


>
> Many of the features contemplated as solutions to the problems of HTTP1.1
> can be implemented this way: avoiding head-of-line blocking of responses
> just requires a request id that is dropped by intermediaries that don't
> support it and echoed on responses. Request and response header compression
> can be negotiated - again with a request flag that is just dropped by
> unsupporting intermediaries. Pipelined requests could be canceled with a
> new method. These things are responsible for most of the speed improvements
> of SPDY, I believe.
>

It's unclear to me how this would work. Are you suggesting waiting for a
full HTTP request/response pair to find out whether the id gets echoed,
before trying to multiplex requests? Or would you rely on HTTP pipelining as
a fallback if the ids don't get echoed? The former incurs a large latency
cost. The latter depends very much on how deployable you consider pipelining
to be across the overall internet. I am skeptical that it is sufficiently
deployable, and we on the Chromium team are gathering numbers to answer this
question (http://crbug.com/110794). Also, pipelining is clearly inferior to
multiplexing.
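
For concreteness, here is a rough sketch (in Python) of the probe-then-fallback
logic I understand you to be proposing. The header name "Request-Id" is
hypothetical, and this is only a model of the decision, not a wire
implementation:

```python
# Sketch of the request-id fallback probe (hypothetical "Request-Id"
# header; not part of any deployed protocol). An intermediary that does
# not understand the header is expected to drop it, so its absence on
# the first response signals that we must fall back to HTTP/1.1.

def choose_mode(first_response_headers, sent_id):
    """Decide transport mode after the first request/response pair."""
    echoed = first_response_headers.get("Request-Id")
    if echoed == sent_id:
        return "multiplex"       # ids survive: safe to interleave requests
    return "http/1.1-serial"     # ids stripped: fall back, one at a time

# The latency concern: the probe itself costs a full round trip before
# any further requests can be issued in multiplexed mode.
print(choose_mode({"Request-Id": "1"}, "1"))   # multiplex
print(choose_mode({}, "1"))                    # http/1.1-serial
```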


> Interleaving within responses does require some kind of framing layer, but
> I'd like to learn why anything more complex than interleaving the existing
> chunked-transfer chunks is needed (this is also especially easy to undo).
>

Sorry, I'm not sure I understand what you mean by interleaving the existing
chunked-transfer chunks. Are the chunks interleaved across different
responses? That would itself require framing, right?
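
To illustrate why I think the id already amounts to framing: here is a sketch
that demultiplexes interleaved chunks using a hypothetical `id` chunk
extension. Chunk extensions are legal HTTP/1.1 syntax, but this use of them is
purely illustrative; the point is that once each chunk carries an id, the id
*is* a framing layer, however minimal:

```python
import re

def demux_chunked(stream: bytes) -> dict:
    """Split an interleaved chunked stream into per-id response bodies.

    Each chunk header looks like "<hex-size>;id=<n>\r\n" -- the ";id=<n>"
    chunk extension is hypothetical, used here to mark which response a
    chunk belongs to. The terminating "0\r\n\r\n" chunk is omitted for
    brevity.
    """
    bodies, pos = {}, 0
    while pos < len(stream):
        header_end = stream.index(b"\r\n", pos)
        header = stream[pos:header_end].decode()
        m = re.fullmatch(r"([0-9a-fA-F]+);id=(\w+)", header)
        size, rid = int(m.group(1), 16), m.group(2)
        data_start = header_end + 2
        bodies[rid] = bodies.get(rid, b"") + stream[data_start:data_start + size]
        pos = data_start + size + 2   # skip the chunk's trailing CRLF
    return bodies

# Chunks from two responses ("a" and "b") interleaved on one connection:
wire = (b"7;id=a\r\nHello, \r\n"
        b"6;id=b\r\n<html>\r\n"
        b"5;id=a\r\nworld\r\n"
        b"7;id=b\r\n</html>\r\n")
print(demux_chunked(wire))   # {'a': b'Hello, world', 'b': b'<html></html>'}
```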


>
> Putting my question another way, what is the desired new feature that
> really *requires* that we break backwards compatibility with the extremely
> successful HTTP1.1 ?
>

Multiplexing, header compression, prioritization.
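
Multiplexing with prioritization is what pushes toward a binary framing layer
that no HTTP/1.1 parser can read. A deliberately simplified, SPDY-like sketch
(the field layout here is my own invention for illustration, not the actual
SPDY or HTTP/2 wire format):

```python
import struct

# Illustrative frame layout (an assumption, not a real wire format):
# 4-byte stream id, 1-byte priority, 3-byte payload length, then payload.
# Every frame names its stream, so responses can interleave freely and
# the sender can schedule frames by priority.

def pack_frame(stream_id: int, priority: int, payload: bytes) -> bytes:
    header = struct.pack("!IB", stream_id, priority)
    return header + len(payload).to_bytes(3, "big") + payload

def unpack_frame(frame: bytes):
    stream_id, priority = struct.unpack("!IB", frame[:5])
    length = int.from_bytes(frame[5:8], "big")
    return stream_id, priority, frame[8:8 + length]

frame = pack_frame(3, 0, b"GET / HTTP/1.1")
print(unpack_frame(frame))   # (3, 0, b'GET / HTTP/1.1')
```

An HTTP/1.1 intermediary sees the first bytes of such a frame as garbage, not
a request line, which is exactly the backwards-compatibility break in question.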


>
> …Mark
>
>
>
>

Received on Friday, 30 March 2012 16:30:24 UTC