
Re: Re[2]: Moving forward with HTTP/2.0: proposed charter

From: James M Snell <jasnell@gmail.com>
Date: Mon, 6 Aug 2012 10:14:17 -0700
Message-ID: <CABP7RbeXJKJHspO6iHC03Wr=XwFhXo-ZOcY0Dn7hLjE5Uk98NA@mail.gmail.com>
To: Roberto Peon <grmocg@gmail.com>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, HTTP Working Group <ietf-http-wg@w3.org>, "Adrien W. de Croy" <adrien@qbik.com>, Mark Nottingham <mnot@mnot.net>
On Mon, Aug 6, 2012 at 9:05 AM, Roberto Peon <grmocg@gmail.com> wrote:

> That (http2 client  -> http2 proxy -> http1 server) could be beneficial
> for mobile devices in terms of both latency and battery life, so it isn't
> something we should dismiss.
Indeed. There certainly is a very reasonable case for protocol
translation... what I'm not convinced of is just how automatic and
transparent that translation needs to be, given the various use cases.

For instance, I would wager that the overwhelming majority of "normal" web
traffic -- browsers accessing your typical web site -- is generally
predictable and static. There is a known, limited set of headers, a known
set of authentication schemes, and a known set of patterns that can easily
be translated to and from, at scale, at the proxy level. These are things
we can optimize the hell out of.  For the more exotic kinds of traffic --
things that are specific to individual services or APIs -- the client-side
developer will typically be in lock step with the server-side development.
There will be older, downlevel client applications that are hard-coded to
use HTTP/1.1 and won't ever be changed (so even if those are forced
through an explicit proxy, they'll still be talking 1.1 end to end). If
the server side of an HTTP API still requires 1.1, there's going to be
very little motivation for the client-side developer to move to 2.0
*until the server side also changes*. In fact, I would go so far as to
say that client application developers simply will NOT change their side
until the server side changes. For such cases, protocol translation is
likely to be unnecessary.
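To make the "predictable traffic" point concrete, here is a minimal sketch of the kind of mechanical 1.1-to-2.0 header translation a proxy could do at scale. The pseudo-header names (":method", ":path", ":host", ":scheme") follow SPDY's convention and are illustrative assumptions on my part, not something settled in any draft:

```python
# Hypothetical sketch: mechanically map a predictable HTTP/1.1 request
# onto a 2.0-style header block. Pseudo-header names are SPDY-style
# assumptions, not taken from a finished spec.

def to_http2_style(method, path, host, headers, scheme="http"):
    """Translate an HTTP/1.1 request line + headers into a 2.0-style block."""
    block = {
        ":method": method,
        ":path": path,
        ":host": host,
        ":scheme": scheme,
    }
    for name, value in headers.items():
        # 2.0-style header blocks use lowercase header names.
        block[name.lower()] = value
    return block
```

For the common browser case, every field maps one-to-one like this; it's exactly the sort of thing a proxy can do cheaply in both directions.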

There are ways of making this more explicit, though.  If I, as a client
using HTTP/2.0, send a message to the proxy that includes a :version=1.1
header in the request, what I'd basically be saying is that this HTTP
request conforms to the semantics and basic rules of HTTP/1.1, meaning it
should be directly and easily translatable into an HTTP/1.1 message. If,
however, I include :version=2.0 in the request, I'm explicitly stating
that I'm using 2.0 semantics that may or may not be directly translatable
into an HTTP/1.1 message... in which case, unless I know I'm definitely
talking to an HTTP/2.0 server, I fully recognize and accept that things
might break. By default, web browsers can be configured to operate in
this "compatibility mode" for a while, sending :version=1.1 by default
for all general requests, while allowing applications the option of
sending :version=2.0 explicitly.
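A rough sketch of the proxy-side decision this implies. The ":version" header and its values come from the proposal above; the routing outcomes, the compatibility-mode default, and the function itself are my own assumptions about how a proxy might act on it:

```python
# Hypothetical sketch of how a 2.0-capable proxy might route a request
# based on the proposed ":version" header. Outcome names and the
# default behavior are illustrative assumptions.

def route(headers, origin_speaks_2_0):
    """Decide how the proxy handles a client request."""
    declared = headers.get(":version", "1.1")  # compatibility-mode default
    if declared == "1.1":
        # Client promises 1.1-translatable semantics: downgrading is safe.
        return "forward-2.0" if origin_speaks_2_0 else "translate-to-1.1"
    # Client relies on 2.0-only semantics; translation might break things,
    # and the client has accepted that risk.
    return "forward-2.0" if origin_speaks_2_0 else "refuse"
```

The point is that the client, not the proxy, decides whether lossy translation is acceptable.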

That would seem to be a perfectly reasonable approach.

- James

> -=R
> On Aug 6, 2012 12:14 AM, "Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:
>> In message <emc0d9c202-4de9-42a4-afcd-3e0714b94f4b@reboist>, "Adrien de
>> Croy" writes:
>> >>From: "Poul-Henning Kamp" <phk@phk.freebsd.dk>
>> >>
>> >>Where does 2.0<-->1.1 conversion _realistically_ come into play ?
>> >You mean where are we most likely to see 2.0 down-graded to 1.1?
>> >
>> >I think this will be extremely common for a very long time.
>> >
>> >2.0 client talks to 2.0 local proxy talking to 1.1 internet.
>> That's not a terribly interesting use-case is it ?
>> The RTT to a local proxy is not prohibitive, so running HTTP/2.0
>> on that path will gain you very little performance, and insisting
>> on running both 2.0 and 1.1 would cost very little, since the
>> link is very likely a LAN.
>> So yeah, you have a shiny new protocol, but it doesn't do anything
>> for you...
>> (The GET https://... thing can be done as extension to 1.1 also,
>> so that is not a deciding factor)
>> The RTT performance gain of 2.0 only happens once 2.0 deploys on
>> the far (ie: server) side of things.
>> So I think the interesting use-case is the opposite of what
>> you suggest: 1.1 to proxy, 2.0 to server.
>> About the only other place I see a credible case for conversion
>> is a 1.1+2.0 load-balancer and a 1.1- or 2.0- only server.
>> The 2.0->1.1 case I can conceivably see, in a somewhat strange set
>> of circumstances, but the 1.1->2.0 case seems entirely speculative
>> for the next many years.
>> Given how much simpler it will be for everybody, I still see a
>> "same version, end to end" model as very attractive.
>> Of course 1.1 and 2.0 must still interoperate, in particular we
>> need to be able to upgrade (and later downgrade ?) on port 80.
>> But I really have a hard time seeing a business case for
>> specifying conversion of individual requests and responses between
>> 1.1 and 2.0.
>> --
>> Poul-Henning Kamp       | UNIX since Zilog Zeus 3.20
>> phk@FreeBSD.ORG         | TCP/IP since RFC 956
>> FreeBSD committer       | BSD since 4.3-tahoe
>> Never attribute to malice what can adequately be explained by
>> incompetence.
Received on Monday, 6 August 2012 17:15:06 UTC
