
Re: HTTP 1.1 --> 2.0 Upgrade

From: Willy Tarreau <w@1wt.eu>
Date: Tue, 21 Aug 2012 07:04:23 +0200
To: Roberto Peon <grmocg@gmail.com>
Cc: Phillip Hallam-Baker <hallam@gmail.com>, Julian Reschke <julian.reschke@gmx.de>, Yoav Nir <ynir@checkpoint.com>, "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
Message-ID: <20120821050423.GA16598@1wt.eu>
Hi Roberto,

On Mon, Aug 20, 2012 at 03:43:16PM -0700, Roberto Peon wrote:
> > > > The DNS requires a round trip anyway and it is currently where time
> > > > is wasted in mobile environments.
> > > >
> > >
> > > This is an unavoidable round-trip, though. If you don't know where to go,
> > > one must figure that out.
> >
> > Not necessarily (e.g. explicit proxies). Even though the proxy knows it
> > can talk 2.0 with the server, how does the client know it can talk 2.0
> > to the proxy?
> >
> 
> Good question. I'd assume proxy.pac or equivalent configuration would have
> to spell it out.
> In any case, the cost of getting it wrong with most explicit proxy setups
> is fairly small because the proxy is generally close.
> We need to worry about avoiding additional RTTs when RTTs are big, not so
> much when they're small.

Not in mobile environments. The biggest waste of time between your smartphone
and the net is the series of DNS requests for the various subdomains required
to download a page. That's why operators are rewriting/inlining contents with
IP addresses as prefixes to fetch additional objects.

When mobile operators finally decide to advertise explicit proxies, the proxy
will be far from the mobile (say 300ms) but close to the net. There you want
your smartphone to always use v2 to talk to these proxies and let those proxies
act as they want on the net.

> > In fact there are plenty of ways to access services over HTTP and I tend
> > to find it awkward to increase dependency on DNS.
> >
> 
> The idea as I understood it wasn't to add any dependency -- merely a way of
> advertising what protocol can be spoken next during what is (for all but
> those who ignore DNS) an unavoidable round-trip. In these cases, the client
> would attempt the new protocol first, likely get an error quickly, and
> downgrade immediately. With a local proxy, this should not be costly at
> all as, well, it has a small RTT to the user.

If you remember, this is exactly like all the conversations we had about
WS 2 years ago to save round trips in case of failures. The beauty of the
HTTP Upgrade mechanism is that it can fail softly with an automatic fallback
to HTTP/1. So as long as port 80 is in use, it is possible to always advertise
the "Upgrade: HTTP/2.0" header and hope for upgrades, regardless of any other
information. Similarly for https, you can announce HTTP/2.0 in NPN and hope
for the next hop to accept it. For alternate-protocol, I don't know (I need
to read this specific part of your spec first :-)).
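The soft-fallback property described above can be sketched in a few lines. This is a hypothetical client-side sketch, not any spec's reference behaviour; the "HTTP/2.0" token simply follows the wording used in this thread:

```python
def upgrade_request(host, path="/"):
    # A plain HTTP/1.1 request that optimistically advertises the upgrade.
    # If the server ignores the Upgrade header, the exchange proceeds as
    # ordinary HTTP/1.1 -- nothing is lost by asking.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: Upgrade\r\n"
        "Upgrade: HTTP/2.0\r\n"
        "\r\n"
    ).encode("ascii")

def negotiated_protocol(response_head: bytes) -> str:
    # 101 Switching Protocols means the server accepted the upgrade;
    # any other status code is the soft fallback to HTTP/1.1.
    status_line = response_head.split(b"\r\n", 1)[0]
    code = status_line.split(b" ")[1]
    return "HTTP/2.0" if code == b"101" else "HTTP/1.1"
```

The point is that the fallback costs no extra round trip: the same request that offers the upgrade is already a complete HTTP/1.1 request.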

There should be no valid reason not to make tentative upgrades for any site,
except if it's blacklisted. But then, only the user-agent will be able to
blacklist it; we can't expect the site's owner to advertise itself as
blacklisted.
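Such a user-agent-side blacklist could be as simple as remembering which hosts failed a tentative upgrade. A minimal sketch (class and method names are illustrative, not from any spec):

```python
class UpgradeBlacklist:
    """Client-side memory of hosts where a tentative upgrade failed.

    Only the user-agent can learn this, since no site will advertise
    itself as blacklisted.
    """

    def __init__(self):
        self._failed = set()

    def should_attempt_upgrade(self, host):
        # Optimistically upgrade everywhere, except hosts that already
        # failed once.
        return host not in self._failed

    def record_failure(self, host):
        self._failed.add(host)
```

A real implementation would presumably expire entries, since a host may gain 2.0 support later.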

> > At the moment you start exchanging IP addresses in the protocol, you're
> > getting into trouble. Sometimes the IP is valid, sometimes it doesn't
> > mean anything and must be rewritten by *someone* along the chain, but
> > most often, this *someone* doesn't care or is not the same as the one
> > which translates the address. I guess everyone here has already been
> > involved in helping an admin rewrite Location headers that were
> > improperly set by a customer application that had no clue about the
> > valid host name or IP from the outside. This is typically the situation
> > we should try to avoid.
> >
> 
> Everything which doesn't handle this already is significantly broken--
> sometimes links have IP addresses in 'em. This is expected and valid for
> the web...

I'm not speaking about what is delivered in contents, because contents are
not interesting for intermediaries, but about the protocol. If you expect
a proxy or reverse-proxy to make certain use of a header value and this
header transports address or port information, I'd bet there will be
a large number of deployment issues (possibly even security issues).

> > In my opinion, once an IP address and a port are known, whatever the
> > means, whether they're entered by hand in the URL bar of a browser,
> > configured in /etc/hosts, or specified in a config file as the address
> > of a web service or as the next hop for a reverse-proxy, we shouldn't
> > care how that address was learned; we should be able to use HTTP/2
> > over it if desired.
> >
> 
> Wouldn't the proxy be doing the DNS resolutions here to figure this out?

Typically in the examples above, there would be no DNS resolution. I can't
count the number of times I've seen IP:port in app servers' config files, or
even in an haproxy config file. I'm also used to seeing Apache as a
reverse-proxy where /etc/hosts maps the server name to the next hop. There
is no DNS involved there either.

Another example: think about the latest set-top box you unbox and plug into
your network. The manual says "connect to http://192.168.1.100/ with your
browser". If a new auth scheme is only available in 2.0, you'd rather have
your browser automatically try 2.0 there without any DNS request.
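A browser can tell whether DNS is even in play by checking for a literal address in the URL. A small sketch of that check (the function name is mine, not from any browser codebase):

```python
import ipaddress
from urllib.parse import urlsplit

def needs_dns(url):
    # A literal address (e.g. the set-top box at http://192.168.1.100/)
    # can be dialed directly; only a real hostname requires a lookup.
    host = urlsplit(url).hostname
    try:
        ipaddress.ip_address(host)
        return False
    except ValueError:
        return True
```

With a literal address there is no registry to consult, so the only way to learn whether 2.0 is spoken there is to try it in-band.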

> The client would likely have to do UPGRADE or alternate-protocol or NPN to
> the proxy to figure out what it'd be using, but that'd be normal...
> 
> 
> >
> > > Some internal sites wouldn't, and the network there is controlled well
> > > enough that this should cause no problem. In any case, sites are
> > > incented to be sure that users can connect to them, and will quickly
> > > decide to simply keep using :80 and :443. When one has an inspecting
> > > intermediary, it probably won't like to see the upgrade, but probably
> > > would be perfectly fine with a different connection (e.g. TLS on
> > > :443). There is simply less complexity there, imho.
> >
> > Possibly, but this doesn't require relying on a side-band registry to
> > know the address and ports. After all, you have enumerated the
> > well-known ports above. It could be mandated that if an http connection
> > to :80 fails, a browser should automatically retry on :443 for
> > instance, without having to deal with registries such as DNS, which are
> > OK on the internet but never maintainable internally, nor with IP
> > addresses advertised in headers.
> >
> 
> Agreed. I don't think anyone is yet suggesting *mandating* DNS schemes...

No. However, if we need to use it to advertise the protocol version, it means
we have failed to design a proper and efficient upgrade mechanism. If the
mechanism is valid, there is no need for help from side-band protocols. That
is my point.

Regards,
Willy
Received on Tuesday, 21 August 2012 05:04:58 GMT
