Re: HTTP 1.1 --> 2.0 Upgrade

On Mon, Aug 20, 2012 at 01:55:46PM -0700, Roberto Peon wrote:
> > As we suggested in our network-friendly draft, it's possible to send
> > multiple GET requests at once during the handshake by passing the
> > list of URIs in a dedicated header. It basically allows us not to
> > waste the round trip.
> >
> 
> I agree that this can work, assuming all the URLs fit in the headers (many
> servers/proxies have a maximum limit on header size to reduce attack
> surface) and no other headers are required.

Agreed, but Apache's 8kB limit per header field tends to match common
usage nowadays. And if we have to push 8kB worth of request URIs, we'll
often hit the initcwnd limit anyway.
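
To give an idea, such a first request could look like the following (the
"Extra-GET" header name and its syntax are purely illustrative here, not
necessarily what the draft finally specifies):

    # purely illustrative syntax, not the draft's actual one
    GET /index.html HTTP/1.1
    Host: www.example.com
    Connection: Upgrade
    Upgrade: HTTP/2.0
    Extra-GET: /css/site.css, /js/app.js, /img/logo.png

All the extra URIs would implicitly be fetched with the same headers as
the first request, which is what keeps the mechanism simple.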

> Having seen some incredible URLs and headers (e.g. cookies), I'm wary of
> assuming this is reasonable.

Agreed, but quite frankly, what we're trying to achieve is a way to make
the web fast for everyone, provided there is no deliberate server-side
sabotage :-)

> For this to work, it also mandates some annoying interaction between the
> HTTP/1.1 code and whatever follows it.

Yes, that's what I find a bit annoying, even though it's not terribly
complex as long as we only support GET requests for URIs carrying the
same headers as the first one.
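
For reference, the 1.1 side of the dance is itself simple enough, taking
"HTTP/2.0" as the upgrade token just for the sake of the example; the
annoying part is everything the 1.1 parser must then hand over to the
2.0 code:

    GET / HTTP/1.1
    Host: www.example.com
    Connection: Upgrade
    Upgrade: HTTP/2.0

    HTTP/1.1 101 Switching Protocols
    Connection: Upgrade
    Upgrade: HTTP/2.0

    (the connection then continues using the 2.0 framing)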

> > > Having the data in DNS could short-circuit this, but shouldn't be a
> > > dependency.
> >
> > The DNS requires a round trip anyway, and it's currently where time
> > is wasted in mobile environments.
> >
> 
> This is an unavoidable round-trip, though. If you don't know where to go,
> one must figure that out.

Not necessarily (e.g. explicit proxies). Even though the proxy knows it
can talk 2.0 with the server, how does the client know it can talk 2.0
to the proxy?

Also, you can still have /etc/hosts pre-filled with some of your operator's
important services (login portal or whatever helps you troubleshoot your
connection settings). One might even imagine that common DNS entries could
regularly be pushed in multicast to end users to save a few round trips. Or
these entries might have been advertised in HTTP response headers as you once
explained to me :-)
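
Something as simple as this already short-circuits the DNS round trip
for the hosts which matter most (names and addresses obviously invented
here):

    # /etc/hosts
    10.0.0.1    portal.operator.example
    10.0.0.2    help.operator.example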

In fact there are plenty of ways to access services over HTTP, and I
tend to find it awkward to increase our dependency on DNS.

> > Maybe this could still be done over the established connection using an
> > Upgrade ? However, I'd hope we avoid announcing IP and ports in exchanges,
> > as it will cause the NAT mess we managed to avoid for years with HTTP and
> > which made other protocols disappear.
> 
> I'm not understanding how this would cause NAT messes? I'd guess that most
> external sites would just continue to use :80 and :443.

There should be a single reliable way of getting a host's IP and port.
Right now this works well: if you change your /etc/hosts to access your
development site, you reach it without any issue. If you want to install
a reverse-proxy-cache in front of it, you can have this cache use the
hard-coded IP address and it works just as well.
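
For instance, a reverse-proxy-cache setup as trivial as this one just
works, because nothing cares how the address was learned (haproxy
syntax, addresses obviously invented here):

    listen dev-site
        bind :80
        mode http
        # hard-coded address of the development site
        server app1 192.168.0.10:8080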

The moment you start exchanging IP addresses in the protocol, you get
into trouble. Sometimes the IP is valid, sometimes it doesn't mean
anything and must be rewritten by *someone* along the chain, but most
often this *someone* doesn't care or is not the same party as the one
translating the address. I guess everyone here has already helped some
admin rewrite Location headers that were improperly set by a customer
application which had no clue about the valid host name or IP as seen
from the outside. This is typically the situation we should try to
avoid.
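
A typical occurrence of the problem looks like this (values invented
here):

    # what the application emits, knowing only its internal address:
    HTTP/1.1 302 Found
    Location: http://10.1.2.3:8080/login

    # what the reverse-proxy has to rewrite it to before it leaves:
    HTTP/1.1 302 Found
    Location: https://www.example.com/login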

In my opinion, once an IP address and a port are known, whatever the
means (entered by hand in a browser's URL bar, configured in /etc/hosts,
specified in a config file as the address of a web service or as the
next hop of a reverse-proxy), we shouldn't care how that address was
learned: we should be able to use HTTP/2.0 over it if desired.

> Some internal sites
> wouldn't, and the network there is controlled well enough that this should
> cause no problem. In any case, sites are incented to be sure that users can
> connect to them, and will quickly decide to simply keep using :80 and :443.
> When one has an inspecting intermediary, it probably won't like to see the
> upgrade, but probably would be perfectly fine with a different connection
> (e.g. TLS on :443). There is simply less complexity there, imho.

Possibly, but this doesn't require relying on a side-band registry to
know the addresses and ports. After all, you have enumerated the
well-known ports above. It could be mandated, for instance, that if an
http connection to :80 fails, a browser should automatically retry on
:443, without having to deal with registries such as DNS, which are OK
on the internet but never maintainable internally, nor with IP addresses
advertised in headers.
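
Roughly, and purely as a sketch of the idea:

    connect to host:80 -> works: talk 1.1 and attempt the upgrade
    host:80 fails      -> retry the same request on host:443 over TLS
    host:443 fails     -> report the failure to the user, as today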

Regards,
Willy

Received on Monday, 20 August 2012 21:27:48 UTC