Re: HTTP 1.1 --> 2.0 Upgrade

On 21.08.2012 10:43, Roberto Peon wrote:
> On Mon, Aug 20, 2012 at 2:27 PM, Willy Tarreau <w@1wt.eu> wrote:
>
>> On Mon, Aug 20, 2012 at 01:55:46PM -0700, Roberto Peon wrote:
>> >
>>
>> > For this to work, it also mandates some annoying interaction between
>> > the HTTP/1.1 code and whatever follows it.
>>
>> Yes, that's what I find a bit annoying even though it's not terribly
>> complex as long as we support GET requests over URIs with the same
>> headers as the first one.
>>
>> > > > Having the data in DNS could short-circuit this, but shouldn't be
>> > > > a dependency.
>> > >
>> > > The DNS requires a round trip anyway and it is currently where time
>> > > is wasted in mobile environments.
>> > >
>> >
>> > This is an unavoidable round-trip, though. If you don't know where to
>> > go, one must figure that out.
>>
>> Not necessarily (eg: explicit proxies). Even though the proxy knows it
>> can talk 2.0 with the server, how does the client know it can talk 2.0
>> to the proxy?
>>
>
> Good question. I'd assume proxy.pac or equivalent configuration would
> have to spell it out.
> In any case, the cost of getting it wrong with most explicit proxy
> setups is fairly small because the proxy is generally close.
> We need to worry about avoiding additional RTTs when RTTs are big, not
> so much when they're small.

... for values of "small" which can reach more than 20 seconds:
http://www.potaroo.net/ispcol/2012-05/notquite.html

A failed connection attempt still costs a full TCP handshake. One
failed attempt for HTTP/2 followed by a second attempt over HTTP/1
would, worst-case, take around 45 seconds just to *start* the request
process.
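
To make that concrete: a minimal sketch of the naive try-then-downgrade
connect logic (Python; the host name and the 21-second timeout are
illustrative, not from any spec):

  import socket

  HOST = "www.example.com"   # hypothetical origin server

  def connect_with_fallback():
      # Try the new protocol first, then downgrade. Each failed attempt
      # costs a full TCP handshake, and on the access links the potaroo
      # article measures, the connect timeout alone can exceed 20s.
      # (In reality the 2.0 attempt may only fail after the handshake,
      # when the peer rejects the new framing -- the downgrade retry
      # still needs a fresh connection either way.)
      for proto in ("HTTP/2.0", "HTTP/1.1"):
          try:
              sock = socket.create_connection((HOST, 80), timeout=21)
              return proto, sock   # caller now speaks `proto` on `sock`
          except OSError:
              continue   # pay a second full handshake for the downgrade
      raise ConnectionError("could not reach %s" % HOST)

Two serialized attempts at that timeout is how you land in the
45-second ballpark before a single request byte is usefully sent.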

>
>>
>> Also, you can still have /etc/hosts pre-filled with some of your
>> operator's important services (login portal or whatever helps you
>> troubleshoot your connection settings). One might even imagine that
>> common DNS entries could regularly be pushed in multicast to end users
>> to save a few round trips. Or these entries might have been advertised
>> in HTTP response headers as you once explained to me :-)
>>
>
> Agreed, in the cases where you've replaced DNS, DNS is irrelevant. :)
>

+1.

>
>>
>> In fact there are plenty of ways to access services over HTTP and I
>> tend to find it awkward to increase dependency on DNS.
>>
>
> The idea as I understood it wasn't to add any dependency-- merely a way
> of advertising what protocol can be spoken next during what is (for all
> but those who ignore DNS) an unavoidable lookup. In these cases, the
> client would attempt the new protocol first, likely get an error
> quickly, and downgrade immediately. With a local proxy, this should not
> at all be costly as, well, it has a small RTT to the user.
>

Consider the pathway scenario, which is more common:
   A->B->C->D

DNS can tell A what D's capabilities are, but A is not connecting to D;
it is connecting to B. Depending on those capability advertisements
whenever they happen to be available would be a costly mistake.
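
A minimal sketch of the problem (Python; the proxy host and port are
hypothetical):

  import socket

  ORIGIN = "www.example.com"              # D: the host DNS describes
  NEXT_HOP = ("proxy.example.net", 3128)  # B: where A actually connects

  # Whatever capability record DNS serves for ORIGIN describes D.
  # The connection below is made to NEXT_HOP (B), and B's protocol
  # support is what governs this hop -- DNS told us nothing about B.
  sock = socket.create_connection(NEXT_HOP, timeout=10)
  sock.sendall(("GET http://%s/ HTTP/1.1\r\nHost: %s\r\n\r\n"
                % (ORIGIN, ORIGIN)).encode("ascii"))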


>
>>
>> > > Maybe this could still be done over the established connection using
>> > > an Upgrade? However, I'd hope we avoid announcing IP and ports in
>> > > exchanges, as it will cause the NAT mess we managed to avoid for
>> > > years with HTTP and which made other protocols disappear.
>> >
>> > I'm not understanding how this would cause NAT messes? I'd guess that
>> > most external sites would just continue to use :80 and :443.
>>
>> There should be a single reliable way of getting a host IP and a port.
>> Right now it works well. If you change your /etc/hosts to access your
>> development site, you reach it without any issue and everything works
>> well. If you want to install a reverse-proxy-cache in front of it, you
>> can have this cache use the hard-coded IP address and have it work fine.
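
(For concreteness, the override being described is a single hosts
entry; the address here is from the documentation range:)

  # /etc/hosts -- point the production name at the development box
  192.0.2.10    www.example.com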

Swap "host" with "recipient" and you have the requirements lined up 
correctly. Final end-server in the chain does not matter in the 
architecture like HTTP where proxy has  existence, the host being 
connected to is all that matters. Any overlap between the two is just a 
convenience.


>>
>> The moment you start exchanging IP addresses in the protocol, you're
>> getting into trouble. Sometimes the IP is valid, sometimes it doesn't
>> mean anything and must be rewritten by *someone* along the chain, but
>> most often this *someone* doesn't care or is not the same as the one
>> which translates the address. I guess everyone here has already been
>> involved in helping an admin rewrite Location headers that were
>> improperly set by a customer application that had no clue about the
>> valid host name or IP from the outside. This is typically the
>> situation we should try to avoid.
>>
>
> Everything which doesn't handle this already is significantly broken--
> sometimes links have IP addresses in 'em. This is expected and valid
> for the web...
>
>
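
The manual patching being described usually ends up looking something
like this (a minimal sketch; the addresses and helper name are
hypothetical):

  # Reverse-proxy fix-up for a backend that only knows its internal
  # address.
  INTERNAL = "http://10.0.0.5:8080"   # what the application emits
  PUBLIC = "http://www.example.com"   # what the outside must see

  def rewrite_location(headers):
      # Translate backend-internal redirect targets into externally
      # valid URLs before the response leaves the proxy.
      loc = headers.get("Location", "")
      if loc.startswith(INTERNAL):
          headers["Location"] = PUBLIC + loc[len(INTERNAL):]
      return headers
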
>>
>> In my opinion, once an IP address and a port are known, whatever the
>> means, whether they're entered by hand in the URL bar of a browser,
>> configured in /etc/hosts, specified in a config file as the address of
>> a web service or as the next hop for a reverse-proxy, we shouldn't
>> care how that address was learned; we should be able to use HTTP/2
>> over it if desired.
>>
>
> Wouldn't the proxy be doing the DNS resolutions here to figure this out?

*Sometimes*. There is no MUST NOT clause in the specs to prevent clients
doing their own resolution, for example for same-origin validation.

> The client would likely have to do UPGRADE or alternate-protocol or NPN
> to the proxy to figure out what it'd be using, but that'd be normal...

Exactly why, IMHO, alternatives to those are a waste of time unless they
can provide the same guarantees for safely handling transit across
middleware (both explicit proxies and interceptors).

SRV is a non-starter because end-host servers are not aware of the
details they would need to emit for intermediaries.

I think we should be taking Upgrade: and defining ways to optimize it.
Willy had some proposals for merging multiple requests and defining the
1.1->2.0 mapping in the headers of request #1. Figuring out how much of
that is useful and where it could be improved would be a good way to
make progress on the bootstrapping.
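
For reference, the baseline we would be optimizing is the plain Upgrade
dance below; the "HTTP/2.0" token is only illustrative, since no token
has been registered yet:

  GET / HTTP/1.1
  Host: www.example.com
  Connection: Upgrade
  Upgrade: HTTP/2.0

  HTTP/1.1 101 Switching Protocols
  Connection: Upgrade
  Upgrade: HTTP/2.0

  ... 2.0 framing continues on the same connection ...

Willy's merging proposals are about making request #1 itself useful
while that negotiation happens, instead of wasting it.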


>>
>> > Some internal sites wouldn't, and the network there is controlled
>> > well enough that this should cause no problem. In any case, sites
>> > are incented to be sure that users can connect to them, and will
>> > quickly decide to simply keep using :80 and :443. When one has an
>> > inspecting intermediary, it probably won't like to see the upgrade,
>> > but probably would be perfectly fine with a different connection
>> > (e.g. TLS on :443). There is simply less complexity there, imho.
>>
>> Possibly, but this doesn't require relying on a side-band registry to
>> know the address and ports. After all, you have enumerated the
>> well-known ports above. It could be mandated that if an http
>> connection to :80 fails, a browser should automatically retry on :443
>> for instance, without having to deal with registries such as DNS,
>> which are OK on the internet and never maintainable internally, nor
>> with IP addresses advertised in headers.
>>
>
> Agreed. I don't think anyone is yet suggesting *mandating* DNS schemes...


I think we should be doing a twofold definition:

* port 80 Upgrade: from 1.x to 2.0
    - long term plans to obsolete HTTP/1.x

* port 80 Upgrade: from HTTP to HTTPS/2
    - long term plans to obsolete port 443 usage and make the http://
      scheme TLS-enabled/secure whenever possible.
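
A rough sketch of the second flavour on the wire, borrowing the TLS
upgrade token that RFC 2817 already defines, next to the same
illustrative 2.0 token as above:

  GET / HTTP/1.1
  Host: www.example.com
  Connection: Upgrade
  Upgrade: TLS/1.0, HTTP/2.0

  HTTP/1.1 101 Switching Protocols
  Connection: Upgrade
  Upgrade: TLS/1.0, HTTP/2.0

  ... TLS handshake, then 2.0 inside it, all on the port 80 connection ...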


Amos
