
Re: #385: HTTP2 Upgrade / Negotiation

From: Mark Nottingham <mnot@mnot.net>
Date: Wed, 24 Oct 2012 18:25:57 +1100
Cc: Amos Jeffries <squid3@treenet.co.nz>, ietf-http-wg@w3.org
Message-Id: <F1D1DFCD-0851-49F6-90E6-0EB6C53D6B8A@mnot.net>
To: Willy Tarreau <w@1wt.eu>

On 24/10/2012, at 6:10 PM, Willy Tarreau <w@1wt.eu> wrote:

>> A fair number of folks have said they want this, and no one is saying that it's mandatory.
> 
> Not being mandatory still means that browsers will have to decide what to
> do based on this record at one point, with lists of exceptions.

Yes -- but I'm hearing (so far) from the browser folks that they're willing to do this.


>> AFAICT most of your problems can be boiled down to:
>> 
>> 1) Different servers may back up one DNS record - in which case, they don't have to use SRV
>> 
>> 2) Clients assuming HTTP/2 may fail - in which case, we need to design things so they can recognise this and fall back to 1.1 seamlessly.
> 
> While this seems to be a prerequisite to me, it does not sound like the
> solution to everything: failing fast is what we all hope for, but we
> know well that there are environments where the failure will cause delays
> etc. (you're well aware of the issues faced by pipelining, for example).
> So similarly we'll have some boxes drop packets or simply not forward data
> when they discover junk after the request headers.
> 
> DNS records give only a global view of how the server is presented to the
> net, but they do not take into account the whole path to the clients.
> 
> I'd suggest that the zero-roundtrip upgrade be made only over TLS, using the
> NPN extension that already works well with SPDY. The advantage is that it
> says what protocol can be spoken on top of a connection and does not rely
> on external components unaware of the path.

So, I'd put it this way:

The DNS-based upgrade is optimistic; it will be fast as long as there isn't an unknown middlebox present; if there is, it'll be slower to start. The question is how often that will happen, and how detectable / variable it is.
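The optimistic path can be sketched as decision logic. This is a minimal illustration, not anything from the thread: `lookup_srv` is a hypothetical stub standing in for a real SRV query (the `_http2._tcp` name is likewise illustrative), and `try_http2` stands in for the actual connection attempt that a middlebox might break.

```python
# Sketch of the optimistic, DNS-driven upgrade path: speak HTTP/2 when
# the record advertises it, and drop back to HTTP/1.1 the moment the
# attempt fails (e.g. a middlebox mangling the new framing).

def lookup_srv(name):
    """Hypothetical resolver stub: returns an advertised protocol or None."""
    fake_zone = {"_http2._tcp.example.com": "HTTP/2"}
    return fake_zone.get(name)

def choose_protocol(host, try_http2):
    """Pick a wire protocol optimistically, falling back on failure."""
    advertised = lookup_srv("_http2._tcp." + host)
    if advertised == "HTTP/2":
        if try_http2(host):   # fast path: no hostile middlebox in the way
            return "HTTP/2"
        # slow path: the optimistic attempt failed mid-flight
    return "HTTP/1.1"         # safe default

# A client behind a broken middlebox still converges on 1.1:
print(choose_protocol("example.com", lambda h: False))  # -> HTTP/1.1
print(choose_protocol("example.com", lambda h: True))   # -> HTTP/2
```

The open question in the paragraph above maps directly onto the slow path: how often `try_http2` fails, and how cheaply that failure can be detected.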

On the other hand, the Upgrade-based negotiation is pessimistic; it assumes that something will go wrong until we prove that both ends speak the same language. It'll be fast, but with some limitations at the start (as discussed).
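The pessimistic handshake looks roughly like this sketch: the first request is plain HTTP/1.1 carrying an offer, and the server may accept with a 101 (Switching Protocols). The `HTTP/2.0` token here is purely illustrative — the final token spelling was not settled at the time of this thread.

```python
# Sketch of the Upgrade-based negotiation: offer the new protocol in
# ordinary 1.1 headers, switch only if the server explicitly agrees.

def build_upgrade_request(host, path="/"):
    """Build a 1.1 request offering an upgrade (token is illustrative)."""
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: Upgrade\r\n"
        "Upgrade: HTTP/2.0\r\n"
        "\r\n"
    ).format(path, host)

def negotiated(status_line):
    """True iff the server agreed to switch protocols."""
    return status_line.startswith("HTTP/1.1 101")

req = build_upgrade_request("example.com")
print(negotiated("HTTP/1.1 101 Switching Protocols"))  # -> True
print(negotiated("HTTP/1.1 200 OK"))                   # -> False: stay on 1.1
```

Nothing breaks if the server (or anything in between) ignores the offer — the request is already valid 1.1 — which is exactly why this path is the pessimistic, safe one.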

Ideally, we'd have exactly one way to upgrade for HTTP URIs. However, that *may* not be possible -- if enough people's needs aren't met by the Upgrade path, other mechanisms will be developed, and if that's going to happen, I'd like them to be interoperable, and ideally part of the spec itself.

I do hear what you're saying, though. We'll need more data to make hard decisions; at this point I'm just trying to figure out what high-level approaches we're going to pursue in earnest. 


> And for clear text we'd keep the good old HTTP Upgrade, which should be
> optimal for the most common usages (GET/HEAD first), and require that
> other methods be handled as 1.1 during the first round trip. I
> don't see this as a terrible tradeoff.


At this (early) point it looks like we're going that way, yes. The question is whether we have an optimistic path as well.

Cheers,

--
Mark Nottingham   http://www.mnot.net/
Received on Wednesday, 24 October 2012 07:26:14 GMT
