
Re: Optimizations vs Functionality vs Architecture

From: Amos Jeffries <squid3@treenet.co.nz>
Date: Wed, 22 Aug 2012 14:43:31 +1200
To: <ietf-http-wg@w3.org>
Message-ID: <17ce4099aa96be45437aeac0d168af11@treenet.co.nz>
On 22.08.2012 08:47, Yoav Nir wrote:
> On Aug 21, 2012, at 10:14 PM, Poul-Henning Kamp wrote:
>>> We should if it's possible. Suppose HTTP/2.0 looks much like the 
>>> SPDY draft.
>>> How can you ever get a current HTTP/1 server to reply to this?
>>
>> That's why I've been saying from the start that SPDY was an 
>> interesting
>> prototype, and now we should throw it away, and start from scratch, 
>> being
>> better informed by what SPDY taught us.
>
> A requirement for downgrade creates too many restrictions, even if we
> throw SPDY away. The beginning of a 2.0 connection would have to look
> enough like 1.x so as to fool existing servers.

Quite the opposite IMO. We want the 2.0 services to be able to detect 
the difference and auto-downgrade to 1.x, but we want the 1.x services 
to discard 2.0 attempts cleanly without corrupting the connection 
framing.
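
A rough sketch of that behaviour in code, assuming the 2.0 offer rides in a 1.x Upgrade: header (the "h2c" token and header name here are purely illustrative, not something this thread has agreed on):

```python
def choose_protocol(request_headers, server_speaks_2_0):
    """Pick the protocol this hop replies with. A 1.x-only service
    never switches, so it can never corrupt the 1.x framing; a
    2.0-capable service auto-downgrades when nothing 2.0 is offered."""
    offered = [t.strip().lower() for t in
               request_headers.get("upgrade", "").split(",")]
    if server_speaks_2_0 and "h2c" in offered:
        return "2.0"   # both ends agree: switch protocols
    return "1.1"       # safe default: plain 1.x reply
```

Either way the reply is a well-formed message in a protocol the client understands, which is the clean-discard property above.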

* An Upgrade: header within a 1.x request allows this seamlessly. At 
cost of some bandwidth on the first request.

* A probe OPTIONS/SETTINGS query ahead of the main traffic pipeline 
allows this seamlessly. At cost of an RTT plus some bandwidth. (ie worse 
than Upgrade:)

* A Happy-Eyeballs double TCP connection allows this on the first 
request. At cost of doubling the TCP network load. In the presence of 
IPv4/IPv6 happy eyeballs that could mean up to 4x TCP connections. Can 
anyone show an overall benefit amongst all that?

* DNS SRV records easily provide end-to-end detection, but HTTP 
requires next-hop detection. At best that is a useful optimization for 
the last mile of connections IMO. Not useful at all for the main 
client->server/proxy architecture of HTTP.
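The first option above can be sketched from the client side roughly as follows. The "h2c" token and exact header layout are illustrative assumptions only:

```python
def build_upgrade_request(host, path="/"):
    """First request: plain 1.1 on the wire, but offering to switch
    to 2.0 via the Upgrade: mechanism."""
    return (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: Upgrade\r\n"
        "Upgrade: h2c\r\n"
        "\r\n"
    ).format(path, host).encode("ascii")

def server_accepted_upgrade(status_line):
    """101 Switching Protocols means start speaking 2.0; any other
    status means the hop is 1.x-only, so stay on 1.1."""
    return status_line.startswith(b"HTTP/1.1 101")
```

A 1.x-only server simply ignores the unknown Upgrade token and answers the request normally, so the only cost is the extra header bytes on the first request.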

>
> I think we should live with upgrade only, as long as clients can
> cache the knowledge that a certain server supports 2.0, so that they
> can skip the upgrade the next time. The extra roundtrip on a first
> encounter is not that bad.

I agree with this.

* it exists and is already well defined.
* it has existing implementations we can leverage.
* it operates on the next hop, independent of any connection complexity 
and most legacy software.
* ANY hop can attempt to optimize/Upgrade.
* it has a safe failure mode resulting in 1.1 being used.
* worst-case cannot be worse than standard 1.1 performance (since the 
failure mode is to use 1.1).
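The caching Yoav suggests, so a client skips the upgrade round trip on later connections, could be as small as this. The class and method names are invented here for illustration:

```python
import time

class UpgradeCache:
    """Remembers which origins have accepted the 2.0 upgrade, so the
    next connection can start in 2.0 directly."""

    def __init__(self, ttl_seconds=86400):
        self._ttl = ttl_seconds
        self._expiry = {}  # (host, port) -> time after which to re-probe

    def record_success(self, host, port):
        """Call when an origin answered 101 and spoke 2.0 happily."""
        self._expiry[(host, port)] = time.time() + self._ttl

    def speaks_2_0(self, host, port):
        """True if we may skip the Upgrade: dance for this origin."""
        deadline = self._expiry.get((host, port))
        return deadline is not None and time.time() < deadline
```

Entries expire so a downgraded or replaced server gets re-probed eventually, preserving the safe 1.1 fallback.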

The downside is all in the implementation code specifics. ie it is 
*our* problem to find easy implementation methods, not impacting the 
users.

Amos
Received on Wednesday, 22 August 2012 02:44:00 GMT
