Re: Optimizations vs Functionality vs Architecture

On Aug 21, 2012, at 4:36 PM, Poul-Henning Kamp wrote:

> In message <CAMm+LwjSVHzRQS3W4NLBQfe+Bmpk2c5ovuOtrNjOSx1EDBDG0g@mail.gmail.com>
> , Phillip Hallam-Baker writes:
> 
>> Unlike most protocol proposals, HTTP/2 is big enough to drive changes
>> in infrastructure. If HTTP/2 will go faster over a purple Internet
>> connection then pretty soon there will be purple connections being
>> installed. 
> 
> Provided, and this is a very big variable, that HTTP/2 brings enough
> benefits to pay for the purple internet.
> 
> IPv6 should have taught everybody that just because the IETF is
> in party mode doesn't mean that wallets fly out of pockets.

Yup. While I agree that everyone would prefer the web to be faster, I am not convinced that a lot of people are bothered enough by how slow the web is to go through upgrade pains.

> I would advocate a much more humble attitude, where we design
> and architect expecting few changes, but such that we can
> benefit from all we can cause to happen.
> 
>> It is pretty clear to me that we are going to have to support some
>> form of in-band upgrade.
> 
> This is actually one of the things users are asking for:  Upgrade
> from an insecure to a secure connection without a new TCP connection.

Really?  RFC 2817 has been around for over 12 years, and has seen very little use.
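
For reference, this is roughly what the RFC 2817 dance looks like from the client side; a minimal Python sketch, assuming a server at example.com that actually honors "Upgrade: TLS/1.0" (which, as noted, almost none do):

    # Client-side RFC 2817 upgrade sketch. OPTIONS * is the RFC's
    # side-effect-free way to request the upgrade.
    import socket, ssl

    host = "example.com"  # hypothetical server
    sock = socket.create_connection((host, 80))
    sock.sendall(b"OPTIONS * HTTP/1.1\r\n"
                 b"Host: example.com\r\n"
                 b"Upgrade: TLS/1.0\r\n"
                 b"Connection: Upgrade\r\n"
                 b"\r\n")

    reply = sock.recv(4096)
    if reply.startswith(b"HTTP/1.1 101"):
        # Server agreed: run the TLS handshake on the SAME TCP connection.
        ctx = ssl.create_default_context()
        tls = ctx.wrap_socket(sock, server_hostname=host)
        # ... continue speaking HTTP/1.1 over tls ...
    else:
        # The common case today: the server ignores the Upgrade header.
        pass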

> A closer analysis shows that it would be even better if secured
> and unsecured requests could share a connection (think intermediary
> to server for instance.)

I'm not in the TLS-all-the-time camp, but why would you want to mix secure and insecure content? How would the user know which parts of what he sees are secure and which are not?

> Such a mixed mode might even allow opportunistic negotiation of
> security, even before the user has pushed the "login" button.

I don't like opportunistic. Security is all about guarantees, and opportunistic encryption doesn't give you any. A statement that there's a 95% chance that you have 256-bit security is close to meaningless, because an attacker simply goes after the unprotected 5%; a statement that you definitely have 64-bit security eliminates most attackers.
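
To make the asymmetry concrete, here is an illustrative back-of-envelope (the probabilities are made-up numbers for the sake of the argument, not measurements):

    # A guaranteed security level is a work-factor bound that every
    # attacker has to pay; "probably encrypted" is not.
    guaranteed_bits = 64
    work_factor = 2 ** guaranteed_bits   # ~1.8e19 operations, minimum

    p_encrypted = 0.95                   # opportunistic: maybe 256-bit crypto...
    p_plaintext = 1 - p_encrypted        # ...but this slice costs an eavesdropper nothing

    print(f"guaranteed: every attack costs >= 2**{guaranteed_bits} = {work_factor:.2e} ops")
    print(f"opportunistic: {p_plaintext:.0%} of connections are free to read")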

>> But even if that turns out to be the fastest
>> choice in 2012 deciding to only do in-band upgrade means that we are
>> permanently locked into a sub-optimal solution in perpetuity.
> 
> No, that's not a given.
> 
> Nothing prevents us from designing a procedure which allows for
> both upgrades and downgrades, and leave it to protocol users to
> decide when they think which one has the best probability of succeeding.
> 
> We should do that.

We should, if it's possible. Suppose HTTP/2.0 looks much like the SPDY draft: how do you ever get a current HTTP/1 server to reply to it? The only way is to make HTTP/2, at least at the start of the connection, resemble HTTP/1. And then you are stuck with the in-band upgrade forever, just as Phillip said.
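
To illustrate, here is what such an in-band upgrade could look like, sketched with Python's standard library; the "HTTP/2.0" Upgrade token is hypothetical, since nothing defines one today:

    import http.client

    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("GET", "/", headers={
        "Upgrade": "HTTP/2.0",      # hypothetical token
        "Connection": "Upgrade",
    })
    resp = conn.getresponse()

    if resp.status == 101:
        # A future HTTP/2 server switches protocols on this same connection.
        pass
    else:
        # A current HTTP/1 server simply ignores the Upgrade header and
        # answers the GET normally -- which is exactly why the first bytes
        # on the wire have to be valid HTTP/1, forever.
        body = resp.read()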

>> * Can we move to an HTTP/3.0 in this scheme? If not, it's a non-starter.
> 
> Agreed, 100%.
> 
>> * 2016 Latency: Performance after 80% of the network has been upgraded
>> to support the new spec as fast as possible
> 
> Not agreed, see above.
> 
>> * 2020 Latency: Performance when the remaining legacy can be ignored
>> as far as latency issues are concerned, people using 2012 gear in 2020
>> are not going to be getting world class latency anyway.
> 
> Not agreed, I doubt HTTP/2.0 will have more than 40% market share
> by 2020.

It depends on how you measure market share. By sheer number of servers (counting all those web-configuration screens on home routers and toasters), yes. But we are very likely to have support in the big websites that people use a lot, so measured as a percentage of requests the number could be much higher.

Even now, Chrome and newer Firefox versions together have more than 50% browser share. Queries to Google services make up about 6% of web requests, so SPDY already has roughly a 3% share of requests. If we also add big sites like Facebook and common implementations like Apache, that percentage could go up really fast.
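
For the record, the 3% is just the product of those two rough inputs:

    # Back-of-envelope behind the ~3% figure; inputs are the rough
    # numbers quoted above, not measurements.
    spdy_browsers = 0.50      # Chrome + newer Firefox share of browsing
    google_requests = 0.06    # Google services' share of web requests
    print(f"~{spdy_browsers * google_requests:.0%} of requests")  # ~3%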

Yoav

Received on Tuesday, 21 August 2012 15:13:43 UTC