
Re: Optimizations vs Functionality vs Architecture

From: Phillip Hallam-Baker <hallam@gmail.com>
Date: Tue, 21 Aug 2012 11:36:07 -0400
Message-ID: <CAMm+Lwi=BRgiFXrhdjwjdEDxiVjMn7Ah+62fe0B85Kq3dtGp8w@mail.gmail.com>
To: Yoav Nir <ynir@checkpoint.com>
Cc: Poul-Henning Kamp <phk@phk.freebsd.dk>, "ietf-http-wg@w3.org" <ietf-http-wg@w3.org>
On Tue, Aug 21, 2012 at 11:13 AM, Yoav Nir <ynir@checkpoint.com> wrote:
>

>> A closer analysis shows that it would be even better if secured
>> and unsecured requests could share a connection (think intermediary
>> to server for instance.)
>
> I'm not in the TLS-all-the-time camp, but why would you want to mix secure and insecure content? How would the user know which parts of the screen they see are secure and which are not?

+1

The mixed content thing is a horror show.


>> Such a mixed mode might even allow opportunistic negotiation of
>> security, even before the user has pushed the "login" button.
>
> I don't like opportunistic. Security is all about guarantees. Opportunistic encryption doesn't give you any guarantees. A statement that there's a 95% chance that you have 256-bit security is meaningless, while a statement that you definitely have 64-bit security eliminates most attackers.

I disagree; there is value in increasing the proportion of encrypted
traffic so that encryption is no longer such a red flag for traffic
analysis.

Security is all about risk management, not risk elimination. Back in
1995 we all got that wrong. Some of us came to our senses earlier than
others, hence the stand-up row on the Concourse of RSA back in 2000 or
so.

Opportunistic encryption does actually raise the bar on attacks quite
substantially; in particular it increases costs and reduces scope.
A defense that stops 95% of the script kiddies but does not protect
against the RBN is actually quite useful in practice.

It is not a solution I would like to see for credit card transactions
or anything that matters, but training wheels for crypto have their
uses.
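The opportunistic mode being debated amounts to a client-side policy: take encryption when the peer offers it, and otherwise proceed in the clear rather than failing. A minimal sketch of that policy (the function name and return values are illustrative, not from any spec):

```python
def negotiate_security(server_offers_tls: bool, require_tls: bool) -> str:
    """Decide how to proceed once the server's capabilities are known.

    Opportunistic policy (require_tls=False): take TLS when offered,
    otherwise fall back to plaintext instead of aborting.
    Strict policy (require_tls=True): refuse to proceed without TLS.
    """
    if server_offers_tls:
        return "tls"
    if require_tls:
        raise ConnectionError("TLS required but not offered")
    return "plaintext"
```

The fallback branch is exactly what the two sides disagree about: it defeats a passive eavesdropper, but makes no guarantee against an active attacker who can strip the TLS offer.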



>>> * 2020 Latency: Performance when the remaining legacy can be ignored
>>> as far as latency issues are concerned, people using 2012 gear in 2020
>>> are not going to be getting world class latency anyway.
>>
>> Not agreed, I doubt HTTP/2.0 will have more than 40% market share
>> by 2020.
>
> Depends on how you measure market share. By sheer number of servers (counting all those web configuration screens for home routers and toasters) - yes. But we are very likely to have support in the big websites that people use a lot, so we could have a much higher number for percentage of requests.

+1

What really matters is the 'expected pain index'.

I think that if the IETF ever told network equipment vendors, with
clarity, what was expected of middleboxes, the vendors would be only
too happy to comply. Instead we have a nonsense situation where the
NAT vendors were at best ignored, at worst actively sabotaged (IPsec),
and left to make things work by trial and error.

Tell folk what to do to make things work and they are much more likely
to do it right.


> Even now, Chrome + newer Firefox have together >50% share. Queries to Google services make up 6% of the web, so SPDY already has a market share of 3%. If we also add big sites like Facebook and common implementations like Apache, that percentage could go up really fast.
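The arithmetic behind the quoted estimate: a request goes over SPDY only when both the browser and the server speak it, so (assuming the two are independent) the fractions multiply.

```python
browser_share = 0.50  # Chrome + newer Firefox combined, as quoted above
google_share = 0.06   # fraction of web requests going to Google services

# A request uses SPDY only if both client and server support it,
# so under independence the shares multiply: 0.50 * 0.06 = 0.03.
spdy_share = browser_share * google_share
```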

The question is what percentage of home routers, firewalls, etc.
would get in the way of a solution that allowed SRV records to be
used for signalling.
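For reference, SRV-based signalling would mean the client consults DNS records of this shape before connecting and picks a target by the RFC 2782 rule. A sketch (the record data is made up, and the weighted random tie-break among equal priorities is omitted):

```python
# Hypothetical SRV records for _http._tcp.example.com,
# as (priority, weight, port, target) tuples.
records = [
    (20, 40, 80, "www.example.com."),
    (10, 60, 443, "h2.example.com."),
]

def pick_srv(records):
    """Lowest priority wins (RFC 2782); ties would normally be
    broken by a weighted random choice, omitted here."""
    return min(records, key=lambda r: r[0])

target = pick_srv(records)
```

The selection logic is the easy part; the worry in this thread is whether middleboxes on the path pass such DNS lookups through at all.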

-- 
Website: http://hallambaker.com/
Received on Tuesday, 21 August 2012 15:36:34 GMT
