Re: Optimizations vs Functionality vs Architecture

On Tue, Aug 21, 2012 at 8:36 AM, Phillip Hallam-Baker <hallam@gmail.com> wrote:

> On Tue, Aug 21, 2012 at 11:13 AM, Yoav Nir <ynir@checkpoint.com> wrote:
> >
>
> >> A closer analysis shows that it would be even better if secured
> >> and unsecured requests could share a connection (think intermediary
> >> to server, for instance).
> >
> > I'm not in the TLS-all-the-time camp, but why would you want to mix
> > secure and insecure content? How would the user know which parts of
> > the screen he sees are secure and which are not?
>
> +1
>
> The mixed content thing is a horror show.
>
>
Perhaps when all the traffic comes from a single client, within a single
user-agent and a single application context, I agree... but when an
intermediary is handling requests from a broad diversity of user-agents
and relaying them to an origin server that legitimately offers both
secure and insecure content, the ability to pass secure and insecure
traffic within a single TCP connection [1] could realize some fairly
significant benefits.

[1] http://tools.ietf.org/html/draft-snell-httpbis-keynego-00
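
To make this concrete, here is a minimal sketch (Python; the frame
layout, flag name, and nine-byte header are invented for illustration
and are NOT the wire format proposed in [1]) of how a per-stream
security flag would let encrypted and cleartext streams share one TCP
connection:

    # Hypothetical framing -- invented to illustrate per-stream
    # security flags on a shared TCP connection.
    import struct

    FLAG_ENCRYPTED = 0x1

    def pack_frame(stream_id: int, payload: bytes, encrypted: bool) -> bytes:
        """Prefix a payload with stream id, length, and a security flag."""
        flags = FLAG_ENCRYPTED if encrypted else 0
        return struct.pack("!IIB", stream_id, len(payload), flags) + payload

    def unpack_frame(data: bytes):
        """Recover (stream_id, payload, encrypted) from one frame."""
        stream_id, length, flags = struct.unpack("!IIB", data[:9])
        return stream_id, data[9:9 + length], bool(flags & FLAG_ENCRYPTED)

    # An intermediary can interleave cleartext and encrypted streams
    # from many user-agents on the same connection:
    wire = pack_frame(1, b"GET /public HTTP/1.1\r\n", encrypted=False)
    wire += pack_frame(3, b"<ciphertext>", encrypted=True)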


>
> >> Such a mixed mode might even allow opportunistic negotiation of
> >> security, even before the user has pushed the "login" button.
> >
> > I don't like opportunistic. Security is all about guarantees.
> > Opportunistic encryption doesn't give you any guarantees. A statement
> > that there's a 95% chance that you have 256-bit security is
> > meaningless, while a statement that you definitely have 64-bit
> > security eliminates most attackers.
>
>
Opportunistic does not mean "no guarantee," but then again,
"opportunistic" is perhaps a poor word choice... I prefer "in session"
or "on demand" encryption: allowing the client and server to selectively
apply privacy protections as necessary without requiring the existing
TCP/IP connection to be re-established. Rules can be established,
similar to the Same Origin Policy, that define how various user agents
are to best make use of those mechanisms in specific contexts --
including when NOT to use mixed traffic -- but the protocol itself does
not suffer from the mixing of plaintext and encrypted content.
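
A minimal sketch of what "on demand" encryption could look like
(Python; the policy table, Connection class, and upgrade mechanism are
all hypothetical, invented purely to illustrate applying protection per
request over a live connection):

    # Hypothetical "on demand" encryption: the TCP connection stays up,
    # and protection is applied per request according to a policy table
    # (analogous in spirit to the Same Origin Policy).
    POLICY = {
        "/login": "encrypted",    # credentials: always protect
        "/static": "cleartext",   # public assets: no need
    }

    class Connection:
        def __init__(self):
            self.keys_negotiated = False

        def negotiate_keys(self):
            # In-session key negotiation over the existing connection;
            # no teardown, no reconnect.
            self.keys_negotiated = True

        def send(self, path: str, body: bytes):
            if POLICY.get(path, "cleartext") == "encrypted":
                if not self.keys_negotiated:
                    self.negotiate_keys()
                body = b"<ciphertext>"  # placeholder for real crypto
            print(path, body)           # placeholder for the actual send

    conn = Connection()
    conn.send("/static", b"logo bytes")    # goes in the clear
    conn.send("/login", b"user=a&pass=b")  # triggers key negotiation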

- James


> I disagree; there is value in increasing the proportion of encrypted
> traffic so that encryption is no longer such a red flag for traffic
> analysis.
>
> Security is all about risk management, not risk elimination. Back in
> 1995 we all got that wrong. Some of us came to our senses earlier than
> others, hence the stand-up row on the Concourse of RSA back in 2000 or
> so.
>
> Opportunistic encryption does actually raise the bar on attacks quite
> substantially; in particular, it increases costs and reduces scope.
> A defense that will stop 95% of the script kiddies but not protect
> against the RBN is actually quite useful in practice.
>
> It is not a solution I would like to see for credit card transactions
> or anything that matters, but training wheels for crypto have their
> uses.
>
>
>
> >>> * 2020 Latency: Performance when the remaining legacy can be
> >>> ignored. As far as latency issues are concerned, people using 2012
> >>> gear in 2020 are not going to be getting world-class latency anyway.
> >>
> >> Not agreed, I doubt HTTP/2.0 will have more than 40% market share
> >> by 2020.
> >
> > Depends on how you measure market share. By sheer number of servers
> > (counting all those web configuration screens for home routers and
> > toasters) - yes. But we are very likely to have support in the big
> > websites that people use a lot, so we could have a much higher
> > percentage of requests.
>
> +1
>
> What really matters is the 'expected pain index'.
>
> I think that if the IETF ever told network equipment vendors with
> clarity what was expected of middleboxes, the vendors would be only
> too happy to comply. Instead we have a nonsense situation where the
> NAT vendors were at best ignored, at worst actively sabotaged (IPsec),
> and left to make things work by trial and error.
>
> Tell folk what to do to make things work and they are much more likely
> to do it right.
>
>
> > Even now, Chrome + newer Firefox together have >50% share. Queries to
> > Google services make up 6% of the web, so SPDY already has a market
> > share of about 3% (roughly half of those Google queries come from
> > SPDY-capable browsers). If we also add big sites like Facebook and
> > common implementations like Apache, that percentage could go up
> > really fast.
>
> The question is what percentage of home routers, firewalls etc. would
> get in the way of a solution that allowed SRV to be used for
> signalling.
>
> --
> Website: http://hallambaker.com/
>
>

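On the SRV question above: the lookup itself is cheap and simple; the
open issue is whether middleboxes let the answer through. A minimal
sketch, assuming the third-party dnspython package and an invented
"_http2._tcp" service label (no such label is standardized):

    # Sketch of SRV-based signalling: ask DNS whether the origin
    # advertises an HTTP/2.0 endpoint before connecting.
    import dns.resolver

    def discover(host):
        try:
            answers = dns.resolver.resolve("_http2._tcp." + host, "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None  # no SRV record: fall back to HTTP/1.1
        best = min(answers, key=lambda r: r.priority)
        return str(best.target).rstrip("."), best.port
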
Received on Tuesday, 21 August 2012 22:57:51 UTC