
Re: [apps-discuss] [OAUTH-WG] [http-state] HTTP MAC Authentication Scheme

From: Breno de Medeiros <breno@google.com>
Date: Wed, 8 Jun 2011 15:26:03 -0700
Message-ID: <BANLkTi=98GodWuNCfU9bKZ389B7QG3ow+OjJHH9zCKF8tn8TDA@mail.gmail.com>
To: Nico Williams <nico@cryptonector.com>
Cc: Tim <tim-projects@sentinelchicken.org>, OAuth WG <oauth@ietf.org>, HTTP Working Group <ietf-http-wg@w3.org>, "apps-discuss@ietf.org" <apps-discuss@ietf.org>, "http-state@ietf.org" <http-state@ietf.org>
On Tue, Jun 7, 2011 at 17:07, Nico Williams <nico@cryptonector.com> wrote:
> On Tue, Jun 7, 2011 at 6:41 PM, Tim <tim-projects@sentinelchicken.org> wrote:
>> I have to agree with Nico here.  In almost all cases I assert that, on
>> typical modern networks:
>>
>>  let P = difficulty of passive attack
>>  let M = difficulty of active (man-in-the-middle) attack
>>
>> O(P) = O(M)
>>
>> This isn't to say an active attack is just as easy in the "real
>> world", but the difficulty is within a constant factor.  If someone
>> has published a tool that conducts MitM attacks against the specific
>> protocol you're dealing with, the difference in difficulty clearly
>> becomes marginal.  Consider the complexity of the attacks implemented
>> by sslstrip, and yet the relative ease with which you can use it to
>> MitM all SSL connections.
>
> Exactly, and very well put.
>
> Active attacks sound harder, and they do actually require more work,
> but in many cases that work can be automated, and once automated there
> can be no difference in effort required to mount an active attack
> versus a passive one.
>
> Do we suppose that this proposal can get past secdir, IESG, and IETF
> reviews as-is?  I doubt it.
>
> Here's another issue: some of you are saying that an application using
> this extension will be using TLS for some things but not others, which
> presumes a TLS session.  Does using TLS _with_ session resumption
> _and_ HTTP/1.1 pipelining for all requests really cost that much more
> in latency and compute (and electric) power than the proposed
> alternative?  I seriously doubt it, and I'd like to see some real
> analysis showing that I'm wrong before I'd accept such a rationale for
> this sort of proposal.

Google has performed a detailed analysis of SSL performance after
several optimizations, and we have concluded that the answer is 'no
significant overhead', as you suggest. Indeed, under some workloads it
may actually be cheaper to serve SSL traffic, because avoiding bad
proxies reduces network latency. We have published some results here:
http://www.imperialviolet.org/2010/06/25/overclocking-ssl.html
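As a back-of-the-envelope sketch of the handshake arithmetic in question (the round-trip time and TLS 1.2-era handshake costs below are assumptions for illustration, not figures from the linked analysis):

```python
# Rough latency model. Assumptions: 50 ms round-trip time, a full TLS
# handshake costs 2 extra round trips, an abbreviated (session-resumption)
# handshake costs 1, and HTTP/1.1 pipelining lets all requests share one
# connection at roughly 1 RTT each.

RTT_MS = 50  # assumed round-trip time in milliseconds


def total_latency_ms(requests, resume=True, pipeline=True):
    """Estimate total latency for a batch of HTTPS requests."""
    handshake_rtts = 1 if resume else 2
    if pipeline:
        # One connection: TCP setup (1 RTT) + TLS handshake,
        # then every pipelined request costs about 1 RTT.
        return (1 + handshake_rtts + requests) * RTT_MS
    # One connection per request: TCP + full TLS + request each time.
    return requests * (1 + handshake_rtts + 1) * RTT_MS


naive = total_latency_ms(10, resume=False, pipeline=False)
tuned = total_latency_ms(10, resume=True, pipeline=True)
print(naive, tuned)  # the tuned path amortizes setup across all requests
```

Under these (assumed) numbers, resumption plus pipelining cuts ten requests from 2000 ms to 600 ms of connection latency, which is consistent with the "no significant overhead" conclusion above.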

>
> Or perhaps the motivation relates to accidental leakage of "secure"
> cookies in non-secure contexts.  But why not just fix the clients in
> that case?
>
> Nico



-- 
--Breno
Received on Wednesday, 8 June 2011 22:26:38 GMT
