Re: something I don't get about the current plan...

------ Original Message ------
From: "Mike Belshe" <mike@belshe.com>
To: "Adrien de Croy" <adrien@qbik.com>
Cc: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>; "ietf-http-wg@w3.org 
Group" <ietf-http-wg@w3.org>
Sent: 18/11/2013 12:07:15 p.m.
Subject: Re: something I don't get about the current plan...
>
>
>
>On Sun, Nov 17, 2013 at 2:56 PM, Adrien de Croy <adrien@qbik.com> 
>wrote:
>>
>>how confident are we that the infrastructure can even handle everyone 
>>having a cert?
>
>100% certain.
I wish I shared your certainty there.

>
>
>>
>>what happens when some script kiddie with a bot net decides to DoS 
>>ocsp.verisign.com?
>
>The same thing that happens when verisign goes down for a few 
>minutes...  Browsers give the green light to users (with a few 
>exceptions) because it would be silly to take down the net just because 
>verisign is down.
Browsers may make that decision, but I've seen problems with agents 
barfing because CRL or OCSP servers are unavailable, and so to them the 
site is effectively down.

So this effectively creates a new central point of failure, which could 
affect millions of sites.

How long will a browser continue to show the site properly while the 
OCSP server is unavailable?  Surely not forever, so if the DoS went on 
and was effective for a long time, the problems would only grow.
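
For what it's worth, here is roughly what client-side soft-fail
revocation checking looks like - a rough sketch in Python using the
pyca/cryptography library, where the certificate, issuer and responder
URL are stand-ins rather than anything from a real deployment, and no
claim about how any particular browser actually implements it.  The
point is just that a timeout or unreachable responder is silently
treated as "not revoked":

import urllib.request
from urllib.error import URLError

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp


def is_revoked(cert: x509.Certificate, issuer: x509.Certificate,
               responder_url: str, timeout: float = 3.0) -> bool:
    """Return True only if the responder positively says REVOKED."""
    builder = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA1())
    request_der = builder.build().public_bytes(serialization.Encoding.DER)
    http_req = urllib.request.Request(
        responder_url, data=request_der,
        headers={"Content-Type": "application/ocsp-request"})
    try:
        with urllib.request.urlopen(http_req, timeout=timeout) as resp:
            ocsp_resp = ocsp.load_der_ocsp_response(resp.read())
    except (URLError, OSError):
        # Responder unreachable or timed out (e.g. DoS'd): soft-fail,
        # i.e. carry on as if the cert were fine.
        return False
    if ocsp_resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
        # Malformed or unauthorized responses are also ignored.
        return False
    return ocsp_resp.certificate_status == ocsp.OCSPCertStatus.REVOKED

A hard-fail agent would treat that except branch (and anything other
than a definite GOOD) as the site being down, which is the behaviour
I keep running into.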

I think we'd need to fix this, but if we are to allow revocation of 
certs (which I think is necessary), then there will always be increased 
infrastructure load as more certs are deployed and the sites using them 
get hit.

Adrien

>
>http://blog.spiderlabs.com/2011/04/certificate-revocation-behavior-in-modern-browsers.html
>
>Of course, we need to fix this too, even for our existing web.  But the 
>fact that TLS isn't a panacea doesn't mean that it is useless either.
>
>mike
>
>
>
>>
>>I have enough trouble with users complaining about accessing the ocsp 
>>server for the cert we have already.
>>
>>
>>
>>
>>
>>------ Original Message ------
>>From: "Mike Belshe" <mike@belshe.com>
>>To: "Stephen Farrell" <stephen.farrell@cs.tcd.ie>
>>Cc: "ietf-http-wg@w3.org Group" <ietf-http-wg@w3.org>
>>Sent: 18/11/2013 11:12:00 a.m.
>>Subject: Re: something I don't get about the current plan...
>>>
>>>
>>>
>>>On Sun, Nov 17, 2013 at 9:06 AM, Stephen Farrell 
>>><stephen.farrell@cs.tcd.ie> wrote:
>>>>
>>>>
>>>>On 11/17/2013 04:53 PM, Mike Belshe wrote:
>>>> > OK - I see.
>>>> >
>>>> > I think you're mixing current stats (only 30% of sites today have
>>>> > certs - seems high?) with incompatibilities - 100% of sites can get
>>>> > certs today if they want them.  So HTTP/2 requiring certs would not
>>>> > be introducing any technical incompatibility (like running on port
>>>> > 100 would).
>>>>
>>>>But 100% of firewalls could open port 100 too.
>>>
>>>We measured this inside Google - it's not 100% - but it was pretty 
>>>good.  Maybe WillChan has those numbers.
>>>
>>>
>>>>
>>>>And saying 100% of sites could get certs ignores the reality
>>>>that they do not and nobody so far seems to have a plan to
>>>>increase the 30%.
>>>>
>>>
>>>I'm not understanding why they can't get certs?
>>>
>>>Do you mean they really can't, or that they just don't want to or 
>>>believe it's too painful?
>>>
>>>I agree that the tooling is painful today for TLS.  But I think it 
>>>will get better if we use more TLS in HTTP/2.
>>>
>>>As an example - have you tried being an Apple developer?  They make 
>>>you do all this stuff (get a cert issued, keep it current, etc) to 
>>>ship a product.  They don't allow random apps without them.  I think 
>>>a reasonable metaphor can be drawn between a website operator and an 
>>>Apple app developer - both are producing content for a large network 
>>>of consumers.  Consumers have a reasonable expectation that the 
>>>content provider has been authenticated in some way, even if not 
>>>perfect...
>>>
>>>There are a million apps in the app store, and every one of them had 
>>>to go get a cert and keep it up to date.  Why is it harder for the 
>>>top 1 million websites to do this?
>>>
>>>Mike
>>>
>>>
>>>
>>>>S.
>>>>
>>>>
>>>> >
>>>> > Mike
>>>> >
>>>> >
>>>> > On Sun, Nov 17, 2013 at 8:40 AM, Stephen Farrell
>>>> > <stephen.farrell@cs.tcd.ie>wrote:
>>>> >
>>>> >>
>>>> >>
>>>> >> On 11/17/2013 04:36 PM, Mike Belshe wrote:
>>>> >>> I'm not 100% sure I read your question right, but I think I get it.
>>>> >>>
>>>> >>> The difference is between what breaks the server, what breaks in
>>>> >>> the client, and what breaks in the middleware.  The middleware is
>>>> >>> the nasty stuff that blocks us worst; the two parties that are
>>>> >>> trying to communicate (e.g. the client and server) can't fix it.
>>>> >>>
>>>> >>> So, the 10% failure rate from running non-HTTP/1.1 over port 80 or
>>>> >>> by running on port 100 would be because you set up your server
>>>> >>> properly and the *client* can't connect to you because the
>>>> >>> middleware is broken.
>>>> >>>
>>>> >>> But ~100% of clients can currently connect over port 443, navigate
>>>> >>> the middleware, negotiate HTTP/2, and work just fine.
>>>> >>
>>>> >> But that last isn't true, is it, if only 30% of sites have certs
>>>> >> that chain up to a browser-trusted root, as implied by the
>>>> >> reference site. Hence my question.
>>>> >>
>>>> >> S.
>>>> >>
>>>> >>>
>>>> >>> Mike
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Sun, Nov 17, 2013 at 8:09 AM, Stephen Farrell
>>>> >>> <stephen.farrell@cs.tcd.ie>wrote:
>>>> >>>
>>>> >>>>
>>>> >>>> So the current plan is for server-authenticated https
>>>> >>>> everywhere on the public web. If that works, great. But
>>>> >>>> I've a serious doubt.
>>>> >>>>
>>>> >>>> 30% of sites use TLS that chains up to a browser-trusted
>>>> >>>> root (says [1]). This plan has nothing whatsoever to say
>>>> >>>> (so far) about how that will get to anything higher.
>>>> >>>>
>>>> >>>> Other aspects of HTTP/2.0 appear to require reaching a
>>>> >>>> "99.9% ok" level before being acceptable, e.g. the port
>>>> >>>> 80 vs not-80 discussion.
>>>> >>>>
>>>> >>>> That represents a clear inconsistency in the arguments for
>>>> >>>> the current plan. If it's not feasible to run on e.g. port
>>>> >>>> 100 because of a 10% failure rate, then how is it feasible
>>>> >>>> to assume that 60% of sites will do X (for any X, including
>>>> >>>> "get a cert"), to get to the same 90% figure which is
>>>> >>>> apparently unacceptable, when there's no plan for more-X
>>>> >>>> and there's reason to think getting more web sites to do
>>>> >>>> this will in fact be very hard at best?
>>>> >>>>
>>>> >>>> I just don't get that, and the fact that the same people are
>>>> >>>> making both arguments seems troubling; what am I missing
>>>> >>>> there?
>>>> >>>>
>>>> >>>> I would love to see a credible answer to this, because I'd
>>>> >>>> love to see the set of sites doing TLS server-auth "properly"
>>>> >>>> be much higher, but I have not seen anything whatsoever about
>>>> >>>> how that might happen so far.
>>>> >>>>
>>>> >>>> And devices that are not traditional web sites represent a
>>>> >>>> maybe even more difficult subset of this problem. Yet the
>>>> >>>> answer for the only such example raised (printers, a real
>>>> >>>> example) was "use http/1.1" which seems to me to be a bad
>>>> >>>> answer, if HTTP/2.0 is really going to succeed HTTP/1.1.
>>>> >>>>
>>>> >>>> Ta,
>>>> >>>> S.
>>>> >>>>
>>>> >>>> PS: In case it's not clear, if there were a credible way to
>>>> >>>> get that 30% to 90%+ and address devices, I'd be delighted.
>>>> >>>>
>>>> >>>> PPS: As I said before, my preference is for option A in
>>>> >>>> Mark's set - use opportunistic encryption for http:// URIs
>>>> >>>> in HTTP/2.0. So if this issue were a fatal flaw, then I'd
>>>> >>>> be arguing we should go to option A and figure out how to
>>>> >>>> handle mixed-content for that.
>>>> >>>>
>>>> >>>> [1] http://w3techs.com/technologies/overview/ssl_certificate/all
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>
>>>> >
>>>
>
