Re: something I don't get about the current plan...

On Sun, Nov 17, 2013 at 9:06 AM, Stephen Farrell
<stephen.farrell@cs.tcd.ie> wrote:

>
>
> On 11/17/2013 04:53 PM, Mike Belshe wrote:
> > OK - I see.
> >
> > I think you're mixing current stats (only 30% of sites today have certs -
> > seems high?) with incompatibilities - 100% of sites can get certs today if
> > they want them.  So HTTP/2 requiring certs would not be introducing any
> > technical incompatibility (like running on port 100 would).
>
> But 100% of firewalls could open port 100 too.
>

We measured this inside Google - it's not 100% - but it was pretty good.
Maybe WillChan has those numbers.
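
For what it's worth, that kind of check is easy to reproduce.  Here's a rough
Python sketch (my own illustration, not the code we used, and the host list is
made up) that probes whether outbound TCP connections on a given port get
through from the local network:

    import socket

    HOSTS = ["www.google.com", "www.example.com"]  # hypothetical sample

    def can_connect(host, port, timeout=5.0):
        # True if a plain TCP connection to host:port succeeds.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port in (80, 443, 100):
        ok = sum(can_connect(h, port) for h in HOSTS)
        print("port %d: %d/%d hosts reachable" % (port, ok, len(HOSTS)))

Run something like that from a few different networks (home, corporate, hotel
wifi) and you get a crude picture of how often middleware blocks a port.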



>
> And saying 100% of sites could get certs ignores the reality
> that they do not and nobody so far seems to have a plan to
> increase the 30%.
>
>
I don't understand why they can't get certs.

Do you mean they really can't, or that they just don't want to, or that they
believe it's too painful?

I agree that the tooling for TLS is painful today.  But I think it will get
better if we use more TLS in HTTP/2.
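
To be concrete about the bar being discussed (a cert that chains to a trusted
root), here's a minimal sketch, using Python's default trust store as a
stand-in for a browser's - my illustration only, not anyone's measurement code:

    import socket
    import ssl

    def has_trusted_cert(host, port=443, timeout=5.0):
        # True if the TLS handshake succeeds with chain and hostname
        # verification against the local trust store.
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True
        except (ssl.CertificateError, OSError):
            return False

    print(has_trusted_cert("www.example.com"))  # hypothetical host

That's roughly the check behind the 30% figure: not "can the site get a cert,"
but "does the site already present one that validates."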

As an example - have you tried being an Apple developer?  They make you do
all this stuff (get a cert issued, keep it current, etc.) to ship a product.
They don't allow random apps without them.  I think a reasonable analogy
can be drawn between a website operator and an Apple app developer - both
are producing content for a large network of consumers.  Consumers have a
reasonable expectation that the content provider has been authenticated in
some way, even if it's not perfect...

There are a million apps in the App Store, and every one of them had to go
get a cert and keep it up to date.  Why is it harder for the top one million
websites to do this?

Mike




> S.
>
>
> >
> > Mike
> >
> >
> > On Sun, Nov 17, 2013 at 8:40 AM, Stephen Farrell
> > <stephen.farrell@cs.tcd.ie> wrote:
> >
> >>
> >>
> >> On 11/17/2013 04:36 PM, Mike Belshe wrote:
> >>> I'm not 100% sure I read your question right, but I think I get it.
> >>>
> >>> The difference is between what breaks in the server, what breaks in the
> >>> client, and what breaks in the middleware.  The middleware is the nasty
> >>> stuff that blocks us worst: the two parties that are trying to
> >>> communicate (e.g. the client and server) can't fix it.
> >>>
> >>> So, the 10% failure rate from running non-HTTP/1.1 over port 80 or from
> >>> running on port 100 would be because you set up your server properly and
> >>> the *client* can't connect to you because the middleware is broken.
> >>>
> >>> But ~100% of clients can currently connect over port 443, navigate the
> >>> middleware, negotiate HTTP/2, and work just fine.
> >>
> >> But that last point isn't true, is it, if only 30% of sites have certs
> >> that chain up to a browser-trusted root, as implied by the
> >> reference site.  Hence my question.
> >>
> >> S.
> >>
> >>>
> >>> Mike
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Sun, Nov 17, 2013 at 8:09 AM, Stephen Farrell
> >>> <stephen.farrell@cs.tcd.ie> wrote:
> >>>
> >>>>
> >>>> So the current plan is for server-authenticated https
> >>>> everywhere on the public web. If that works, great. But
> >>>> I've a serious doubt.
> >>>>
> >>>> 30% of sites use TLS that chains up to a browser-trusted
> >>>> root (says [1]). This plan has nothing whatsoever to say
> >>>> (so far) about how that will get to anything higher.
> >>>>
> >>>> Other aspects of HTTP/2.0 appear to require reaching a
> >>>> "99.9% ok" level before being acceptable, e.g. the port
> >>>> 80 vs not-80 discussion.
> >>>>
> >>>> That represents a clear inconsistency in the arguments for
> >>>> the current plan. If it's not feasible to run on e.g. port
> >>>> 100 because of a 10% failure rate, then how is it feasible
> >>>> to assume that 60% of sites will do X (for any X, including
> >>>> "get a cert") to get to the same 90% figure, which is
> >>>> apparently unacceptable, when there's no plan for more-X
> >>>> and there's reason to think getting more web sites to do
> >>>> this will in fact be very hard at best?
> >>>>
> >>>> I just don't get that, and the fact that the same people are
> >>>> making both arguments seems troubling. What am I missing
> >>>> there?
> >>>>
> >>>> I would love to see a credible answer to this, because I'd
> >>>> love to see the set of sites doing TLS server-auth "properly"
> >>>> be much higher, but I have not seen anything whatsoever about
> >>>> how that might happen so far.
> >>>>
> >>>> And devices that are not traditional web sites represent a
> >>>> perhaps even more difficult subset of this problem. Yet the
> >>>> answer for the only such example raised (printers, a real
> >>>> example) was "use http/1.1", which seems to me to be a bad
> >>>> answer if HTTP/2.0 is really going to succeed HTTP/1.1.
> >>>>
> >>>> Ta,
> >>>> S.
> >>>>
> >>>> PS: In case it's not clear, if there were a credible way to
> >>>> get that 30% to 90%+ and address devices, I'd be delighted.
> >>>>
> >>>> PPS: As I said before, my preference is for option A in
> >>>> Mark's set - use opportunistic encryption for http:// URIs
> >>>> in HTTP/2.0. So if this issue were a fatal flaw, then I'd
> >>>> be arguing we should go to option A and figure out how to
> >>>> handle mixed-content for that.
> >>>>
> >>>> [1] http://w3techs.com/technologies/overview/ssl_certificate/all
> >>>>
> >>>>
> >>>
> >>
> >
>

Received on Sunday, 17 November 2013 22:12:29 UTC