Re: something I don't get about the current plan...

OK - I see.

I think you're mixing current stats (only 30% of sites today have certs -
seems high?) with incompatibilities: 100% of sites can get certs today if
they want them. So HTTP/2 requiring certs would not introduce any
technical incompatibility (the way running on port 100 would).
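
To make the "no technical incompatibility" point concrete: with TLS on
port 443, the protocol choice is carried end-to-end inside the handshake,
so broken middleware never even sees it. Here's a minimal sketch in Go
(example.com is a stand-in host, and I'm assuming ALPN, the negotiation
mechanism the HTTP/2 drafts point at - treat it as an illustration, not
a spec):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conf := &tls.Config{
            // Offer HTTP/2 first, with HTTP/1.1 as the fallback.
            NextProtos: []string{"h2", "http/1.1"},
        }
        // example.com:443 is a placeholder; any server with a
        // browser-trusted cert will do.
        conn, err := tls.Dial("tcp", "example.com:443", conf)
        if err != nil {
            fmt.Println("handshake failed:", err)
            return
        }
        defer conn.Close()
        // Whatever got selected was agreed inside the encrypted
        // handshake; intermediaries on the path had no say in it.
        fmt.Println("negotiated:", conn.ConnectionState().NegotiatedProtocol)
    }

If that prints "h2", the two endpoints are done; contrast a cleartext
port, where any intercepting proxy on the path gets a veto.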

Mike


On Sun, Nov 17, 2013 at 8:40 AM, Stephen Farrell
<stephen.farrell@cs.tcd.ie> wrote:

>
>
> On 11/17/2013 04:36 PM, Mike Belshe wrote:
> > I'm not 100% sure I read your question right, but I think I get it.
> >
> > The difference is between what breaks the server, what breaks in the
> > client, and what breaks in the middleware. The middleware is the nasty
> > stuff that blocks us the worst; the two parties that are trying to
> > communicate (i.e. the client and server) can't fix it.
> >
> > So the 10% failure rate from running non-HTTP/1.1 over port 80, or from
> > running on port 100, would occur because you set up your server properly
> > and the *client* still can't connect to you, because the middleware is
> > broken.
> >
> > But ~100% of clients can currently connect over port 443, navigate the
> > middleware, negotiate HTTP/2, and work just fine.
>
> But that last point isn't true, is it, if only 30% of sites have certs
> that chain up to a browser-trusted root, as the referenced site implies.
> Hence my question.
>
> S.
>
> >
> > Mike
> >
> > On Sun, Nov 17, 2013 at 8:09 AM, Stephen Farrell
> > <stephen.farrell@cs.tcd.ie> wrote:
> >
> >>
> >> So the current plan is for server-authenticated https
> >> everywhere on the public web. If that works, great. But
> >> I've a serious doubt.
> >>
> >> 30% of sites use TLS that chains up to a browser-trusted
> >> root (says [1]). This plan has nothing whatsoever to say
> >> (so far) about how that will get to anything higher.
> >>
> >> Other aspects of HTTP/2.0 appear to require reaching a
> >> "99.9% ok" level before being acceptable, e.g. the port
> >> 80 vs not-80 discussion.
> >>
> >> That represents a clear inconsistency in the arguments for
> >> the current plan. If it's not feasible to run on e.g. port
> >> 100 because of a 10% failure rate, then how is it feasible
> >> to assume that a further 60% of sites will do X (for any X,
> >> including "get a cert") just to reach the same 90% figure
> >> that is apparently unacceptable, when there's no plan for
> >> getting more sites to do X and there's reason to think that
> >> will in fact be very hard at best?
> >>
> >> I just don't get that, and the fact that the same people are
> >> making both arguments seems troubling. What am I missing
> >> there?
> >>
> >> I would love to see a credible answer to this, because I'd
> >> love to see the set of sites doing TLS server-auth "properly"
> >> be much higher, but I have not seen anything whatsoever about
> >> how that might happen so far.
> >>
> >> And devices that are not traditional web sites represent a
> >> perhaps even more difficult subset of this problem. Yet the
> >> answer for the only such example raised (printers, a real
> >> example) was "use http/1.1", which seems to me a bad
> >> answer if HTTP/2.0 is really going to succeed HTTP/1.1.
> >>
> >> Ta,
> >> S.
> >>
> >> PS: In case it's not clear, if there were a credible way to
> >> get that 30% to 90%+ and address devices, I'd be delighted.
> >>
> >> PPS: As I said before, my preference is for option A in
> >> Mark's set - use opportunistic encryption for http:// URIs
> >> in HTTP/2.0. So if this issue were a fatal flaw, then I'd
> >> be arguing that we should go to option A and figure out how
> >> to handle mixed content for that.
> >>
> >> [1] http://w3techs.com/technologies/overview/ssl_certificate/all
> >>
> >>
> >
>

Received on Sunday, 17 November 2013 16:54:18 UTC