Re: 2 questions

Glen wrote:
> 
> 1. What were the reasons for HTTP/2 not requiring TLS?
> 

The nutshell answer to Glen's original question is that consensus wasn't
reached in this WG on requiring TLS in HTTP/2.

>
> Is there a significant performance consideration, is it related to
> the cost of certificates (which is now fairly low or even free), or
> are there other technical reasons?
> 

This entire thread is a discussion of why consensus wasn't reached,
within the framing of your original post, but that's still the answer to
your original question. What follows from me on the topic is outside
that framework, or I wouldn't post; sometimes the technical and the
social/political overlap, and that's where we are with this.

>
> It would be nice if the web was just "secure by default", and I would
> have thought that now would be the right time to move in that
> direction.
> 

My personal opinion, as a content publisher, is that I don't want to
make promises I can't keep, like "securing" or even privatizing your
visit to my website when I don't even know who you are or where you come
from, especially if my URLs leak your private info in REFERER...

My analogy is a speakeasy. If you don't want folks knowing you come in
and out the front door, you'd better have a password for the back door
(the Copacabana shot in "Goodfellas" comes to mind), in which case I need
to know who you are. Ubiquitous HTTPS requires me to give everyone that
password, so philosophically speaking, what's the point of requiring it?

I'm +1 on the OpSec draft because of this. I don't care how much you
tip, you're not coming in my back door unless I know who you are, in
which case you'll be taken care of to the best of my ability. That
includes not leaking your personal data to advertisers via URLs in the
REFERER header, a prominent Web problem that neither ubiquitous HTTPS
nor even OpSec begins to solve.
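
To make that leak concrete (hostnames and paths here are hypothetical,
purely illustrative): if a page at a sensitive URL embeds a third-party
ad or tracking pixel served over the same scheme, the visitor's browser
typically announces the page's full URL to the third party in the
Referer header of the request it makes for that pixel:

    GET /pixel.gif HTTP/1.1
    Host: ads.example.net
    Referer: https://clinic.example.org/records?patient=12345&condition=diabetes

Everything sensitive is in the URL carried by the Referer header itself;
encrypting the transport doesn't stop ads.example.net from reading it
once the request arrives.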

Ultimately, I don't believe in a "secure by default" vision of the Web.
What I'd like is to be able to verify to my site visitors that the
content they're viewing hasn't been altered, by ad-injection or
whatever; if that's affecting a login page, I don't want them to log in
from that page. Whether it's OpSec or ubiquitous HTTPS, my inability as
a publisher to detect content alteration is my problem, and I don't see
any solution proposed for it, so my +1 to OpSec is the "lesser of two
evils".

IMHO, the closest we've come in HTTP was the now-deprecated Content-MD5
header. It didn't play nice with Range requests. In retrospect, instead
of deprecating it, we probably should have incorporated some means of
disabling range requests when Content-MD5 was in play, and made the
digest algorithm registerable through IANA, so that by now we'd have a
Content-Sha256 header or some such.
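
A rough sketch of what I have in mind (the Content-Sha256 header name is
hypothetical, nothing like it is registered; Accept-Ranges: none is just
the existing way a server declines range requests):

    HTTP/1.1 200 OK
    Content-Type: text/html
    Accept-Ranges: none
    Content-Sha256: <hex SHA-256 of the full response body>

The digest covers the complete representation, so the server also
declines range requests; a client that finds the body doesn't match the
digest knows something between us has altered the content.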

Just give me something where, from the server side, I can warn my
valued site visitors that something fishy is afoot between us, instead
of promising them everything will be fine by virtue of having
browser-vendor-and-Google-approved HTTPS URLs which don't address the
main thing I could be doing wrong... But I guess I'm exonerated if
everyone else buys into this flawed system, so: What, me worry?

>
> Also, at least 2 of the major browser vendors have said that they
> won't be supporting HTTP/2 without TLS, so surely no one is going to
> want to run their website without it?
> 

More onerous is Google tying HTTPS usage to "PageRank" for no reason
that makes any sense, other than throwing their weight around. It does
nothing for the hugely prevalent problem of poor URI allocation-scheme
design, which makes me wonder whether ubiquitous HTTPS isn't counter-
productive to actually solving a privacy problem it only purports to
band-aid. The larger issue to me is all the sites HTTPS doesn't "fix",
because from where I sit, those are the majority of the sites we access
that can actually hurt us; IOW, not browsing the local newspaper.

Did I make a medical reference? It's because HTTPS didn't stop the
personal health information of every U.S. citizen who entered it into
healthcare.gov from being transmitted to every partner corporation
involved in that website. We're supposed to trust that this data won't
be used because that would be against the law, but wouldn't it have
been better if the Identification of Resources constraint got the same
sort of press the EFF has drummed up for ubiquitous HTTPS, without,
IMO, quite understanding the problem?

My painful analogy is that ubiquitous HTTPS only even tries to solve
the problem of keeping the horse from bolting through the open barn
doors. Wouldn't it be better to approach the problem from the
perspective of closing the barn doors, if the intention is to keep the
horses inside?

;-)

Sorry for the tl;dr to arrive at that point, but "it is what it is".

-Eric

Received on Tuesday, 31 March 2015 09:38:12 UTC