Re: 2 questions

On Fri, Apr 10, 2015 at 11:53:32AM +0200, Glen wrote:
> (sending again as a subscriber, as I think this message went unnoticed)
> 
> Thanks for the replies.
> 
> 1. As far as I understand it (which is not very far), opportunistic
> encryption is neither "by default" (since it requires extra server-side
> configuration) nor secure (no MITM protection, etc.)

Well, security is relative. Opportunistic encryption does not stop an active
MITM, but it does raise the cost of passive, wide-scale interception.
 
> I'm okay with HTTP/2 without TLS, however (my opinion):
> 
> a) User agents MUST show a security warning before you submit data over HTTP
> (you could have a "remember this choice" option per-user and per-domain). As
> far as I know, this is not currently implemented in any browsers (I think if
> you submit to an HTTP domain from an HTTPS one, you may receive a warning).
> The main point is, it's more important that users know that they're on an
> INSECURE domain, than it is that they are on a SECURE one (by then it's too
> late).

In the distant past, web browsers did have submit-over-HTTP warnings. Those
were pretty universally turned off.

However, AFAIK browsers have not shown, and do not currently show, HTTP
connections as actually insecure (there are plans to do so in Chrome). That
matters because it is much easier to notice a signal than the absence of
one[1].

EV gives a positive security indication, but it is of limited value because
EV certificates are not treated specially except for display purposes (e.g.
there is no HSTS RequireEV directive).
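
To make the RequireEV point concrete, here is a minimal sketch (my own
illustration, plain Python stdlib): HSTS can pin a host to HTTPS, but nothing
in the header vocabulary can pin it to EV validation. Note that browsers only
honor HSTS received over a secure connection; the plain-HTTP server below is
just to show the header syntax.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Real HSTS directives: max-age, includeSubDomains (plus the
        # de facto preload token). Nothing here can require EV, so any
        # domain-validated certificate for the host satisfies the pin.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()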

> b) All vendors should support it. If I decide that my site does not require
> encryption (f.e. it's a read-only website or a website that runs within a
> LAN [like a router page]), then I should not be forced to use it in order to
> run over HTTP/2. I think that Mozilla and Google probably have good
> intentions, but I don't think that they have made the right decision at all.
> We don't want to go back to the stage where every browser was doing its own
> thing, and causing massive headaches for developers and even end-users.
> There are ways (see above) to make the web more secure (by default) without
> forcing anything on anyone. It's kind of like smoking – it's bad for you,
> and we should warn against it, but at the end of the day every person
> reserves the right to do as they please (screw up their lungs, or submit
> their (possibly) private information over an insecure connection).

"Read-only website" may very well require encryption.
- Access to public data may not itself be public.
- Public data may need to be origin-authenticated.

LAN is its own can of worms. The devices are often totally unmanaged (even
if they have the CPU power to run TLS fully[2]), which causes lots of
challenges.

[Fortunately, these devices tend to reside in IP ranges distinct from
anything else, like site-locals, link-locals and ULAs].

> 2. Not being able to safely compress content seems like a big problem. Are
> there any (content) compression algorithms that are not susceptible to these
> vulnerabilities, or has there been any discussion regarding the development
> of a new algorithm to combat these issues? From what I know, compressing
> content can have a significant (positive) effect on performance, so it would
> be really unfortunate if this was no longer possible without exposing your
> website to various security exploits.

HTTP content compression works in HTTP/2. And HTTP/2 does its own header
compression.

Of course, if you have things like anti-CSRF tokens in the payload, those
can't be safely compressed alongside attacker-influenced content. In theory,
it is possible to switch between compressed and uncompressed on the fly. In
practice, the OOB signaling required (between the app and whatever compresses
the data) is unworkable.
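
Sketch of what that signaling would have to look like (my own illustration in
plain Python; the has_secret flag is hypothetical, and it is exactly the
per-response, app-to-compressor signal that nobody plumbs through in
practice):

import gzip

def encode_body(body: bytes, has_secret: bool) -> tuple[bytes, dict]:
    # The application, not the compressor, knows whether a body carries a
    # secret such as an anti-CSRF token; that knowledge is the OOB signal.
    if has_secret:
        # Compressing secrets together with attacker-influenced content
        # leaks the secret through length side channels, so send it raw.
        return body, {}
    return gzip.compress(body), {"Content-Encoding": "gzip"}

# Hypothetical usage: only responses flagged secret-free get compressed.
body, headers = encode_body(b"<input name=csrf value=123456>", has_secret=True)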


[1] That's the basis of the game "Simon says".

[2] Meaning they can do TLS without PSK or creative hacks.


-Ilari

Received on Friday, 10 April 2015 13:59:23 UTC