Re: 2 questions

Yes, that's why I said "this alone", i.e. this could be in addition to 
the proposed changes to the address bar.

Regarding the Firefox mockup, if unidentified means that you could be 
sending data to a server that is not owned by Gmail/Google, then I think 
that "Secure connection" is misleading. I think that it should show 
something like:

Encrypted connection        YES (in green)
Identified/authenticated     NO (in red)


On 2015/04/11 22:45, Jim Manico wrote:
> Whoa.
> Lack of well configured HTTPS is way more than just a breach of confidentiality. Integrity and Authenticity risks are at play here as well.
> Not only will I see your data over HTTP, but I can easily modify your content, inject JS and now I have a new attack vector beyond just looking at your data.
> But Glen, your idea is still awesome. I think any form post over HTTP should provide the user with a pretty dramatic warning to not hit submit, or at least explain the risk, similar to Chrome's current pinning warning.
> Cheers,
> --
> Jim Manico
> @Manicode
> (808) 652-3805
>> On Apr 11, 2015, at 10:08 AM, Glen <> wrote:
>>> On 2015/04/10 15:58, Ilari Liusvaara wrote:
>>>> On Fri, Apr 10, 2015 at 11:53:32AM +0200, Glen wrote:
>>>> (sending again as a subscriber, as I think this message went unnoticed)
>>>> Thanks for the replies.
>>>> 1. As far as I understand it (which is not very far), opportunistic
>>>> encryption is neither "by default" (since it requires extra server-side
>>>> configuration) nor secure (no MITM protection, etc.)
>>> Well, security is relative.
>>>> I'm okay with HTTP/2 without TLS, however (my opinion):
>>>> a) User agents MUST show a security warning before you submit data over HTTP
>>>> (you could have a "remember this choice" option per-user and per-domain). As
>>>> far as I know, this is not currently implemented in any browsers (I think if
>>>> you submit to an HTTP domain from an HTTPS one, you may receive a warning).
>>>> The main point is, it's more important that users know that they're on an
>>>> INSECURE domain, than it is that they are on a SECURE one (by then it's too
>>>> late).
>>> In the distant past, web browsers did have submit over HTTP warnings. Those
>>> were pretty universally turned off.
>>> However, AFAIK browsers have not and are not showing HTTP connections as
>>> actually insecure (there are plans to do so for Chrome). That is significant
>>> due to it being much easier to notice signal than absence of it[1].
>> I assume that you're referring to this ( ). I think that Firefox is considering the same thing ( ).
>> In my opinion though, this alone (and at this time) is not a good idea. Firstly, many users won't even notice it – I'm pretty sure that many (less technical) users are more focused on the content area of a page than the address bar. Secondly, for those that *do* notice it, it may be confusing, as it would be displayed on the vast majority of websites (at least in the short to medium term), which makes it seem like there's something wrong with their system, as the sites that they usually visit are marked as insecure.
>> The better option, IMHO, would be to alert the user "in their face" (modal window) only when attempting to *submit data* over HTTP. Explaining that the data that they are about to submit may be viewed or intercepted by a third party is far more explicit, and forces them to make a more informed decision.
>>> EV has positive security indications, but it is of limited value because
>>> EV is not treated specially except for display purposes (e.g. there is no
>>> HSTS RequireEV directive).
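[For reference: HSTS, as specified in RFC 6797, is a single response header with two optional directives, so an EV requirement would indeed need a new (currently nonexistent) token:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```
]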
>>>> b) All vendors should support it. If I decide that my site does not require
>>>> encryption (e.g. it's a read-only website or a website that runs within a
>>>> LAN [like a router page]), then I should not be forced to use it in order to
>>>> run over HTTP/2. I think that Mozilla and Google probably have good
>>>> intentions, but I don't think that they have made the right decision at all.
>>>> We don't want to go back to the stage where every browser was doing its own
>>>> thing, and causing massive headaches for developers and even end-users.
>>>> There are ways (see above) to make the web more secure (by default) without
>>>> forcing anything on anyone. It's kind of like smoking – it's bad for you,
>>>> and we should warn against it, but at the end of the day every person
>>>> reserves the right to do as they please (screw up their lungs, or submit
>>>> their (possibly) private information over an insecure connection).
>>> "Read-only website" may very well require encryption.
>>> - Access to public data may not itself be public.
>>> - Public data may need to be origin-authenticated.
>>> LAN is its own can of worms. The devices are often totally unmanaged (even
>>> if having CPU power to run TLS fully[2]), which causes lots of challenges.
>>> [Fortunately, these devices tend to reside in IP ranges distinct from
>>> anything else, like site locals, link locals and ULAs].
>> Yes, but if the content is unimportant/not likely to be targeted ("programming tutorials") or has little traffic ("my personal CV/résumé"), then using TLS should be the *developer's* choice, and browsing the ("insecure") site should be the *user's* choice.
>>>> 2. Not being able to safely compress content seems like a big problem. Are
>>>> there any (content) compression algorithms that are not susceptible to these
>>>> vulnerabilities, or has there been any discussion regarding the development
>>>> of a new algorithm to combat these issues? From what I know, compressing
>>>> content can have a significant (positive) effect on performance, so it would
>>>> be really unfortunate if this was no longer possible without exposing your
>>>> website to various security exploits.
>>> HTTP content compression works in HTTP/2. And HTTP/2 does its own header
>>> compression.
>>> Of course, if you have things like anti-CSRF tokens in the payload, those
>>> can't be safely compressed. In theory, it is possible to switch between
>>> compressed and uncompressed on the fly. In practice, the OOB signaling
>>> (between app and whatever compresses the data) required is unworkable.
>> From what I understand, this would only give an attacker the ability to guess something within the page. For example, if you display the user's username on the page, they could guess that, but something like a session ID would be transmitted in a Cookie header, which would not be compressed, therefore it could not be guessed. Is that correct? If so, then it's not as big an issue as I initially thought (unless I'm forgetting something obvious).
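[The length side channel being discussed here (the mechanism behind the CRIME/BREACH attacks) can be sketched in a few lines. The secret and the guesses below are made up for illustration; the point is only that attacker-reflected input and a secret sharing one DEFLATE stream leak prefix matches via the compressed size:

```python
import zlib

SECRET = "csrf_token=4f1a2b3c9d8e7f60"  # hypothetical secret embedded in the page

def response_length(guess: str) -> int:
    # Attacker-controlled input is reflected into the same
    # compressed body that carries the secret.
    body = f"<p>You searched for: {guess}</p><form>{SECRET}</form>"
    return len(zlib.compress(body.encode()))

# A guess sharing a long prefix with the secret gets back-referenced
# by the compressor, so the output is smaller than for an equally
# long non-matching guess; the attacker extends the prefix byte by byte.
matching = response_length("csrf_token=4f1a2b3c")
wrong = response_length("csrf_token=qz8wy7xv")
assert matching < wrong
```
]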
>> What about designing a compression algorithm that allows you to specify a set of substrings that should not be compressed? For example, all user input (query string, POST data, etc.) found in the response should remain uncompressed. That way, you could compress the majority of the page without the potential information disclosure. The same could be used for HPACK, perhaps on a per-header basis (don't ever compress the Cookie header, for example).
>> I don't know a lot about these things, so don't laugh if this is a really stupid idea! :-P
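[For what it's worth, the proposal above can be sketched: keep marked-sensitive substrings out of the compressor entirely, so reflected input can never back-reference them. This is only an illustration of the framing, not a real algorithm (zlib is used for convenience, and a real scheme would need wire framing for the segment boundaries):

```python
import zlib

def compress_excluding(parts):
    """Compress a response given as (text, is_secret) pairs.
    Secret parts are emitted verbatim, so they never share a
    compression window with the rest of the page."""
    return [(t.encode() if s else zlib.compress(t.encode()), s)
            for t, s in parts]

def reassemble(segments):
    # Inverse: decompress the non-secret segments, pass secrets through.
    return b"".join(d if s else zlib.decompress(d) for d, s in segments)

page = [("<p>You searched for: csrf_token=4f1a</p><form>", False),
        ("csrf_token=4f1a2b3c9d8e7f60", True),   # made-up secret
        ("</form>", False)]

segments = compress_excluding(page)
# The reflected guess in the first part can no longer shrink the output
# by matching the secret, because the secret sits outside every stream.
assert reassemble(segments).decode() == "".join(t for t, _ in page)
```
]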
>>> [1] That's the basis of the game named "Simon says".
>>> [2] Meaning can do TLS without PSK or creative hacks.
>>> -Ilari

Received on Sunday, 12 April 2015 09:00:53 UTC