Re: [blink-dev] Re: Proposal: Marking HTTP As Non-Secure

On Dec 26, 2014 10:01 PM, "Austin William Wright" <aaa@bzfx.net> wrote:
>
>
>
>> On Fri, Dec 26, 2014 at 7:43 PM, Jeffrey Walton <noloader@gmail.com> wrote:
>>
>> > If we want to talk about the perspective of servers, the server has
>> > options to demand a secure connection too; it simply has to deny
>> > plaintext requests ...
>>
>> Actually, no. Middleware boxes and a class of active attackers can do
>> whatever they want.
>
>
> Ah yes, I wasn't even considering the class of active attacks, where
> *any* attempt to go over cleartext would be vulnerable. Thanks for
> pointing this out!
>
> (I was careful to specify that accepting the TCP connection, for the
> purposes of redirecting, could leak information to eavesdroppers.)
>
>>
>>
>> The control to stop most of the intercept-related attacks - public key
>> pinning - was watered down by the committee members to the point that
>> the attacker effectively controls the pinset. (Here, I'm making no
>> differentiation between the "good" bad guys and the "bad" bad guys,
>> because it's nearly impossible to differentiate between them.)

To Jeffrey: can you please stop the ad hominem attacks? Especially when the
three authors have all worked on Chromium, and two are actively championing
this proposal? This sort of revisionist history does no one any good. It is
a simple threat model: If you give up administrative access to your
physical device, it is no longer your device. The thing you lament missing
due to some shadowy committee members (hi! No shadows here!) is a simple
recognition of two things: if your device is physically owned, it is
physically owned, and no remote server can express a policy that clients
will not be able to override, short of Trusted Computing and Remote
Attestation (which is the tech term for saying once unicorns and fairies
cease their blood feud and restore Santa Claus to his rightful throne in
the North Pole... because it's all fiction).

I've avoided commenting on all of the other times you've misrepresented how
this came to be, but lest it be seen that our silence is assent, I have to
at least voice this dissent.
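
(For readers following along: the pinning mechanism under discussion is the
Public-Key-Pins response header of RFC 7469. A sketch, with placeholder
hashes - the spec requires at least one backup pin, and note that nothing
in the header can constrain a client whose local trust store has been
reconfigured:

  Public-Key-Pins:
      pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
      pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
      max-age=5184000; includeSubDomains

Chains that terminate in a locally installed trust anchor may be exempted
from pin validation per local policy, as the RFC allows; that exemption is
the "override" being debated here.)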

>>
>> That is, the standard could have provided policy and the site could
>> have sent a policy that governed the pinset. The site could have
>> allowed an override or denied an override. But it was decided all
>> users must be subjected to the interception, so the policy elements
>> were not provided.
>>
>> So how's that for strategy: the user does not get a choice in their
>> secure connection and the site does not get a choice in its secure
>> connection. Rather, some unrelated externality makes the choice.
>
>
> I share your concerns, and more: Public Key Pinning (and Strict Transport
> Security, for that matter) is awfully specific to HTTP, even though it has
> nothing to do with HTTP at all. The proper place to put this information
> is in DNS records. This has worked reasonably well for email; or at least
> I don't see anyone proposing we query SMTP servers for relevant security
> metadata.

I assume that you aren't familiar with STARTTLS? The encryption and
security story for email is disastrously worse than anything for HTTP.
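
(To illustrate why: STARTTLS, per RFC 3207, is negotiated entirely in
cleartext. A sketch of the exchange, with placeholder hostnames:

  S: 220 mail.example.com ESMTP
  C: EHLO client.example.net
  S: 250-mail.example.com
  S: 250 STARTTLS
  C: STARTTLS
  S: 220 Ready to start TLS
  ... TLS handshake; the SMTP dialogue then restarts encrypted ...

Because the "250 STARTTLS" advertisement itself travels in the clear, an
active attacker who deletes that one line leaves the client believing the
server has no TLS support at all, and the mail flows in plaintext with no
error shown to anyone.)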

> So why do this for HTTP? (If one wants to answer that properly stopping
> forged headers is less important than stopping plaintext cat pictures,
> color me surprised; if it's because it's a better alternative than
> unencrypted DNS records for those lacking DNSSEC, that's a better answer,
> but it still relegates HPKP and HSTS to the realm of "stopgap measure".)
>
> Austin.

Indeed. From the point of view of client applications, DNSSEC is a complete
and utter failure, and will remain so for the next decade, given hardware
cycles. If a decade sounds like a stopgap measure, even though a decade is
longer than the relevant lifetime of sites like Myspace, so be it; that
seems like a pretty good run to me.
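
(For concreteness, the DNS home Austin is gesturing at already exists:
DANE, RFC 6698, which pins keys via TLSA records. A sketch, with a
placeholder hostname and digest:

  ; usage 3 = DANE-EE, selector 1 = SubjectPublicKeyInfo,
  ; matching type 1 = SHA-256 digest of the key
  _443._tcp.www.example.com. IN TLSA 3 1 1 (
      2abdc9a99a3d06ca5bcf1d3f42a1c47a9a5e9b8c6f0e1d2c3b4a5968778695a4 )

Without a signed chain from the root down, though, a TLSA record is just
another forgeable plaintext answer - which is exactly the DNSSEC failure
above.)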

>
>>
>>
>> Jeff
>>
>> On Fri, Dec 26, 2014 at 8:33 PM, Austin William Wright <aaa@bzfx.net> wrote:
>> >
>> > On Wed, Dec 24, 2014 at 3:43 PM, Alex Russell <slightlyoff@google.com>
>> > wrote:
>> >> ...
>> >>
>> >> No, the request for a resource means something. Security is about the
>> >> quality of service in delivering said thing.
>> >
>> > Specifically, I refer to privacy and authentication (which seems to be
>> > what "security" means here). There are many components to security, and
>> > they're not all going to be desired at the same level for every
>> > request; sometimes they're even at odds with each other. Deniability is
>> > a particularly expensive one to implement, often at significant
>> > monetary and performance cost to the user. Deniability is simply not
>> > implemented by any website (or more accurately, any authority) indexed
>> > by the major search engines.
>> >
>> > It's difficult to claim that deniability is about "quality of
>> > service", though; it is nonetheless considered something _very_
>> > important by Tor, and by users of it.
>> >
>> >>
>> >> > HTTP could mean, simply, "I don't care about security".
>> >>
>> >> Then user agents should be free to act on behalf of users to write
>> >> off correspondents who do not value the user enough to protect the
>> >> message in transit.
>> >>
>> >> You're making claims about what UA's must do from the perspective of
>> >> servers; this is a clear agency problem misunderstanding.
>> >>
>> >> We're the user's agent, not the publisher's.
>> >
>> > When a user navigates to an https:// resource, or hits the checkbox
>> > for STARTTLS, or selects "Require encryption" from a drop-down, they,
>> > the user, are *demanding* a secure connection (for someone's definition
>> > of secure; traditionally this means privacy and authentication but need
>> > not include deniability).
>> >
>> > If we want to talk about the perspective of servers, the server has
>> > options to demand a secure connection too; it simply has to deny
>> > plaintext requests (it could redirect, but this risks the user-agent
>> > sending sensitive information, including the request-uri, that'll just
>> > be ignored), and TLS can specifically ask for or require a client TLS
>> > certificate for authentication.
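
(Concretely, both server-side postures fit in a few lines of configuration
- a minimal nginx sketch, with placeholder hostname and paths; the cost was
never the config, it's the certificate and its upkeep:

  # Refuse plaintext outright; nothing sensitive is echoed or redirected.
  server {
      listen 80;
      server_name example.com;
      return 403;
  }

  # Serve only over TLS, and demand a client certificate to authenticate.
  server {
      listen 443 ssl;
      server_name example.com;
      ssl_certificate        /etc/nginx/example.com.crt;
      ssl_certificate_key    /etc/nginx/example.com.key;
      ssl_client_certificate /etc/nginx/client-ca.crt;
      ssl_verify_client on;
  }

A 301 redirect on port 80 is friendlier but, as noted above, the request
that triggers it has already crossed the wire in the clear.)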
>> >
>> > The user-agent works for the user, not the other way around. Letting a
>> > user-agent intervene in the case of a plaintext request is imposing a
value
>> > on the user: It prioritizes privacy over "DO NOT BREAK USERSPACE!"
which is
>> > not *necessarily* true.
>> >
>> > If TLS were free or very cheap to implement, however, this would be a
>> > non-issue. Hence my treatment of ways to do this.
>> >
>> > Even if the http-as-non-secure proposal fully worked, it would still
>> > increase the cost of publishing content and the barriers to entry. I
>> > recall Tim Berners-Lee recollecting that requiring users to mint a
>> > long-lived URI was a concerning cost, but there was no other way to
>> > design a decentralized Web. (In the end, the biggest barrier to entry
>> > has been acquisition of a domain name and hosting.)
>> >
>> > Of course, if efficiency and security are at odds, then we prefer
>> > security. However, I see making TLS implementation cheaper as a viable
>> > alternative to this proposal, and one that is far more appealing.
>> >>
>> >> > A TLS failure means "Security is _required_ by the other party, but
>> >> > couldn't be set up and verified. Abort! Abort!"
>> >> >
>> >> > Purporting that plaintext and HTTPS failure are the same would be
>> >> > conditioning users to be less worried about HTTPS failures, where
>> >> > there's a known _requirement_ for confidentiality.
>> >>
>> >> The UA could develop different (but also scary) UI for these
>> >> conditions.
>> >
>> > I am specifically referring to the part of the proposal that, in the
>> > long term, specifies no UI difference between plaintext and failed-TLS.
>> >
>> > Would you care to make an alternate proposal with this change?
>> >>
>> >> Regardless, we have done a poor job communicating the effective system
>> >> model (that content is not tamper-evident when served over HTTP).
>> >> Steps to make this clearer aren't the same thing as throwing all
>> >> distinction overboard.
>> >>
>> >> > There may be some value in saying "as with many network requests,
>> >> > request/submission is being sent in the clear. Anyone on your
>> >> > network will be able to read this!". But big scary warnings are most
>> >> > certainly bad. No matter what, before I claim anything for sure, I
>> >> > would like to see a double-blind study, and not change for the sake
>> >> > of change.
>> >>
>> >> This sort of argument might have worked in '12. Apologies, but you're
>> >> dreadfully late.
>> >
>> > I'm afraid snark wasn't my first language, so you'll have to refresh my
>> > memory: what are you referring to?
>> >
>> > Earlier in my message, I touched on the danger of users dismissing
>> > warnings for, e.g., their bank, because users become dismissive of
>> > "insecure connection" warnings found elsewhere.
>> >
>> > Now, we might have HSTS (for my bank example), but at that point we're
>> > no longer talking about two tiers of security, but three
>> > (insecure-bypassable, insecure-nonbypassable, secure), which is
>> > precisely what the proposal is advocating moving _away_ from (or, the
>> > proposal is not very clear on this point, and a clarification in its
>> > text would be necessary).
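
(For reference, the HSTS mechanism behind the "insecure-nonbypassable"
tier is a single response header, per RFC 6797 - a sketch with a
placeholder max-age:

  Strict-Transport-Security: max-age=31536000; includeSubDomains

Once a UA has seen this over a valid HTTPS connection, it rewrites future
http:// navigations to that host to https:// and hard-fails certificate
errors with no click-through, which is the non-bypassable behavior being
described.)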
>> >> ...
>
>
