
Re: Public Key Pinning (was Re: [blink-dev] Re: Proposal: Marking HTTP As Non-Secure)

From: Ryan Sleevi <rsleevi@chromium.org>
Date: Sat, 27 Dec 2014 15:20:29 -0800
Message-ID: <CACvaWvZSzjJFVPUUOiiQNwenH27Q6GjMLf+Tt6XfLvntr8V1=Q@mail.gmail.com>
To: Jeffrey Walton <noloader@gmail.com>
Cc: security-dev <security-dev@chromium.org>, public-webappsec@w3.org, blink-dev <blink-dev@chromium.org>, "dev-security@lists.mozilla.org" <dev-security@lists.mozilla.org>

On Dec 27, 2014 3:12 PM, "Jeffrey Walton" <noloader@gmail.com> wrote:
>
> Hi Ryan,
>
> Sorry about the extra chatter.
>
> >>> The control to stop most of the intercept related attacks - public key
> >>> pinning - was watered down by the committee members to the point that
> >>> the attacker effectively controls the pinset. (Here, I'm making no
> >>> differentiation between the "good" bad guys and the "bad" bad guys
> >>> because it's nearly impossible to differentiate between them).
> >
> > To Jeffrey: can you please stop the ad hominem attacks
>
> The authors should not take it personally. I've taken care not to name
> any names or teams. In the end, it's not the authors but the standards
> bodies like the IETF.
>
> Holding an author responsible is kind of like holding a soldier
> responsible for a war. The buck stops with organizations like the
> IETF. Or in the case of war, with the politicians or leaders. In both
> cases, it's a failure of leadership.
>
> In this thread
> (https://www.ietf.org/mail-archive/web/websec/current/msg02261.html),
> Chris Palmer suggested using shame as a security control. I get what
> he was saying. When the IETF approves an externality to control
> security parameters, like they did in this case, then they should
> expect a little shame. Sunshine is the best disinfectant.
>
> > Especially when the
> > three authors have all worked on Chromium, and two are actively
> > championing this proposal?
>
> Three points here.
>
> First and foremost, YES, the authors have done good work.
>
> Second, there are some gaps and I think things should be improved.
>
> Things should be improved because we have a pretty good idea of how
> bad things can be (are?) because of Snowden. It's not just nosy
> organizations, nosy OEMs and manufacturers, and oppressive regimes -
> it's friendly regimes, too. I could be wrong, but I think that includes
> just about everyone.
>
> We also know how to improve them, so no one is working in a vacuum
> here. There's nothing bleeding edge about this stuff.
>
> Third, as a side note, I *personally* want things improved because I
> want to use and rely on this control. This is *not* me arguing
> theoretically with folks. I often don't have a say in the application
> type (web app, hybrid app, native application), so I'm always
> interested in improving web apps because they are anemic in security
> controls.
>
> > This sort of revisionist history does no one any good. It is
> > a simple threat model: If you give up administrative access to your
> > physical device, it is no longer your device.
>
> Three points here.
>
> First, history is important and this issue is significant. The issue
> is significant because pinning was a big leap forward in security.
> When the issues are raised publicly, they can be addressed. Again,
> sunshine is the best disinfectant.
>
> Second, at the design level, this particular risk can be controlled.
> Pinning the public keys is the control under many scenarios. But a
> modified pinning scheme was offered, which (if I am running through
> the use cases properly) pretty much leaves the original problem
> untouched.
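>
> To make the mechanism concrete, the draft expresses a site's pins in a
> response header carrying base64 SHA-256 hashes of the server's
> SubjectPublicKeyInfo, roughly like this (the pin values and max-age
> below are placeholders, not real data):
>
>     Public-Key-Pins: pin-sha256="base64PrimarySpkiHash=";
>         pin-sha256="base64BackupSpkiHash=";
>         max-age=5184000; includeSubDomains
>
> My complaint is not with that header. It is with the circumvention -
> allowing a locally installed trust anchor to override the pins.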
>
> Third, it's a leap: the site never gave anything away. It was taken
> away from them and given to an externality. I even argue the user did
> not give anything away. Most users don't have a security engineering
> background, and therefore cannot make that decision. In this case, it
> was *surreptitiously* taken away from them and given to an
> externality.
>
> > The thing you lament missing due to
> > some shadowy committee members (hi! No shadows here!)
>
> Guilty. I do have a suspicious mind :)
>
> > ... if your device is physically owned, it is
> > physically owned, and no remote server can express a policy that clients
> > will not be able to override, short of Trusted Computing and Remote
> > Attestation
>
> OK, two points here.
>
> First, pleading one shortcoming (such as "all software has flaws, like
> the firmware, loader and OS") and then claiming it's a reason to accept
> another flaw ("it's OK for my application to be defective because there
> are lower-level defects the attacker can use") is simply bollocks. If
> that's really the argument being made, then do away with HTTPS
> altogether, because there's always going to be a flaw somewhere (even
> in HTTPS/PKI{X} itself).
>
> Second, the attacker is a network attacker, and not a physical
> attacker. So while the device may be owned (i.e., it's got an
> untrustworthy CA; or the firmware, loader, OS and application have
> flaws), the attacker is only effective at the network level. In this
> case, the software can provide effective controls to reduce risk.
>
> I am aware there are residual risks. If the attacker steps up his
> game, then we will look at other controls.
>
> > I've avoided commenting all of the other times you've misrepresented how
> > this came to be, but lest it be seen that our silence is assent, I have
> > to at least call out this dissent.
>
> I think it's good that you raised the counterpoints.
>
> ----------
>
> As a postscript, I have two open questions. Perhaps you can help set
> the record straight for posterity.
>
> First, the open question of: why was circumvention added and why was
> the policy element to stop circumvention taken away? In this thread
> (https://www.ietf.org/mail-archive/web/tls/current/msg14722.html),
> Yoav Nir claimed the policy element was removed because there was no
> support for it. But that's a symptom, and not the reason.
>
> I suspect it is primarily related to advertising, but that's just
> speculation. Under the Advertising Theory, revenue is generated when
> the message gets through, so the message must always get through.
> Stopping the message because the channel is known to be insecure is
> not an option for the business model.
>
> Second, the open question of: why is the application relying on the
> platform to perform pinning in a TOFU scheme? Why is the application
> itself not allowed to perform the pinning at the application level? If
> the application pins, it is no longer a TOFU scheme because the
> application is leveraging its *a priori* knowledge.
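>
> To illustrate what I mean by pinning at the application level, here is
> a rough Python sketch (the pin value is a placeholder, and it assumes
> the pin ships with the application rather than being learned on first
> use):
>
>     import base64
>     import hashlib
>     import socket
>     import ssl
>
>     from cryptography import x509
>     from cryptography.hazmat.primitives import serialization
>
>     # Hypothetical pin, baked into the application at build time.
>     EXPECTED_SPKI_SHA256 = "base64EncodedSpkiHashGoesHere="
>
>     def spki_sha256(der_cert):
>         # Base64 SHA-256 hash of the certificate's SubjectPublicKeyInfo.
>         cert = x509.load_der_x509_certificate(der_cert)
>         spki = cert.public_key().public_bytes(
>             serialization.Encoding.DER,
>             serialization.PublicFormat.SubjectPublicKeyInfo,
>         )
>         digest = hashlib.sha256(spki).digest()
>         return base64.b64encode(digest).decode("ascii")
>
>     def connect_pinned(host, port=443):
>         # Ordinary chain building and validation still happens first;
>         # the application then applies its own a priori pin.
>         context = ssl.create_default_context()
>         raw = socket.create_connection((host, port))
>         sock = context.wrap_socket(raw, server_hostname=host)
>         leaf = sock.getpeercert(binary_form=True)  # DER-encoded leaf
>         if spki_sha256(leaf) != EXPECTED_SPKI_SHA256:
>             sock.close()
>             raise ssl.SSLError("public key pin mismatch for " + host)
>         return sock
>
> Nothing exotic, but it does require the platform to expose the peer's
> certificate (or at least its SPKI hash) to the application.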
>
> For example, WebSockets does not provide methods to query connection
> security parameters. With things like trusted distribution channels,
> application stores, and side-loaded trusted code, I don't have to
> worry too much about tampering in transit. That means an application
> like the Self Serve, Single Password Sign-On Change application can be
> assured with a high degree of certainty that it's passing its
> high-value data to the right server, and not some middleware box or an
> imposter.
>
> Jeff

You seem to be operating from quite a bit of confusion about pinning and how
it works, and from that have drawn a number of inaccurate (and, in some
cases, inflammatory) conclusions.

Rather than cross-posting to a variety of lists on an unrelated thread,
perhaps it is best to continue the discussion in the IETF.

There is no shadowy committee politicking for compromise waiting for you
there - it was simply a bad and inherently inconsistent idea, removed after
being recognized as such by technically savvy people during the process of
standardizing.
Received on Saturday, 27 December 2014 23:20:58 UTC
