Re: [fetch] Request for support for certificate pinning (#98)

> The one I think is a non-issue is the connection pooling part. Connections in a pool are connected to a specific host and have the TLS handshake done. That means there is a cert attached to it that either passes or fails.

No, this isn't correct. Connections may be pooled across multiple hosts, e.g. due to HTTP/2 connection coalescing, HTTP/2 Alt-Svc, session cache pools, etc.

For example, imagine ssl.example.com serves Cert A (with Hash A) via SNI, and foo.example.com serves Cert B (with Hash B), also via SNI. Cert B has a SAN that covers ssl.example.com, so the existing connection to foo.example.com is viable for coalescing: when a new request for ssl.example.com comes in, we can reuse that connection. If we then reject it (e.g. because the app inappropriately pinned only Hash A), reconnecting could get us Cert A, because we wouldn't be reusing the pooled connection. This is but one example of many, and it's why I said that it would require the pools to be sharded.
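To make that conflict concrete, here's a minimal sketch of coalescing meeting per-origin pins. This is not any real browser API; every name (PooledConnection, pickConnection, and so on) is invented for illustration:

```typescript
// Hypothetical model of HTTP/2 connection coalescing meeting per-origin
// pins. None of these names correspond to a real API.

interface PooledConnection {
  connectedHost: string;                 // host the TLS handshake was done with
  certSpkiHash: string;                  // hash of the cert that host actually served
  certCoversHost(host: string): boolean; // SAN check on the served cert
}

// Coalescing: a request for `host` may reuse a connection that was made to
// a *different* host, as long as the served cert's SANs cover `host`.
function pickConnection(
  pool: PooledConnection[],
  host: string,
): PooledConnection | undefined {
  return pool.find((c) => c.certCoversHost(host));
}

function request(pool: PooledConnection[], host: string, pins: Set<string>): void {
  const conn = pickConnection(pool, host);
  if (conn && !pins.has(conn.certSpkiHash)) {
    // Pin failure on a legitimately coalesced connection: a fresh
    // connection to `host` (with SNI) might have served the pinned cert.
    // Failing here punishes a valid coalescing decision, which is why
    // honouring pins would require sharding pools by pin-set.
    throw new Error(`pin mismatch for ${host} on coalesced connection`);
  }
  // ...issue the request on `conn`, or dial a new connection...
}
```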

> So if a request takes a connection out of the pool it would check its own (origin based) list against the cert fingerprint. If it fails to match, the request should be failed; establishing a new one is not an option (and the connection could go back to the pool, although at this point I would say that at least from one origin's point of view we are connected to an evil MITM).

And I think this is explicitly bad behaviour, which is why I have no desire to support it. There are a lot more edge cases here that fall well into the realm of "unspecified but common" behaviour, but I think the fundamental idea is unfortunately impractical.
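For clarity, here's the checkout-time check I understand the proposal to be describing, as a rough sketch. All names are hypothetical:

```typescript
// Rough sketch of the quoted proposal: check the requesting origin's pins
// when a connection is taken out of the pool, and hard-fail the request on
// a mismatch. All names are hypothetical.

interface Connection {
  certSpkiHash: string; // hash of the cert served on this connection
}

function checkoutForOrigin(
  pool: Connection[],
  pinsByOrigin: Map<string, Set<string>>,
  origin: string,
): Connection {
  const conn = pool.shift();
  if (conn === undefined) {
    throw new Error("pool empty; a new connection would be dialled");
  }
  const pins = pinsByOrigin.get(origin);
  if (pins !== undefined && !pins.has(conn.certSpkiHash)) {
    pool.unshift(conn); // per the proposal, the connection may go back to the pool
    // Per the proposal, reconnecting is "not an option", so the request
    // fails outright. But as the coalescing example above shows, the
    // mismatch may simply mean the pool handed back a sibling host's
    // connection, not that a MITM is present.
    throw new Error(`pin mismatch for ${origin}; request failed`);
  }
  return conn;
}
```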

> Note that I believe that this is an opt-in feature that will not be used by many applications.

I think that alone is reason to be somewhat suspicious. It's opt-in, limited in use, a serious footgun; it violates layering and adds complexity, and for what value? What alternatives exist? HPKP, certainly :) CT, arguably.
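For reference, HPKP already expresses pins as an ordinary response header, keeping the mechanism out of the fetch layer entirely. The hash values below are placeholders, not real pins:

```http
Public-Key-Pins: pin-sha256="<primary SPKI hash, base64>"; pin-sha256="<backup SPKI hash, base64>"; max-age=5184000; includeSubDomains
```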

> unfortunately more often than not it is not only the user who is in control of the root certificates. This is part of the problem we're trying to solve: 

Right, here's where our fundamental disagreement will be readily apparent. I disagree with the philosophy that the browser can prevent code running on the same native OS, with the same or greater capabilities, from interfering. To that end, we even wrote up the Chromium policies we use to evaluate such requests, to give greater clarity to the general opposition: https://www.chromium.org/Home/chromium-security/security-faq#TOC-Why-aren-t-physically-local-attacks-in-Chrome-s-threat-model- and https://www.chromium.org/Home/chromium-security/security-faq#TOC-How-does-key-pinning-interact-with-local-proxies-and-filters- . The OpenID argument is very similar to the one made re: autocomplete="off" and letting sites subvert the user's wishes; that's covered in https://www.chromium.org/Home/chromium-security/security-faq#TOC-Why-does-the-password-manager-ignore-autocomplete-off-for-password-fields- (and, in general, it is a violation of the priority of constituencies).

I totally understand where this feature is coming from; as I said, this is something that has been explored before (S-Links is one example, but you will find the argument goes back nearly a decade). I think the idea represents a serious footgun and violates the separation of principals: there's no way the user agent can know whether the site meets your criteria or not (even DNS is not sufficient), and the effects on connection management alone should be argument enough against it.


---
Reply to this email directly or view it on GitHub:
https://github.com/whatwg/fetch/issues/98#issuecomment-130090566

Received on Tuesday, 11 August 2015 21:59:40 UTC