Re: [fetch] Request for support for certificate pinning (#98)

> This means you can't use a cert that would not be the one presented to you if you would have connected to the host in the URI in the first place.

This is simply an incorrect reading of the section. It doesn't mean the certificates need to be the same. It means exactly what was written - the certificate presented must satisfy the checks. This is not confusion on my part - I'm quite familiar with both the Firefox and Chromium implementations, having helped maintain the Firefox behaviour (which has had further pooling _independent of HTTP/2 for years_) and having reviewed and implemented the Chromium behaviour. As I indicated before, that's merely one example; if you want another, http://mxr.mozilla.org/nss/source/lib/ssl/sslnonce.c#276 reflects over 15 years of NSS behaviour: reusing TLS session caches for independent hosts, provided the IP matches and the certificate _satisfies any checks the client would perform_.
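To illustrate that reuse rule, here's a minimal sketch in TypeScript - all names are hypothetical, and this is only the shape of the check, not the actual NSS API (the real logic is in the sslnonce.c link above):

```typescript
// Hypothetical types for illustration; not the NSS API.
interface Certificate {
  subjectAltNames: string[];
}

interface CachedSession {
  peerIp: string;
  certificate: Certificate;
}

// A cached TLS session may be reused for a *different* host, provided the
// IP matches and the cached certificate would pass every check the client
// would perform for the new host (name match, trust, validity, etc.).
function canReuseSession(
  session: CachedSession,
  targetHost: string,
  targetIp: string,
  satisfiesChecks: (cert: Certificate, host: string) => boolean
): boolean {
  return session.peerIp === targetIp &&
         satisfiesChecks(session.certificate, targetHost);
}
```

The point being: the cached certificate need not be identical to whatever a fresh connection to the host would present - it merely has to satisfy the client's checks for that host.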

> But even if this case was allowed, it would be irrelevant to this discussion because as I wrote in the first post the user must be able to pass not only one fingerprint but an array of acceptable fingerprints, 

No, you entirely missed my point. This is about the practical reality that an arbitrary third party (i.e. NOT the server operator) is *NOT* in a position to determine what the appropriate pin set for a properly configured server is, because it does *NOT* have perfect knowledge of the set of certificates issued for a domain, nor of what is authorized. Certainly, this arbitrary third party could make a best-effort stab at it, but all that does is arbitrarily and unnecessarily limit the server operator's ability to configure their server.
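For concreteness, the shape of API being asked for looks roughly like the following - entirely hypothetical, as no `pins` member exists in any spec, and the fingerprint values are placeholders:

```typescript
// Hypothetical extension of RequestInit; no such `pins` member exists.
type PinnedRequestInit = RequestInit & { pins?: string[] };

const init: PinnedRequestInit = {
  // Fingerprints the third party *guesses* are valid for the target domain.
  // Placeholder values, illustrative only.
  pins: ["sha256/AAAA...", "sha256/BBBB..."],
};

// Any certificate the operator later deploys that doesn't match this list
// breaks the fetch, through no fault of the operator.
const response = await fetch("https://example.com/service", init);
```

The difficulty isn't expressing such a list; it's that no third party can keep it correct as the operator's certificates change.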

We're discussing the real and practical flaws of such a proposal, and why it's been repeatedly and rightfully rejected (again, I encourage you to review past discussions of S-Links or its several predecessors).

> the discovery server points the client to a server that provides the service it is looking for

Here's the fundamental problem, again restated. This discovery server is an arbitrary third-party entity, with no authorization to speak for the domain in question, nor necessarily any relationship to it, on either a business or a technical level. The failure of the discovery server to properly serve the keys either interferes with connections other than the one being made (this, again, is why we're even discussing socket pools), or it requires additional pools (such that the only connections it affects are those it directs the user to).
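To make the pooling cost concrete, here's a minimal sketch of what "additional pools" implies - a hypothetical keying scheme, not any browser's actual implementation:

```typescript
// Today a socket-pool key is roughly (scheme, host, port). If individual
// fetches can carry their own pin sets, the key must also incorporate the
// pins - otherwise one fetch's third-party policy bleeds into connections
// made on behalf of other requests.
type PinSet = ReadonlySet<string>;

function poolKey(origin: string, pins?: PinSet): string {
  // Sort for a canonical key, so equal pin sets share a pool.
  const pinPart = pins ? [...pins].sort().join(",") : "";
  return `${origin}|${pinPart}`;
}
```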

Both of these options are unnecessary complexity, but let's presume we introduced more pools and continue the thought experiment. This arbitrary discovery domain, having configured things such that it only affects the Fetch in question, with no global side effects, has now limited the fetch() to either fail or succeed based on third-party-dictated security policy. While you may see this as a win, I see it as a loss. Imagine that 30% of a site's customers decided to employ this mechanism, because they found a bad example on StackOverflow that suggested they do it, and the example pinned the CA to "Foo CA". Now, "Foo CA" charges 10x what "Bar CA" costs, and our hapless site only went with "Foo CA" because the old admin was naive and didn't realize there were cheaper options. The site goes and gets a cert from "Bar CA" - and suddenly sees its traffic - and revenue - drop 30%, because all those sites now have the wrong pins.

Now, the optimist in us might say "Gee, everything is working as intended; the site just needs to tell everyone to update their pins" - except the site has zero relationship with those arbitrary third-party sites (including the discovery server), and a benign change on the operator's part has now broken all of those links (thus defeating the _primary goal_ of URLs to begin with). And recall, we only got to this point in our narrative because we handwaved away a number of problems that can't be handwaved away.

These concerns have been discussed - repeatedly - every time someone floats a proposal like this; I can think of at least three past proposals. It's quite similar to the problems of subresource integrity (which, to be clear, I also have issues with), except on a whole new level, because the third party *does not* have the same knowledge as the server operator and cannot simply observe the content, as it can with hashes.

If that realistically is your threat model, then it should be addressed on origins you control and operate - that's how you assure there is no MITM. This introduction problem is not merely one of technical ability, but of respecting the decentralized and disjoint nature of the web, and of treating every party as potentially hostile. In such a model, features like this are far more dangerous than good, and "Just give me a footgun, I promise not to shoot myself" is an argument that has repeatedly failed to play out as intended or promised.

---
Reply to this email directly or view it on GitHub:
https://github.com/whatwg/fetch/issues/98#issuecomment-130735958

Received on Thursday, 13 August 2015 15:53:45 UTC