- From: Fozi <notifications@github.com>
- Date: Tue, 11 Aug 2015 13:28:41 -0700
- To: whatwg/fetch <fetch@noreply.github.com>
- Message-ID: <whatwg/fetch/issues/98/130056735@github.com>
@sleevi Thanks for taking the time; you have made a lot of good points. The one I think is a non-issue is the connection pooling part. Connections in a pool are connected to a specific host and have already completed the TLS handshake, so a certificate is attached to each of them that either passes or fails the check. You are also correct that after the request has been sent it is too late for the check, as the connection would already have leaked information. (You actually pointed out a bug in my implementation.) So when a request takes a connection out of the pool, it would check its own (origin-based) list against the certificate fingerprint. If the fingerprint fails to match, the request should be failed; establishing a new connection is not an option. (The connection could go back to the pool, although at this point I would say that, at least from one origin's point of view, we are connected to an evil MITM.) So the source origin and the security policy don't affect connection pooling.

Also, the fact that the certificate fingerprint has to be checked early (before the request is sent) means there is little hope of monkey-patching this in later; it has to be done by the implementer or it cannot be done securely. Note that I believe this is an opt-in feature that will not be used by many applications.

I have to agree with the points you made about the origin forcing a certificate on the destination. This is certainly not something a site operator should apply to all third-party services, otherwise links will break. Again, I don't think this is a feature that should be applied to all web apps, and I don't really have a good answer; however, let me present two scenarios that would mitigate this problem:

1) The third party is not really a third party, but simply another domain of the first party, or there is a close relationship (such as a contractual agreement, think paid service) with the third party. In this case one can assume that the fingerprints (as recommended by HPKP: a set of two, the current one and a backup) will be kept up to date.

2) The third party publishes (e.g. through HPKP) two sets of keys, and the first party updates them regularly and automatically, according to the caching rules. While this might fail if the MITM completely cuts the third party off from the internet, it still works when the MITM can only affect a section of the internet (e.g. a country) and both the first party and the third party are outside that section while the client is inside it. Services that could make use of this include authentication services or public communication services.

As for your points on the counter-proposal, I have to agree with you: by the time the request has gone through it is too late, and the certificate could mean information leakage. I agree that, considering this, it's not an option. I also agree that the user should be in control of their machine; unfortunately, more often than not it is not only the user who controls the root certificates. This is part of the problem we're trying to solve: how to detect and prevent usage of a fraudulent certificate that looks completely valid from a chain-of-trust point of view. While I want the user to be in control of the data they send out, I would also argue that, for example, an OpenID provider should be allowed to deny the user a successful authentication if the user is unable or unwilling to connect securely to the authentication endpoint.

--- Reply to this email directly or view it on GitHub: https://github.com/whatwg/fetch/issues/98#issuecomment-130056735
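To make the checkout-time check concrete, here is a rough sketch of the idea in Python. This is a hypothetical illustration, not anything from the Fetch spec or a real implementation: the names `PinnedPool`, `Connection`, and `cert_fingerprint` are invented, and the fingerprint here is simply a SHA-256 over the certificate bytes.

```python
import hashlib

class Connection:
    """Stand-in for a pooled TLS connection; fingerprint is the hex SHA-256 of the cert."""
    def __init__(self, origin, cert_der):
        self.origin = origin
        self.cert_fingerprint = hashlib.sha256(cert_der).hexdigest()

class PinnedPool:
    """Connection pool that re-checks an origin's pin list at checkout time."""
    def __init__(self, pins_by_origin):
        self.pins_by_origin = pins_by_origin  # origin -> set of allowed fingerprints
        self.idle = {}                        # origin -> list of idle connections

    def put(self, conn):
        self.idle.setdefault(conn.origin, []).append(conn)

    def checkout(self, origin):
        conn = self.idle[origin].pop()
        pins = self.pins_by_origin.get(origin)
        if pins is not None and conn.cert_fingerprint not in pins:
            # The check happens before any request bytes are sent, so nothing
            # has leaked yet; the request is failed rather than retried on a
            # fresh connection, which would face the same certificate.
            self.put(conn)
            raise ConnectionError(f"certificate pin mismatch for {origin}")
        return conn
```

The point of the sketch is that the pool itself is unchanged by the policy; only the checkout path consults the per-origin pin list, and a mismatch fails the request without sending anything.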
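For reference, the two-pin scheme mentioned above follows HPKP (RFC 7469), where a pin is the base64 of a SHA-256 over the DER-encoded SubjectPublicKeyInfo, and a site serves at least two pins: the live key and a backup. A minimal sketch of computing and matching such a pin set (the function names are mine, not from any spec):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def pin_matches(presented_spki_der: bytes, pin_set: set) -> bool:
    """True if the presented key matches any pin in the set (current or backup)."""
    return spki_pin(presented_spki_der) in pin_set
```

Because the backup pin is published alongside the current one, the first party can keep its cached copy of the pin set current through normal, automatic refreshes, which is what scenario 2 relies on.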
Received on Tuesday, 11 August 2015 20:29:09 UTC