Re: [whatwg/fetch] Impact of OCSP on SOP (#530)

I'm not sure "or the system it's built on" is in scope. For example, any application (and not just browsers) can cause these requests to be issued, so addressing this on an application-by-application basis (of which the browser is just another application in its host environment) doesn't seem a reliable or reasonable path.

I suppose I had hoped there was a more clearly articulated threat model. For example, it's certainly reasonable to suggest that allowing arbitrary application control of headers _would_ represent a risk, since that is an unbounded set of potentially hostile inputs. Similarly, making requests with ambient authority, or allowing access to response data, does present risk. I had not thought of the "SOP" protections as restricting the set of network requests that can or should be made, but merely as governing how application-defined access and control behave, given JS. I get the feeling you may be working with a different interpretation, hence the confusion about how this "violates" SOP.

For example, should the browser (or any application) take on an obligation to protect a server from the defined semantics of RFC 6960 (or the predecessor RFCs it obsoletes)? No, I don't think so, because such protection would have to be universally enforced by all applications to be meaningful, and it isn't. Put differently, it doesn't make sense to reimplement this functionality in the browser if, for example, loading a webpage that uses OpenGL loads a GPU driver that fetches an XML DTD from an HTTPS URL via the OS fetching subsystem, thereby reintroducing the problem (which happened with at least one driver when interrogated about its DirectX capabilities).

For what it's worth, the relevant specifications are:
* https://tools.ietf.org/html/rfc6960#appendix-A (a defined Content-Type request header of `application/ocsp-request`, sent to a constructed URL)
* https://tools.ietf.org/html/rfc5280#section-4.2.1.13 (a defined URL that SHOULD be `application/pkix-crl` as the Content-Type returned)
  * Note that this retrieval method _also_ supports retrieving over `ldap://` (implemented by Windows and macOS), retrieving over `file://` (implemented by Windows, including SMB shares), and retrieving over `ftp://` (implemented by Windows)
  * Note that these all use the URL parsing library of the OS, which is different from the browser's (although the NSURL stuff is getting more aligned, the Windows stuff most definitely is not)
* https://tools.ietf.org/html/rfc5280#section-4.2.2.1 (servers should supply `application/pkix-cert` or `application/pkcs7-mime`, except in reality there are about a half dozen types in use, as it's not a MUST)
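For concreteness, here is a minimal sketch of how the RFC 6960 Appendix A HTTP transport frames these requests (the DER bytes below are a placeholder, not a real `OCSPRequest`; the responder URL is hypothetical):

```python
# Sketch of RFC 6960 Appendix A HTTP framing (not a working OCSP client).
import base64
import urllib.parse

def ocsp_get_url(responder_url: str, der_request: bytes) -> str:
    """Appendix A: GET {url}/{url-encoding of base-64 encoding of the DER-encoded OCSPRequest}."""
    b64 = base64.b64encode(der_request).decode("ascii")
    # Percent-encode everything, including the trailing '=' padding.
    return responder_url.rstrip("/") + "/" + urllib.parse.quote(b64, safe="")

def ocsp_post_headers() -> dict:
    """Appendix A: a POST carries the DER-encoded request as the body, with this Content-Type."""
    return {"Content-Type": "application/ocsp-request"}

# Placeholder bytes standing in for a DER-encoded OCSPRequest; illustration only.
placeholder_der = b"\x30\x03\x0a\x01\x00"
url = ocsp_get_url("http://ocsp.example.com", placeholder_der)
```

Note that this is only the HTTP shape of the exchange; the OS libraries that actually issue these requests also handle the non-HTTP schemes listed above.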


To be clear, I agree it's good to find out where we draw our security boundaries and how we draw them. But I think I disagree very much with the initial statement:
> all the pieces of the OS needed to make a browser are part of the browser ecosystem and have to be considered

I view a layered approach as beneficial, just like we don't specify behaviours around, say, TCP Fast Open (which Edge implements, the Chrome team experimented with, and Firefox hopes to experiment with), because that's handled by a different layer. The same goes for the HTTP vs SPDY or HTTP/2 discussions - those requests were left as an exercise to the protocol layer. OCSP, AIA, and CRLDP fetches, plus any other requests made in service of "verify a certificate", are, to me, a call out to a black box whose behaviour is up to that implementation to define, whether or not it uses HTTP. This would be similar to a printing system that used UPnP (which uses HTTP) to discover printers.

Reply to this email directly or view it on GitHub:
https://github.com/whatwg/fetch/issues/530#issuecomment-296221364

Received on Friday, 21 April 2017 15:24:14 UTC