- From: peter williams <home_pw@msn.com>
- Date: Sat, 26 Feb 2011 19:41:57 -0800
- To: "'Ryan Sleevi'" <ryan-webid@sleevi.com>, <public-xg-webid@w3.org>
- Message-ID: <SNT143-ds1240C2DFD6B4441132F3B992DF0@phx.gbl>
Good. We hopefully all know that browsers today show a "mixed security" warning when an HTML page "container" retrieved over https has images (and scripts, and css pointers) on URIs that are sometimes http, sometimes https. Sometimes those linked images (and scripts etc.) are on the same server stem as the HTML page container, sometimes not. If a javascript callback opens up an https URI, this doesn't even get a UI warning, being outside the DOM security model. https has to address this. The WebID protocol (as a revision/profile of https) has to address it.

I've generally found that few folks understand what happens in the https protocol - how it "multiplexes" multiple channels as a necessary consequence of working with hypermedia docs (and linked data, in general). Folks need to recall that a browser can maintain multiple parallel security channels with a website. It's not only that there may be multiple SSL connections outstanding (all keyed off a single SSL session/handshake). There may be 2 or n "groups" of connections to the one site, where each group of connections (all images, or all scripts, say) keys off one particular session handshake. Perhaps 2 sessions and 8 connections, 4 connections per session. Each handshake has a distinct SSL session id - and a different client cert may be in the state of each session id.

Who controls this handshake process - the process that defines an SSL session and gets an SSL session id? The server. The server can decide to maintain 2 parallel SSL sessions (each with 4 connections, say), and decides on each session handshake which CAs to request, and whether client authn is required (or optional). If the image URIs need one assurance, it may say: only VeriSign certs are good (where the CA cert forces use of a browser smartcard or eID card, say).
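The server-steered split described above can be sketched with Python's `ssl` module. This is a simplified assumption-laden sketch, not anything from the WebID spec: the two contexts, the filenames, and the image/script split are all illustrative.

```python
import ssl

# Sketch: one site, two server-side TLS contexts, so two parallel SSL
# sessions with different client-auth policies can coexist.

# Session A (say, for image URIs): client authn required. The CAs loaded
# via load_verify_locations are the ones the server names in its
# CertificateRequest, which can force selection of a smartcard/eID cert.
ctx_images = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx_images.verify_mode = ssl.CERT_REQUIRED
# ctx_images.load_cert_chain("server.pem")            # server cert/key (files assumed)
# ctx_images.load_verify_locations("eid-roots.pem")   # CA list sent to the client

# Session B (say, for script URIs): client authn invited but not forced,
# leaving room for a self-signed (WebID-style) certificate.
ctx_scripts = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx_scripts.verify_mode = ssl.CERT_OPTIONAL
# ctx_scripts.load_cert_chain("server.pem")
```

A real WebID verifier would also have to override certificate verification for the second context, since a self-signed cert chains to no loaded CA.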
If the script URIs need another assurance (because of their object security policy), it may send "no CAs" when requesting client authn, allowing self-signed certs to be selected by the user in the browser selector (and using software crypto, vs. the smartcard, say). This is the topic I want to get on the table, somehow. It's distinct from multiple handshakes on a given connection (secure resumption, or multiple handshakes on the same TCP/IP channel that revise the session's requirements/keying).

The related topic is the metaphor of "click login" to move into https mode for user authn (during which client authn might deliver a client cert whose name maps onto a server-side account, and thus sets a CGI security context). We need to be MORE than that "modal" use of client authn. We have to ensure the WebID protocol works with "mixed browsing", not only the modal login sequence.

On HTML, HTTP and web threats, I'll say nothing. First, let's focus on secure communications and channel theory, since it's been properly understood for years. Web threats due to linking and open hyperlinking (and exploits of interpreted javascript) are a different topic. That is addressed with signed javascript (coming soon, I feel, similar to signed ActiveX or signed Java applets).

From: Ryan Sleevi [mailto:ryan@sleevi.com] On Behalf Of Ryan Sleevi
Sent: Saturday, February 26, 2011 6:49 PM
To: 'peter williams'; public-xg-webid@w3.org
Subject: RE: issue of initiating client auth for parallel SSL sessionids

Hi Peter,

It may help me to understand what you're proposing if you could describe the request flow using HTTP semantics. I'm having a bit of trouble understanding your proposal, and that's making it hard to evaluate the security implications. Something like the simple sequence diagram at [1] would help greatly.

My concern is that you're proposing that a user agent perform the WebID auth sequence over HTTPS/SSL, but then continue the browsing session over unsecured HTTP.
This seems to defeat any guarantee of secure user authentication, which is why I want to make sure I've understood correctly. Two example attacks that would make such a proposal untenable are the injection of malicious scripts [2] and session hijacking [3]. The requests received over HTTP cannot be assured of the WebID accessing them, since the connection may be MITMed; likewise, requests received over HTTPS may have been initiated by malicious script downloaded via HTTP.

Further, maintaining two independent SSL session IDs for a single domain is not something most user agents presently support (Firefox and Chrome come to mind). So while WebID leveraging SSL client auth with a single identity is something that almost every modern browser supports - and they will cache the (computationally and network-expensive) TLS client auth stage - maintaining parallel sessions to the same domain with distinct identities (smart card/eID and WebID) will most likely require browser vendors to change their networking implementations in order to support WebID. This is in addition to the WebID-specific provisions, such as the .crt handling/specialized Accept headers, that seem to be proposed here. I would think that such requirements would prevent any widespread adoption of WebID, because it will require browser vendors to adopt it in order to be widely deployed, but browser vendors typically aren't likely to adopt WebID-specific modifications unless/until it is widely deployed.

In order for WebID (or really any Web-based authentication mechanism, for that matter) to be used securely, the requests, including the initial one [4] [5], need to happen over a secure connection (such as SSL). Once that connection is established, the requests need to continue to happen over that security association if you're going to assume that the identity remains correct.
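The binding of identity to the security association that this argument relies on can be sketched in Python. The session-ID-keyed table and both function names below are hypothetical, purely to illustrate the rule that a request arriving off the authenticated TLS session carries no proven identity.

```python
from typing import Dict, Optional

# Hypothetical bookkeeping: a WebID is trusted only for requests that
# arrive over the TLS session on which it was proven.
authenticated: Dict[bytes, str] = {}   # TLS session id -> WebID URI

def on_webid_verified(tls_session_id: bytes, webid: str) -> None:
    # Called once the WebID protocol has validated the client certificate
    # presented during this session's handshake.
    authenticated[tls_session_id] = webid

def identity_for_request(tls_session_id: Optional[bytes]) -> Optional[str]:
    # A request over plain HTTP (or over a different TLS session) has no
    # proven identity -- exactly the gap the MITM/Firesheep attacks exploit.
    if tls_session_id is None:
        return None
    return authenticated.get(tls_session_id)
```

Under this rule an HTTP request simply has no session id to look up, so no authorization decision can lean on the earlier WebID proof.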
That is, you can only assume the WebID user is "logged in"/"authenticated" if/while every request originates over the HTTPS session that the WebID was provided over.

If you're concerned about the desire to provide authn/authz via multiple certificates, then it should be possible with TLS secure renegotiation [6]. Because each subsequent renegotiation is secured/protected by the previous security establishment, a server could request multiple forms of authentication by sending a HelloRequest and, in the new handshake, requesting a different set of CAs in the CertificateRequest. Under such a scenario, a user can prove possession of a WebID private key in one handshake and then, using that channel, prove possession of a smart card-based private key in a subsequent renegotiation handshake. While such a scenario works at the TLS level, and will still likely require modifications to user agents to fully support (it requires careful thought about the user experience), it has the benefit of accomplishing the same goal without being WebID-specific.

Thanks,
Ryan

[1] http://www.w3.org/wiki/Foaf%2Bssl
[2] https://www.blackhat.com/presentations/bh-usa-09/SOTIROV/BHUSA09-Sotirov-AttackExtSSL-PAPER.pdf
[3] http://en.wikipedia.org/wiki/Firesheep
[4] http://www.thoughtcrime.org/software/sslstrip/
[5] http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
[6] http://tools.ietf.org/html/rfc5746

From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org] On Behalf Of peter williams
Sent: Saturday, February 26, 2011 8:21 PM
To: public-xg-webid@w3.org
Subject: issue of initiating client auth for parallel SSL sessionids

Because of the history of FOAF+SSL, we tend to see demos in which folks go to a site over http, and then use a login button - guarding a protected region of the site (or protected modes). I think we need something more general.
As one browses page index.html, should there be a file X referenced (call it .crt), let the browser connect to its server using https (for that file GET only). Presumably, if the browser knows the mime type of .crt, it populates the accept header with something suitable. What I want is that the validation agent only kicks off when it receives a particular accept header (induced by a containing-page reference that forced population of that accept header on the resource retrieval attempt). The WebID protocol would then run (and set up an SSL session id), but https would not be protecting the connections to the other page elements. As one moves through a site, the SSL session id (due to the WebID protocol) can still guard access using an authorization logic.

What this allows is both classical client authn (using smartcards, in DoD land) and WebID client authn. Now it's easy for the site to maintain 2 distinct SSL sessions: one with CAs controlling the selection of certs (which hits the smartcard/eID), and one which leverages WebID. Those SSL connections on the same site supervised by the smartcard/eID SSL session id obviously leverage the smartcard/eID's crypto, doing SSL connections that offer channel encryption using the *assured* crypto of the card (and applying CA-based cert chaining authn merely to protect the channel's encryption SA). Those SSL connections on the same site supervised by the WebID SSL session id are distinct, influencing "login" authentication and "web sessions" - driving an authorization engine (perhaps based on federated social network conceptions).
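The trigger proposed above can be sketched minimally. Assumptions: the MIME type a browser sends for a ".crt" reference is browser-dependent (the one below is a common mapping, not fixed by this proposal), and the function name is mine.

```python
# Assumed MIME type a browser might put in the Accept header when a page
# reference forces retrieval of a ".crt" resource.
CRT_MIME = "application/x-x509-ca-cert"

def should_run_webid_validation(path: str, accept_header: str) -> bool:
    # Kick off the WebID (FOAF+SSL) validation agent only for the .crt
    # retrieval whose Accept header the containing page induced; requests
    # for other page elements leave the validation agent untouched.
    return path.endswith(".crt") and CRT_MIME in accept_header
```

On the server, a true result would route the GET to the https listener that runs the WebID handshake, while everything else is served as before.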
Received on Sunday, 27 February 2011 03:42:57 UTC