- From: peter williams <home_pw@msn.com>
- Date: Mon, 28 Feb 2011 03:43:27 -0800
- To: "'Ryan Sleevi'" <ryan-webid@sleevi.com>, <public-xg-webid@w3.org>
- Message-ID: <SNT143-ds1142931F28032898F7D06E92DE0@phx.gbl>
Everything you writeup makes a lot of sense. If I was in IETF, the rationales would be pretty self evident; and we could improve the https RFC considerably. Now, forget my notion that a client might indicate a desire to fire up webid protocol run by sending a particular accept header - though I return to this topic ultimately, below. What I wanted to do also was introduce parallel SSL sessions just for a given page, and then get to grips with a) mixed content, and b) contemplate how the multiple sessions would interact with webid. I do that specifically because of the semantic web'ness of this group of https users. Of course, life is easy if the focus of webid is interaction with a single site (all under https anyways, say), with a login button (firing up wbeid protocol, and the SSL client authn full handshake) that changes a web application's mode. The resulting web application is a classical IDP. If one goes to myopenid.com, for years it has allowed one to mint a client cert; authenticate to the IDP site using SSL client authn; and then release an openid assertion to an openid consumer in the classical websso style flow between 2 websites: resource site and IDP. The cert is stored deep within myopenid's profile for users, hidden from users; and that local cert store is presumably used when validating https client authn. Now, what myopenid.com doesn't do (today) is, when acting as the validation agent, bother to check in the foaf card referenced in the cert's SAN_URI field whether the cert is present. Should it, obviously all the myopenid users could use webid protocol as an optional authentication protocol when performing user auth to an IDP, and still use openid auth/ax protocol to work with all the n openid consumer sites. The SAN URI (confirmed by the IDP acting as an webid VA) could easily be sent to relying party sites, as an openid assertion name or ax attribute, enabling apps to make further exploitation of foaf and rdf, etc. Life is simple. If one wants, strip out the term openid above and now write the term SAML2 or the term ws-fedp; makeing no material difference to the statements. Generally, this IDP pattern that all the FOAF+SSL demos exhibit. A more semwebby relying party might avoid the RP<->IDP websso flow, and natively consume the webid protocol. Now, it's that "more semwebby" case I want to focus on (remembering this is a RDF centric group, and has more than one members with specific interest in "linked data" leveraging ONLY RESTful modes of web usage, where modes and states and sessions are "not encouraged"). To get there, we have to look at openid, and see what worked and failed. In my view, it was a total failure - despite what would normally be classified as wonderous adoption by the likes of Yahoo and Google IDPs. (what a coup! Openid has captured the most widely accessed home page on the planet!) So, Why do I say that about the movement? Because the part that WAS all about user-generated content got lost, leaving only corporately managed content/profiles. I'm generalizing a fair amount, trying to look at the major adoption trends (not what's in engineering specs or demos between 1 man and his goat). Here, with this initiative coming from the foaf project, folks prpobably want to see "some" of the user-centric flavor be retained. I don't feel folks are exactly on anti-corporate rants; but they are concerned about the "politics of control." Should a lowly user thus create an RDF graph (in a hand-written foaf card) that cites lots of https URIs including URIs of other foaf cards (quite typical!), what does it mean for a machine to crawl the file's URI pointers - and thus de-reference the URIs - all 100 of them say; on different domains, different ports, different URI stems? Assume the user is largely clueless technically, and might well not follow best security practice. Perhaps, he is 16, web savvy, lives in the Sudan, and has the education of an 8 year old, in the US, with similar access to funds (being the Sudan where folks earn $2 a day). If I click on a webid, what do I expect to happen? Do I expect a webid protocol flow to occur? Well, if we look at foaf.me, that is NOT what happens. It shows public elements of the foaf card, no login required. If one does a modal login (with the infamous login button), it will then show the public/private elements of the same card. Its login button happens to be a websso demo - that could be talking openid auth protocol to myopenid (if myopenid was webid powered as an IDP). Now, let's say Im a foaf group crawler, a machine setup to crawl and then cache (in a "trusted" cache using Lamspon's theories about secure channels) all my friends PRIVATE cards. I can gain access to the good private stuff, because I'm authorized to do so as a particular foaf group member and because of following/follower relationship between foaf cards. The authorization quality is good, but access enforcement is not military or commercial grade (not needing to be). Its webby; I expect you to honor the no-trespassing sign and the symbolic fence; please don't hack through it (though obviously you can, if you bring a chainsaw). How do we accomplish this, using webid protocol? It's a machine consumer acting as a foaf person, not a human person full of energy and vigour (after working 8 hours for $2). Perhaps the machine is a server, acting for a user; OAUTH like (or proxy cert like). How does the machine UA invoke webid protocol, so as to get access rights to the private graphs and then pull them - simply to act as a foaf card crawler and cacher? Surely, it doesn't have to have a custom script, knowing about each particular foaf agents programming of a URI, that fires up off login button's event handler! Why do we care about this? Isn't it enough to simply succeed with the IDP model (saving the planet from everyone having 50 passwords)? I argue that we care because its not a semweb solution, until we have machine readability. Its just a classical web solution. For web solutions we don't need W3C or IETF; one just hires one of a hundred thousand web developers, and one scripts it up the webid validation authority, in a day. Ive already done this 50 times in US realty over the last 4 years, giving site developers a toolkit that offloads all the protocol work to a server (that acts as their VA). Hopefully, they will buy the same thing from Microsoft Azure's ACS fabric solution soon (a multi-tenant VA), instead of me hosting their VA server. But, we have not got to the semantic web. We really have not address it much in the spec yet. But let's not forget that beyond the protocol there is the (non PKI, non GSA bridge CA) trust model to address. I'm not sure the current spec will even touch this topic. But, its in the back of my mind, for one, that once the VA protocol is done, focus returns to the foaf elements of the project (and less on the openid/ssl side). At that point, it's all about understanding, in a UCI world, how foaf cards acts as what windows would call a cert store. It's not a cert store in the registry or AD or a PKCS7 stream in a COFF header - but a cert store in a foaf card that sits on the web, and has private-ish elements (like the graphs of who else is which of my foaf groups, that I define). Finally, when I read the proxy cert RFC, I like it for its broad scope. It's worth reading, here; as its really about a more complex https uses - in a multi-agent world. This fits very much with our assumptions here: that foaf cards ultimately get managed by (a billion) foaf agents - in a multi-agent world. And, it's doing with certs and https channel composition what OAUTH has just shown in the web2.0 world is VERY desirable. From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org] On Behalf Of Ryan Sleevi Sent: Sunday, February 27, 2011 1:31 PM To: 'peter williams'; public-xg-webid@w3.org Subject: RE: issue of initiating client auth for parallel SSL sessionids Hi Peter, When defining a new protocol, especially one that is meant to operate in the context of or in conjunction with a user browser, I don't think you can so easily dismiss the reality of market forces and deployments. In the IETF space, this is always central when discussing/designing a protocol. Just look at how long WebSockets has taken/is taking to emerge, even with the substantial support of browser vendors, site operators, and users. It's taken so long precisely because of the need to make the protocol work with existing infrastructure, as that is a pre-requisite for anyone being able to adopt/deploy it at any large scale. So when we talk about WebID Protocol as a means of identification/authentication, it's absolutely essential to keep in mind the environments that are deployed today. The communication between the Identification Agent and the Validation Agent should, as much as possible, be something that will be compatible with the largest possible environment. If I have to customize my network or software in some special way in order for the Identification Agent and Validation Agent to communicate, then you'll see the WebID Protocol/FOAF+SSL dead in the water. While the discussion about SSL session IDs is certainly useful to a degree, I still feel I must respectfully request you explain your original proposal. I feel that we have diverged on a tangent that, while interesting, has still left me with great confusion about what you were proposing with respect to File X, Accept headers, and mixed content. I just want to make sure that our interesting discussion here doesn't miss an opportunity to better understand what you were proposing in your original e-mail and the requirements it might impose on user agents. For TLS, as I mentioned in the previous e-mail, most implementations are relying on some form of key to session ID mapping. When an SSL session is established with a peer, and the key of the new session matches an existing key, then the client will propose to the server that an abbreviated, resumption handshake take place. This key may be supplied by the application to the TLS layer directly (such as on Secure Transport on OS X or NSS as used by Chrome/Firefox) or it might be inferred from other data supplied (see InitializeSecurityContext's pszTargetName, for Windows, which informs the certificate validation routines AND the SSL session cache). What you find is that most browsers are constructing the key off [host, port] (or in IE, [CredHandle, Host]). So if you have one host, with two different ports, then they will keep two session IDs in the cache. However, the Port A session ID will never "leak" to Port B. Likewise, Host A and Host B can also have independent session IDs, so connections to Host A will propose ID 1 and connections to Host B will propose ID 2. So let's look at three examples, and what may happen: Example 1) Document is hosted at https://example.com/index.html. It directly references an image at https://example.org/image.jpg and a script element at https://example.net/script.js [Differing hostnames, same ports] Example 2) Document is hosted at https://server.example.com/index.html. It directly references an image at https://image.example.com/image.jpg and a script element at https://javascript.example.com/script.js [Differing subdomains, same ports] Example 3) Document is hosted at https://example.com/index.html. It directly references an image at https://example.com/image.jpg and a script element at https://example.com/script.js [Same hostnames, ports] For this example, we'll assume that index.html is a resource restricted to the credentials stored on the smart card. Likewise, image.jpg is a resource restricted to the/a WebID associated with the user. Further, when I refer to "Session ID" below, it is not the literal value (assigned by the server), but a monotonically increasing value representing the number of session IDs that a client has seen. So one full handshake results in Session ID 1, and the next full handshake (regardless of peer) will result in Session ID 2. Example 1) Client's SSL session cache looks like [ [ [example.com, 443], Session ID 1], [ [example.org, 443], Session ID 2], [ [example.net, 443], [Session ID 3] ] ] Example 2) Client's SSL session cache MAY look like [ [ [server.example.com, 443], Session ID 1], [ [images.example.com, 443], Session ID 2], [ [javascript.example.com, 443], Session ID 3] ] or it MAY look like [ [ [example.com, 443], Session ID (1 or 2 or 3)] ] Example 3) Client's SSL session cache looks like [ [ [example.com, 443], Session ID (1 or 2 or 3)] ]. Retrieval of image.jpg may fail, or the user may be prompted 2-4 times to select a certificate, needing to alternate between WebID and Smart Card. Example 2 and 3 both depend on the timing/ordering of requests, specifics about the TLS library being used, the type of certificate configured on the server, and how the server behaves when it wants to request the "Smart Card" vs "WebID" credentials. However, while this is all interesting and well, I don't think that this particular "problem" is one that needs to be dealt with specifically at a WebID level. It is a combination of aspects of the TLS protocol itself and client library behaviours that can and are addressed outside the scope of specific WebID-specific behaviours. IF a server operator wishes to identify a user with multiple certificates, there are a number of ways they can do so without having to invoke any WebID-specific behaviours. WebID + smart card is no different than smart card a + smart card b, from a TLS perspective, which for better or worse is possible, just bothersome. Since you mention IDPs like Google, Facebook, etc, I think it should be pointed out that their value as an IDP comes from the fact that they are widely used by millions of users. They are widely used by millions of users because millions of users CAN use them widely; that is, there are no particular network or software requirements, beyond compliance to standards established a decade ago or longer. Further, such sites still have to deal with and work around software that doesn't even adhere to those standards - all of the downlevel checks and fallbacks - which further expands their potential and actual deployment size. In talking about WebID, while it is a great opportunity to be truly innovative at the Validation Agent side, I cannot reiterate enough the need that the communication between the Identification Agent and Validation Agent happen in a nice, "unsurprising" way, that can work without any changes to the wide number of legacy deployed user agents (think about how wide IE6 still is), and on a wide variety of platforms (such as mobile space, which sees infrequent updates). As it stands, the protocol described today does afford that, albiet with some UI issues (WebID-ISSUE-15 and WebID-Issue-14). This is why I want to understand your original proposal so much, because as I understood it, you were proposing behaviours/requirements that would break this compatibility. This concern would also apply to things such as changing the TLS handshake (WebID-ISSUE-19), since both Identification Agents and Validation Agents would need to have their TLS libraries updated to support such extensions. In a practical sense, this would make it untenable for deployment as an existing auth method, which would be very unfortunate to its utility as an general purpose authn/identification method. Regards, Ryan From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org] On Behalf Of peter williams Sent: Sunday, February 27, 2011 9:56 AM To: 'Ryan Sleevi'; public-xg-webid@w3.org Subject: RE: issue of initiating client auth for parallel SSL sessionids Im very happy with this thread, mostly as it educates us all on https' nature. We are getting beyond SSL (handshakes and security associations), and moving into https. https is all about linked data. I don't know what most browsers do, I only focus on windows - in which the native browser is just an artifact of an OS - an application like any other. https is an OS service, which has to support many https consumers (including the one in a frame in a window, known as IE). As an evaluated OS, the vendor has to make strength and assurance claims for its components, including networking component of the system. The classical browser is just an https consume. The system http proxy (per user) is another https consumer. Thi relation between application (browser,proxy) to https service invokes the architecture of the original SSL model (secure "socket" layer, remember) in which SSL is delivered to consumers of socket "services" implemented by a stack of protocols behind the socket - to whoever has access rights to that particular socket and its particular sub-layer configuration. So, I need help now. Tell me what happens in Mozilla or Opera (say) when I visit a page on a server at https://server.com/index.htm and the page comes back with (1) an auto-rendered image tag whose href is https://images.com/image.jpg AND (2) an auto-evaluated javascript reference to https://libs.javascript.com/analytics.js Now, how many SSL sessions so we have in the one browser instance, for one page? How many SSL sessionids are there, in the SSL cache at the browser? If each of those sites (server.com, images.com and libs.javascript.com) did the webid protocol run (making 3 for index.htm's delivery, rendering and evaluation) AND each server asked for client certs and SSL clientauthn, what will happen at the browser and it's client-side SSL cache? (for just that one 1995-era page, recall.) If server.com sets the SSL handshake requirement that client authn MUST use a CAs that links to eID smartcards, then surely the supproting smartcard and its cryptomodule would be used by the browser that pulls the index.htm hypermedia document. Thus, the assurance of that channel (retrieving server.htm) is that of the smartcard's crypto module since its chips are doing the ECC and AES ciphers, not the i686 chip running the browser and https component) If images.com sets the SSL handshake requirement that client authn may use any CA or rather no CA (i.e. the webid case) this pops up the per user's cert selector (the first time), if the user has multiple certs that match the CA rules. (On opera, the cert selector appears when there is 1 or more match, for reference.) Now, of course its true that a client can refuse to accept the server's requirements. As a result, communication will typically just not happen over https - a gross "access denied" case by default, since the client is simply denied access to the communication channel at the server's endpoint, never mind the resources behind that port. Now you don't have to be formal with me. Be informal in the writing style, and feel free to imply that Im wrong, stupid, or just mis-understanding. I say such things about myself about 3 times a week (typically, because its true). What we will have to do, at some point, is reduce it to testable cases, once we find an interesting cases through debate. It's a very "middle-in" design space (vs top-down say), in my view . Its also in my mind that WHATEVER browser vendors do today, they don't have to do tomorrow. Apparently infocard is out, but signed wrap/json is coming in. Websso's site-site ping/pong protocols were a dead space 2 years ago (though we used it heavily in US realty), but will explode this year in the SAAS space - now Google, Facebook, Live etc are all IDPs. In general, Webby folks keep reminding me that this is not IETF. Its W3C, where one is finding novel conditions for viral takeoff, with executed by anyone with as few hinderances as possible (and this might means a new crypto/https model). For all I know, facebook will make a browser with 50% marketshare, within 3 years time. From: Ryan Sleevi [mailto:ryan@sleevi.com] On Behalf Of Ryan Sleevi Sent: Saturday, February 26, 2011 10:32 PM To: 'peter williams'; public-xg-webid@w3.org Subject: RE: issue of initiating client auth for parallel SSL sessionids Hi Peter, I respectfully have to disagree with you that the server controls the handshake process. This is because TLS is a client-initiated protocol, which always begins with the ClientHello. The server's ability to negotiate a security association is dependent on the parameters first specified by the client, which include the SSL session ID. If a client only ever provides a single session ID (and most browsers simply implement their SSL session cache by storing a single session ID in a map keyed off host/port being accessed), then the server will only ever be able to resume that single SSL session ID. The server can certainly choose to reject the session ID, causing a new, full SSL handshake to take place and a new session being established, but beyond that it cannot influence the SSL session ID the client may propose. Few, if any, user agents/browsers are capable of maintaining parallel secure sessions, with one session authenticated via a smart card credential and the other identified via a WebID, to the same server. Because of this, if you wish to both authenticate a user (via their smart card) and somehow identify them (via their WebID), then you absolutely need to be speaking about SSL renegotiation, or you have to be talking about stateful, application-specific knowledge (such as HTTP cookies) However, rather than diverging into a discussion of the multiplexing of HTTP, SSL sessions, cookies, and concerns such as multithreading, I'd more like to focus on your statement that "We have to ensure the WebID protocol works with mixed browsing". To be clear, my understanding of the "mixed browsing" that you're talking about is specifically requesting resources over both HTTP and HTTPS. If I'm mistaken, please clarify what is meant by the term there. As it stands, you cannot mix both HTTP and HTTPS requests while also being able to relate a particular (HTTP) request to a given identity. If you wish to securely authenticate an identity, you must be performing it over HTTPS, exactly because of all the "HTML, HTTP, and web threats" seemingly dismissed. In your original message, you stated "As one moves through the site, the SSL session id (due to webid protocol) can still guard access using an authorization logic". However, if those requests aren't happening over HTTPS, then they are not guarded/authenticated/authorized in any way. If they are happening over HTTPS, then there is no need for a special file or header - the TLS session itself provides all of the identity and security assurances necessary. If you wish to map a single user agent to multiple identities (smart card, WebID), then the server must be prepared to perform SSL renegotiation for any request it receives over a given communication channel. Another piece that concerns me is how it relates to Issue 18 in the tracker. If the constraint of HTTP+TLS as the transport between the Identification Agent and the Verification Agent is removed, so as to allow arbitrary application protocols, then it would seem that HTTP-specific semantics don't fit in the specification. Any dependency on application-layer specific behaviour seems like it speaks to a weakness in the protocol itself, and should be solved in an HTTP-agnostic way. I feel that I have a very good grasp of TLS, smart cards, HTTP, authentication, and security, and must admit that I'm quite confused as to what exactly you're proposing and trying to accomplish, which is why I was hoping for more explanation, perhaps with a concrete workflow. As it is, you've proposed a means of maintaining connections as multiple simultaneous client identities (something most browsers do not support), a magic file (File X, aka .crt) which is meant to convey some piece of information that isn't immediately clear, a special MIME type sent in the Accept headers also meant to convey some contextual piece of information, and a means of authentication over HTTPS for HTTP resources, a path that seems to go counter to the path being taken by seemingly every new security protocol. I'm not trying to dismiss or misrepresent the proposal, I'm just trying to understand in very concrete terms what you're proposing as the expectations are for an Identification Agent that wishes to speak HTTP as the protocol, and the security guarantees that are afforded. Thanks From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org] On Behalf Of peter williams Sent: Saturday, February 26, 2011 10:42 PM To: 'Ryan Sleevi'; public-xg-webid@w3.org Subject: RE: issue of initiating client auth for parallel SSL sessionids Good. We hopefully all know that browsers today show a "mixed security" warning, when an HTML page "container" retrieved over https has images (and scripts, and css pointer) on URIs that are sometimes http, sometimes http. Sometimes those linked images (and scripts etc) are on the same server stem as the HTML page container, sometimes not. If a javascript call back opens up an https URI, this doesn't even get a UI warning, being outside the DOM security model. https has to address this. Webid protocol (as a revision/profile of https) has to address it. I've general found, few folks understand what happens in the https protocol - as it "multiplexes" multiple channels as a necessary consequence of working with hypermedia docs (and linked data , in general). Folks need to recall that a browser can maintain multiple parallel security channels with a website. Its not only that there may be multiple SSL connections outstanding (all keyed off a single SSL session/handshake). There may be 2 or n "groups" of connections to the one site, where each group of connections (all images, all scripts, all. say) cues of 1 particular session handshake. Perhaps, 2 sessions, and 8 connections, 4 connections per session perhaps. Each handshake has a distinct SSL sessionid - and different client certs may be in the state of that sessionid. Who controls this handshake process - that defines an SSL session, and gets a sslsessionid? The server. The server can decide to maintain 2 parallel SSL sessions (each with 4 connections say), and decides on each session handshake which CAs to request, and if client authn is required (or optional). If the images URIs need one assurance, it may say: only VeriSign certs are good (where the CA.cert forces uses of a browser smartcard or eID card, say). If the script URI need another (because of its object security policy), it may send no CAs" when requesting client authn, allowing self-signed certs to be selected by the user in the browser selector (and use software crypto, vs the smartcard say). This is the topic I want to get on the table, somehow. Its distinct from multiple handshakes on a given connection (secure resumption, or multiple handshakes on the same TCP/IP channel that revise the session requirements/keying). The related topic is that the metaphor of "click login" to move into https mode for user authn (during which client authn might deliver a client cert which name maps onto a server-side account, and thus sets a CGI security context). We need to be MORE than that "modal" use of using client authn. We have to ensure webid protocol works with "mixed browsing", not only the modal login sequence. On HTML, HTTP and web threats , I'll say nothing. First, let's focus on secure communications and channel theory, since its properly understood for years. Web threats. DUE TO linking and open hyperlinking (and exploit of interpreted javascript) is a different topic. That is addressed with signed javascript (coming soon, I feel, similar to signed activeX or signed java applets). From: Ryan Sleevi [mailto:ryan@sleevi.com] On Behalf Of Ryan Sleevi Sent: Saturday, February 26, 2011 6:49 PM To: 'peter williams'; public-xg-webid@w3.org Subject: RE: issue of initiating client auth for parallel SSL sessionids Hi Peter, It may help me to understand what you're proposing if you could describe the request flow using the HTTP semantics. I'm having a bit of trouble understanding your proposal, and that's making it hard to evaluate the security implications. Something like the simple sequence diagram at [1] would help greatly. My concern is that you're proposing that a user agent perform the WebID auth sequence over HTTPS/SSL, but then continue the browsing session through unsecured HTTP. This seems to defeat any guarantee of secure user authentication, which is why I'm wanting to make sure I've understood correctly. Two example attacks that would make such a proposal untenable are the injection of malicious scripts [2] or session hijacking [3]. The requests received over HTTP cannot be assured of the WebID accessing them, since the connection may be MITMed, and likewise, requests received over HTTPS may have been initiated by malicious script running downloaded via HTTP. Further, the idea of maintaining two independent SSL session IDs for a single domain is not something most user agents presently support (Firefox and Chrome come to mind). So while WebID by leveraging SSL client auth with a single identity is something that most every modern browser supports, and they will cache the (relatively expensive, computationally and network) TLS client auth stage, maintaining parallel sessions to the same domain, with distinct identities (smart card/eid and WebID) will most likely require browser vendors to change their networking implementations in order to support WebID. This is in addition to the WebID-specific provisions such as .crt handling/specialized Accept headers that seem to be proposed here. I would think that such requirements would prevent any widespread adoption of WebID, because it will require browser vendors to adopt it in order to be widely deployed, but browser vendors typically aren't likely to adopt WebID-specific modifications unless/until it is widely deployed. In order for WebID (or any really any Web-based authentication mechanism, for that matter) to be used securely, the requests, including the initial one [4] [5], need to happen over a secure connection (such as SSL). Once that connection is established, then the requests need to continue to happen over that security association if you're going to assume that identity remains correct. That is, you can only assume the WebID user is "logged in"/"authenticated" if/while every request originates over the HTTPS session that the WebID was provided over. If you're concerned about the desire to provide authn/authz via multiple certificates, then it should be possible with TLS secure renegotiation [6]. Because each subsequent renegotiation is secured/protected by the previous security establishment, a server could request multiple forms of authentication by sending a HelloRequest, and in the new handshake, requesting a different set of CAs in the CertificateRequest. Under such a scenario, a user can prove their possession of a WebID private key in one handshake and then, using that channel, prove their possession of a smart card-based private key in a subsequent renegotiation handshake. While such a scenario works at a TLS level, it will still likely require modifications to user agents to fully support, as it requires careful thought about the user experience, it has the benefit of accomplishing the same goal without being WebID-specific. Thanks, Ryan [1] http://www.w3.org/wiki/Foaf%2Bssl [2] https://www.blackhat.com/presentations/bh-usa-09/SOTIROV/BHUSA09-Sotirov-Att ackExtSSL-PAPER.pdf [3] http://en.wikipedia.org/wiki/Firesheep [4] http://www.thoughtcrime.org/software/sslstrip/ [5] http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security [6] http://tools.ietf.org/html/rfc5746 From: public-xg-webid-request@w3.org [mailto:public-xg-webid-request@w3.org] On Behalf Of peter williams Sent: Saturday, February 26, 2011 8:21 PM To: public-xg-webid@w3.org Subject: issue of initiating client auth for parallel SSL sessionids Because of the history of FOAF+SSL, we tend to see demos in which folks goto a site with http, and then use a login button - guarding a protected region of the site (or protected modes). I think we need something more general. As one browsers page index.html, should there by a file X referenced (call it ..crt), let the browser connect to its server using https (for that file GET, only). Presumaly, if browser knows the mime type of .crt, it populates the accept header with something suitable. What I want is that the validation agent only kick off when it receives a particular accept header ( induced by a containing page reference that forced population of that accept header on the resource retrieval attempt). Webid protocol would then run (and setup an SSL sessionid), but https would not be protecting the connections to other page elements. As one moves through a site, the SSL sessionid (due to webid protocol) can still guard access using an authorization logic. What this allows is both classical client authn (using smartcards, in DOD land) and webid client authn. Now, it easy for the site to maintain 2 distinct SSL sessions, 1 with CA's controlling the selection of certs (which hits the smartcard/eID) and 1 which does leverages webid. Those SSL connections on the same site supervised by the smartcard/eID SSL sessionid obviously leverage smartcard/eID's crypto, doing SSL connections that offer channel encryption using the *assured* crypto of the card (and applying CA-based certg chaining authn .merely to protect the channel's encryption SA). Those SSL connections on the same site supervised by the webid SSL sessionid are distinct, influencing "login" authentication and "web sessions" - driving an authorization engine (perhaps based on federated social network conceptions)
Received on Monday, 28 February 2011 11:44:27 UTC