- From: Peter Williams <home_pw@msn.com>
- Date: Sat, 3 Dec 2011 14:14:02 -0800
- To: Henry Story <henry.story@bblfish.net>
- CC: "kidehen@openlinksw.com" <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>
- Message-ID: <snt0-eas27C05F01756F95DBF661F692B70@phx.gbl>
The SAML2 folks believe that the session controls the web app. It's not an authoritative hint (one that makes users' lives better); it's an entire architectural vision - one that nobody uses outside of US/UK academia.

Imagine that a site has 2 login methods: a traditional local challenge, and WebSSO. On first visit, WebSSO users have to pass the local challenge to show control over the account name that some party now authoritatively binds to the WebSSO subject name. This can be avoided on subsequent visits, since the lookup can key off the same WebSSO subject name. Perhaps 5 WebSSO names bind to the local id.

Some folks believe that the WebSSO token controls the web app session; i.e. the previous model should be disallowed, being impure. If the token expires without being renewed, the authn guard at the web app must close the HTTP channel. The web app and the IdP are closely coupled, that is. Others say that all that happened is that the same session/name/token was merely translated: it boots a local session at the web app, managed no differently to a session minted by a traditional challenge. The SAML subject (read: WebID URI), once validated, is a way of avoiding the local password challenge (and nothing more). It's a loosely coupled system.

OpenID made the error of assuming folks would go redesign apps to leverage the URI, tightly coupling web app design to OpenID/URI (vague) semantics. They don't. In practice they just use it to translate sessions, and to map names (trivially) between autonomous systems. I suspect this group will make the same error as early OpenID, assuming that WebID auth is more than a mapping of subject names. Folks will probably get very religious about how "true" WebID-powered apps will leverage the URI in ways beyond that linking interpretation.
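The loosely coupled "session translation" model described above can be sketched as follows. This is a minimal illustration only; the class and all names in it are hypothetical, not drawn from any real WebSSO implementation.

```python
# Sketch of the loosely coupled "account linking" model: a WebSSO subject
# name is bound to a local account on first visit (after the user passes
# the traditional local challenge); on later visits the mapping alone
# boots an ordinary local session. All names are illustrative.

class AccountLinker:
    def __init__(self):
        # n-to-1 mapping: several WebSSO subject names may bind to one local id
        self.links = {}
        self.local_passwords = {"alice-local": "s3cret"}

    def login(self, websso_subject, local_account=None, password=None):
        """Return the local account for a validated WebSSO subject name."""
        if websso_subject in self.links:
            # Subsequent visit: the token merely translates into a local session.
            return self.links[websso_subject]
        # First visit: the user must pass the old local challenge to bind
        # the WebSSO subject name to the local account.
        if self.local_passwords.get(local_account) != password:
            raise PermissionError("local challenge failed; cannot link")
        self.links[websso_subject] = local_account
        return local_account

linker = AccountLinker()
# First visit: binding requires the local challenge.
acct = linker.login("https://idp.example/alice#me", "alice-local", "s3cret")
# Later visit: the WebSSO subject name alone is enough.
acct2 = linker.login("https://idp.example/alice#me")
# A second WebSSO name can bind to the same local id (n-to-1).
linker.login("mailto:alice@example.org", "alice-local", "s3cret")
```

Note the web app here stays wholly ignorant of the IdP after linking; the token's only job was to select a local session.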
Though I don't object to such value-add (it looks like great research), let's wait and see, and watch how those who just do WebID validation (and link names to traditional foaf:accounts) will somehow be labelled as incomplete, lacking, having inappropriate architecture, etc. Ok, off to study something new.

Sent from my iPhone

On Dec 3, 2011, at 6:50 AM, "Henry Story" <henry.story@bblfish.net> wrote:

>
> On 3 Dec 2011, at 07:09, Peter Williams wrote:
>
>> This sounds very similar to the NR key usage in the cert world, which got trapped between two semantic traditions. It also sounds a little like the account linking concept that struggles formally in SAML1 and SAML2. (SAML2 denies it exists, but 99% of the world's users use SAML2 as if it were a SAML1 account linking flow.) The latter is the key thing, I think. When we get a Google OpenID claim delivered as a SAML1 token from the Azure ACS gateway, we could not care less what the Google name is, other than that it's the same blob each time - which we map to the RP-side account. It's just "Google says unique X", and Microsoft says Google is Google.
>>
>> In our world, the cert format itself declares the field to be a name; it's a SAN. That's the end of it, for the purposes of the validation protocol.
>>
>> We have to assume that once the validator has performed the steps in the scheme, the name is linked to a foaf:account (which may be stored in a local triple db of the verifier, rather than in the profile doc). As in the real world of WebSSO, social IdP names are mapped to local accounts on first visit, where the user must pass the old challenge to complete the binding. The local account -> SAN URI mapping is stored at the verifier. Typically, it's an n-to-1 mapping (for reasons of cryptopolitics). Access control is then done on the account using the local TCB.
>>
>> For WebID purposes, it's a SAN URI.
>> In the "account linking interpretation" its only function is (a) to locate a profile document, which must be a live document from the web, and (b) to supply the value used in the ASK, as-is. If the ASK is true, the verifier is acknowledging that the account name has been implicitly used (and that the authentication guards have been satisfied). Local security authorities then take over.
>>
>> If the SAN URI is mailto:peter@foo.com, then the Hammer stack may be used, rather than web GETs as in the http scheme, to confirm the profile document "exists on the web".
>>
>> I recognize that, just like in every other scheme, folks want a native authz and trust-chaining logic to exist, and folks may "shudder" at the account linking interpretation. I don't mind folks having the formal authz that may need "more" correct naming that fits the logic's assumptions (just like OpenID had its own mashup logics that made OpenID work with pingbacks, etc.), but also be aware of what folks do, on the ground, with WebSSO, regardless of what the standards architects design. After 10+ years of effort on WebSSO, in at least 4 different versions of the same protocol (and BrowserID essentially #5), 99% of adopting RP sites vote with their feet - and just account link.
>
> I don't follow all of that history up there, but on linking I can say that the semantic web is designed around the concept of links. So it is no accident that you can do account linking into pre-web systems (e.g. LDAP), or across WebIDs, or even across protocols. So I am not sure who is going to shudder. Not us for sure.
>
> Henry
>
>>
>> This is hopefully my last message for a few weeks. I'm off to do something else that needs to consume 101% of my attention (if I'm to pass).
>>
>> From: henry.story@bblfish.net
>> Date: Fri, 2 Dec 2011 22:41:04 +0100
>> CC: kidehen@openlinksw.com; public-xg-webid@w3.org
>> To: home_pw@msn.com
>> Subject: Re: default hashtags
>>
>> On 2 Dec 2011, at 22:20, Peter Williams wrote:
>>
>> Hmm.
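The "account linking interpretation" quoted above - (a) dereference the SAN URI to a live profile document, (b) use it as-is in the ASK, then hand over to local security authorities - can be sketched like this. The callables and account names are stand-ins, invented for illustration.

```python
# Sketch of the account-linking interpretation of WebID verification:
# (a) the SAN URI locates a live profile document, (b) the SAN URI is
# used as-is in the ASK; if the ASK holds, the local security authority
# takes over via a SAN URI -> local account mapping.
# fetch_profile and ask are hypothetical callables, stubbed below.

def verify_and_link(san_uri, fetch_profile, ask, local_accounts):
    profile = fetch_profile(san_uri)      # (a) must be a live web document
    if profile is None:
        return None
    if not ask(profile, san_uri):         # (b) SAN URI used in the ASK, as-is
        return None
    # ASK true: the verifier has acknowledged the name; the local
    # security authority now controls access via its own account.
    return local_accounts.get(san_uri)

accounts = {"http://yorkporc.blogspot.com/#": "peter-local"}
acct = verify_and_link(
    "http://yorkporc.blogspot.com/#",
    fetch_profile=lambda uri: {"uri": uri},   # stubbed live document
    ask=lambda profile, uri: True,            # stubbed ASK result
    local_accounts=accounts,
)
```

The point of the sketch is the hand-off: nothing after the `ask` call knows or cares about WebID; it is pure local access control.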
>>
>> I don't know what to do, despite all the words :-). I don't know if the FCNS test site is conforming or not, in its Accept behaviour.
>>
>> It is probably conforming in its Accept behaviour. You as a user will probably have issues down the road if you don't stick to the spec. You may not see them immediately; it may take a lot of time in fact. Essentially you may be saying that you are a document, which may or may not have consequences.
>>
>> If you want to read about that, there are huge threads on the web about it. I would very much rather we not get into that here.
>>
>> I do know that anyone can mint a cert, and anyone can stick any crap in one (particularly the self-signed variety). I do know that anyone can get a 90-day eval of Windows 2008 R2 EE and run a Windows CA themselves, into whose certs the admin can stick any old crap... I know that installing OpenSSL binaries is a matter of no effort, as is running its little command-line tool. Running makecert.exe is not much harder in the Windows installs of a million developers using Microsoft compilers. (Remembering the ever-moving path to its bin directory is the hardest part.)
>>
>> I do know that validation agents are responsible for formulating the query, and for choosing which of the n URI options **to try** to dereference. They have responsibility for filtering the URIs *suggested*, that is.
>>
>> Are we saying that a good/better validation agent would IGNORE certain URIs if they don't meet some rule? If so, the spec needs to say it, specify it, or give a big hint as to how important it is. Could we say that UNLESS you have an RDFS reasoner attached to the engine evaluating the ASK query, you ignore this or that member of the URI set, depending on whether it terminates with / or a hashtag?
>>
>> There are just consistency issues that can pop up, but they are outside the scope of our work here. The spec will give examples of good behaviour.
>>
>> That seems perfectly reasonable counsel to give implementors.
>>
>> The spec tells implementors of end-user cert-creation tools to put a # in there.
>>
>> Our goal is not to make the most complicated way ever designed of screen-scraping 2 strings from a text file on a TCP/IP port. It now has to justify all the complexity, because the side effects are WORTH IT. And this means the semantic web BENEFITS have to shine (without being a pain in the ass, requiring special web servers, or requiring a PhD in querying). Anyone who has done a 2-week course in Prolog should be able to handle this.
>>
>> Indeed, that is why we have one simple ASK query, and also why we don't do screen scraping.
>>
>> For example, in my code, I rejected those URIs whose servers would not give me an OK for a GET on the URI. I.e. if the site redirected me, I ignored the URI from the cert (and moved on to the next one, in ASN.1 order). This is the kind of advice the spec needs to give. It *can* give required handling rules. The rules need NOT be idealisms or super-correctness, as in an academic programming exam, but whatever facilitates WebID adoption based on *value*.
>>
>> The spec says choose the order you want.
>>
>> Yes, I now want to know HOW (and when) to exploit equivalencies, so that I can compute the foaf card of the user from my local triple store AND the graph pointed to by the WebID. Then the semantic web and foaf cards start to get real. I'd be doing something I cannot easily do otherwise.
>>
>> Your issue was that you said you had the following URLs:
>> http://yorkporc.blogspot.com/
>> http://yorkporc.blogspot.com/#
>> http://yorkporc.blogspot.com/2011/11/2uri.html#me
>>
>> So whichever ones satisfy the query are the ones the server can use to identify you, including all three if they all verify.
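Peter's filtering rule above - try SAN URIs in certificate (ASN.1) order, and ignore any whose server does not answer a GET with a plain 200 OK - can be sketched as follows. `fetch_status` stands in for a real HTTP client, and the simulated responses are made up for illustration.

```python
# Sketch of the URI-filtering policy described above: a validation agent
# walks the SAN URIs in certificate (ASN.1) order and skips any URI
# whose server does not answer a GET with a direct 200 OK (e.g. because
# it redirects). fetch_status is a stand-in for a real HTTP client.

def pick_candidate_uris(san_uris, fetch_status):
    """Return the SAN URIs worth dereferencing, in certificate order."""
    candidates = []
    for uri in san_uris:
        if fetch_status(uri) == 200:   # keep only direct OK responses
            candidates.append(uri)
        # redirects (3xx) and errors are ignored, per the policy above
    return candidates

# Simulated responses for the three URIs from the thread:
statuses = {
    "http://yorkporc.blogspot.com/": 200,
    "http://yorkporc.blogspot.com/#": 200,
    "http://yorkporc.blogspot.com/2011/11/2uri.html#me": 301,
}
uris = list(statuses)
kept = pick_candidate_uris(uris, statuses.get)
```

As the thread notes, the spec leaves the trial order up to the agent; this sketch simply preserves certificate order.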
>>
>> Henry
>>
>> Date: Fri, 2 Dec 2011 15:29:47 -0500
>> From: kidehen@openlinksw.com
>> To: public-xg-webid@w3.org
>> Subject: Re: default hashtags
>>
>> On 12/2/11 1:50 PM, Peter Williams wrote:
>>
>> The test site is behaving as I want (though I don't know if it's conforming, or going "beyond" the spec). It's natural, and useful. It works well with the same blog site also serving as an openid delegation point.
>>
>> To accomplish the following, all I did was what is "user natural". I took my RDFa from the spec, changed the mod value, changed to integer typing for the exponent, duplicated that... so a second graph has a localid of #, added an openid relation to the #-identified graph, and made a cert with 3 URIs, as shown below.
>>
>> If the following holds true to the spirit of this movement, I'll stop putting #tags in the URIs of my certs (assuming that the RDFa marks the graph with the default # tag).
>>
>> * Checking ownership of certificate (public key matches private key)... PASSED (Reason: GENEROUS)
>> * Checking if certificate contains URIs in the subjectAltName field... PASSED
>> * Found 3 URIs in the certificate (a maximum of 3 will be tested).
>> * Checking URI 1 (http://yorkporc.blogspot.com/)...
>>   - Trying to fetch and process certificate(s) from webid profile...
>>     Testing if the modulus representation matches the one in the webid (found a modulus value)...
>>     Testing modulus... PASSED
>>     WebID=b94692148969aeb.......c165dfa03526b25
>>     Cert =b94692148969aeb.......c165dfa03526b25
>>     Match found, ignoring further tests!
>> * Authentication successful!
>>
>> Your certificate contains the following WebIDs:
>>
>> http://yorkporc.blogspot.com/
>> http://yorkporc.blogspot.com/#
>> http://yorkporc.blogspot.com/2011/11/2uri.html#me
>>
>> The WebID URI used to claim your identity is:
>>
>> http://yorkporc.blogspot.com/ (your claim was SUCCESSFUL!)
>>
>> Your choice of "/" or "#" terminated URI re.
>> WebID verification is important, since we are using hyperlinks as object names/handles rather than as object access addresses (URLs). Basically, good old indirection-based data access by reference. This fidelity comes into play when you actually put WebID to use performing basic equivalence reasoning. This is why http: scheme hyperlinks are unintuitive object identifiers: they are more commonly used as resource access addresses. This is why a mailto: scheme URI + WebFinger within the context of WebID works more intuitively; you don't have the burden of Name vs. Address disambiguation. Of course, you then end up with a different cost re. data access, but that's covered on the XRD front via the Hammer stack [1].
>>
>> The SPARQL ASK is of the form:
>>
>> PREFIX : <http://www.w3.org/ns/auth/cert#>
>> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
>> ASK {
>>   <ObjectID-Which-Maybe-Hash-or-Slash-terminated> :key [
>>     :modulus "{modulus}"^^xsd:hexBinary;
>>     :exponent "{exponent}"^^xsd:integer;
>>   ] .
>> }
>>
>> For now, I encourage you to stick with keeping the "#" in use while in user mode.
>>
>> Links:
>>
>> 1. http://hueniverse.com/2009/03/the-discovery-protocol-stack/ -- Hammer stack.
>>
>> Kingsley
>>
>> Date: Fri, 2 Dec 2011 13:18:26 -0500
>> From: kidehen@openlinksw.com
>> To: public-xg-webid@w3.org
>> Subject: Re: default hashtags
>>
>> On 12/2/11 12:53 PM, Peter Williams wrote:
>>
>> My brain is such that I don't remember technical stuff for more than a few months unless it's refreshed. I don't remember the rules of hashtags anymore.
>>
>> If I put http://yorkporc.blogspot.com/ in the SAN URI of the certs, will that get treated as if it were http://yorkporc.blogspot.com/# for the purposes of SPARQL ASK?
>>
>> I'm hoping I can change the graph in my webid profile to stop using #me as the RDFa-coded graph's localid, and use # instead, so the above would all dereference.
>>
>> Does it?
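Filling the ASK pattern quoted above amounts to substituting the SAN URI, modulus, and exponent into the template. A minimal sketch, using plain string templating (no SPARQL engine); the WebID and key values below are illustrative, not real keys.

```python
# Sketch: substituting the SAN URI, modulus and exponent into the ASK
# query pattern quoted in the thread. The prefix and property names
# follow the cert# vocabulary used there; the key values are made up.

ASK_TEMPLATE = """PREFIX : <http://www.w3.org/ns/auth/cert#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
ASK {{
  <{webid}> :key [
    :modulus "{modulus}"^^xsd:hexBinary;
    :exponent "{exponent}"^^xsd:integer;
  ] .
}}"""

def build_ask(webid, modulus_hex, exponent):
    # Normalize the modulus to lowercase hex, matching the hex shown
    # in the verifier output quoted earlier in the thread.
    return ASK_TEMPLATE.format(
        webid=webid, modulus=modulus_hex.lower(), exponent=exponent)

query = build_ask("http://yorkporc.blogspot.com/#", "B94692148969AEB", 65537)
```

Whether the ASK succeeds then depends entirely on which subject URI the profile graph actually uses: a `/`-terminated, `#`-terminated, or `#me` WebID each produces a different pattern, which is the crux of the question below.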
>>
>> If it doesn't happen by default, is there any statement I could put in my graph at http://yorkporc.blogspot.com/#me today that would induce the validation agent doing the SPARQL ASK (when augmented with an RDFS reasoner, perhaps) to view a SAN URI of http://yorkporc.blogspot.com/ as if it were http://yorkporc.blogspot.com/# (and/or http://yorkporc.blogspot.com/2uri.html#me)?
>>
>> Use http://yorkporc.blogspot.com/#me (which is what has to be in the cert SAN) for SPARQL ASK query patterns; that URI identifies the entity that has a relation with the modulus and exponent parts of the "mirrored claims" held in the IdP-hosted profile graph.
>>
>> BTW - you still have the issue of retrieving the profile graph. This is where the FROM clause comes into play re. some SPARQL engines. For instance, Virtuoso (our engine) will perform an HTTP GET subject to in-built cache invalidation rules. Of course, you can override using pragmas.
>>
>> --
>> Regards,
>>
>> Kingsley Idehen
>> Founder & CEO
>> OpenLink Software
>> Company Web: http://www.openlinksw.com
>> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca handle: @kidehen
>> Google+ Profile: https://plus.google.com/112399767740508618350/about
>> LinkedIn Profile: http://www.linkedin.com/in/kidehen
>>
>> Social Web Architect
>> http://bblfish.net/
>
> Social Web Architect
> http://bblfish.net/
Received on Saturday, 3 December 2011 22:15:02 UTC