RE: [foaf-protocols] WebID test suite

From: Peter Williams <home_pw@msn.com>
Date: Mon, 27 Jun 2011 18:10:19 -0700
Message-ID: <SNT143-w5438AB4EBBF40B8D07763292560@phx.gbl>
To: <henry.story@bblfish.net>
CC: "public-xg-webid@w3.org" <public-xg-webid@w3.org>

I think you keep ignoring the fact that browsers have, since time immemorial, had LDAP clients built in, using LDAP URLs. The issue is not LDAP; it's the fact that directories - whether as foaf cards, vcards, micro-formats, or any other projection of the directory record - struggle, because the security model was not a good social fit. I'm convinced websso has got to the heart of that fit problem. And thus, as you assert, LDAP becomes an "attribute source", no different to SQL or a foaf card. Now, what is interesting is that we keep expecting foaf cards (which are just serialized directory records, using a non-LDIF format) to find a fit, somehow addressing what failed in the LDAP world. This worries me.

From: henry.story@bblfish.net
Date: Sun, 26 Jun 2011 18:43:24 +0200
CC: demoss.matt@gmail.com; public-xg-webid@w3.org
To: home_pw@msn.com
Subject: Re: [foaf-protocols] WebID test suite

On 26 Jun 2011, at 17:23, Peter Williams wrote: 
The X.509 standard worked worldwide - albeit mostly amongst universities. It was probably bigger than the Shib world is, even today. This seems to have been before Henry's time (he likes to tell the story that ldap/dap was never web scale, not realizing perhaps that the first directories "on the web" were http -> ldap -> dap gateways...).
The point is that the protocol was not made available directly on the web, in such a way that it could interoperate directly as LDAP. For example TCP/IP works at web scale; so does SMTP, which is broken; but LDAP is used a bit like SQL databases, as a back end. There are logical reasons for this in the case of LDAP and of SQL. But I think you keep ignoring them: the URL.
Today, of course, there are a few tens of millions of AD installations, which we can expect to start connecting up quite shortly, now that SAML->AD gateways are going mainstream. What folks refused to do (federate and publish directories), they seem more willing to do when SAML claims project said directories to a limited network of consuming sites.

Perhaps SAML has more of a chance; it uses a few web technologies: XML and namespaces, for one. They even started working on a RESTful variant, I heard. I am not a specialist in it.
X.500 also had both simple and strong authentication, and the usual user, consumer (SP) and IDP model. Both could use signed operations between the "IDP" agent (the master agent for the record, in a multi-mastering world) and the consuming agent - some service, today just like a SAML2 SP server, that wishes to obtain a signed confirmation that the user knows a password, compared remotely by the IDP in return for a signed confirmation response. The user presented the password + digested-password to the consumer (!) seeking access to some port, and duly the port guard would issue a compare operation against the IDP agent. Alternatively, the user presented a signed token to the consumer, which verified it in part by "comparing" the cert against the cert in the master record. Again, the IDP would respond to a compare request with a signed token confirming the result of comparing the values. Today, in Windows, it is trivial to issue a signed SAML "request" to a web service on an https port, that is then compared similarly. The formats have changed - but the model has not.
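To make that compare pattern concrete, here is a minimal sketch with python-ldap (the host, DN and attribute are illustrative only, not from any real deployment):

    import ldap  # pip install python-ldap

    # The consumer never reads the secret; it asks the directory (the
    # "IDP" agent, master for the record) whether the presented value
    # matches, and acts on the yes/no confirmation that comes back.
    conn = ldap.initialize("ldap://directory.example.org")
    conn.simple_bind_s()  # anonymous bind, for the sketch only
    dn = "cn=alice,ou=people,dc=example,dc=org"
    matches = conn.compare_s(dn, "userPassword", b"presented-secret")
    # compare_s returns a truthy/falsy result; real servers often restrict
    # compare on userPassword, so treat this purely as an illustration.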
Yesterday, I had some fun. In a MSFT sample project, one has one's client code create a "self-signed SAML file", supported by a self-signed cert. One posts this to an Azure service, which verifies the signature and returns a MAC-signed JSON blob - which one then posts in the Authorization header to a REST service. The blob carries identity, authn and authz claims. Being done on the OAUTH endpoint, it's a minor variant of the process to induce the service to redirect to a website, seeking user confirmation etc. (in the usual OAUTH backwards-flow SSO flow). There, one can do WebID validation as a condition of releasing the authz confirmation.
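A hedged sketch of that exchange (Python with requests; the endpoint URLs are placeholders for the MSFT sample, which I am paraphrasing from memory, and the grant type is the standard OAuth SAML2 bearer extension):

    import requests

    ACS_URL = "https://example.accesscontrol.windows.net/v2/OAuth2-13"
    SERVICE_URL = "https://example.net/rest/resource"

    def exchange_and_call(signed_saml_assertion):
        # 1. Swap the self-signed SAML assertion for a MAC-signed token.
        r = requests.post(ACS_URL, data={
            "grant_type": "urn:ietf:params:oauth:grant-type:saml2-bearer",
            "assertion": signed_saml_assertion,
            "scope": SERVICE_URL,
        })
        r.raise_for_status()
        token = r.json()["access_token"]  # the MAC-signed JSON blob
        # 2. Present it to the REST service in the Authorization header.
        return requests.get(SERVICE_URL,
                            headers={"Authorization": "Bearer " + token})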
If we could get less abstract, less researchy, and less webby - and just fit in with the rest of the web - we'd have a lot more adoption.
Well there are all these other communities to join where people are happy to do that. Nobody is saying we can't be interoperable, btw; I don't know why anyone would think so. But the interesting thing about WebID - as the name hints in a not too shy manner - is the Webbiness. Now that does not stop you from storing your data in an SQL database, an LDAP directory, or a NoSQL datastore. We are not concerned about those here. We abstract them so as to be compatible with anything going on behind.
> Date: Fri, 24 Jun 2011 16:45:46 -0400
> From: demoss.matt@gmail.com
> To: henry.story@bblfish.net
> CC: kidehen@openlinksw.com; public-xg-webid@w3.org
> Subject: Re: [foaf-protocols] WebID test suite
> > It's conceptually little or no different to using a directory object from
> > LDAP, looking for the existence of a cert value in the directory attribute.
> > That is why I distinguish - and we should distinguish more clearly in the
> > spec - between a claimed WebID and a verified one. A WebID presented in the
> > SAN fields of an X509 certificate is a claimed WebID.
> > The Relying Party/IDP then fetches the canonical document for each WebID.
> I find the contrast with a directory object to be particularly
> interesting. It's usually the case that the CA is trusted to sign a DN
> that corresponds to a directory object in a directory we trust to have
> the correct attributes, but I would be interested to know more about
> other systems where (as with WebID) the trust relationship is a bit
> different. Do any of the SAML profiles do something you would consider
> comparable?
> On Fri, Jun 24, 2011 at 4:31 PM, Henry Story <henry.story@bblfish.net> wrote:
> >
> > On 24 Jun 2011, at 22:00, Kingsley Idehen wrote:
> >
> > On 6/24/11 7:08 PM, Peter Williams wrote:
> >
> > The de facto owl:sameAs part is really interesting (and it's the semweb part
> > of webid that most interests me, since it's about the potential logic of
> > enforcement....)
> >
> > Are we saying that, should n URIs be present in a cert and one of them
> > validate to the satisfaction of the verifying party, this combination
> > of events is the statement: verifier says owl:sameAs x, where x is each
> > member of the set of SAN URIs in the cert, whether or not all x were
> > verified?
> >
> > No.
> >
> > When an IdP is presented with a Cert, it is going to have its own heuristic
> > for picking one WebID. Now, when there are several to choose from I would
> > expect that any choice results in a path to a Public Key -> WebID match.
> > Basically, inference such as owl:sameAs would occur within the realm of the
> > IdP that verifies a WebID. Such inference cannot be based on the existence
> > of multiple URIs serving as WebIDs in the SAN (or anywhere else).
> >
> > Yes, that is why I distinguish - and we should distinguish more clearly in
> > the spec - between a claimed WebID and a verified one. A WebID presented in
> > the SAN fields of an X509 certificate is a claimed WebID.
> > The Relying Party/IDP then fetches the canonical document for each WebID.
> > These documents define the meaning of the WebID, of that URI, via a
> > definitive description tying the URI to knowledge of the private key of the
> > public key published in the certificate.
> > If the meaning of two or more URIs is tied to knowledge of the same public
> > key, then the relying agent has proven of each of these URIs that its
> > referent is the agent at the end of the https connection. Since that is one
> > agent, the two URIs refer to the same thing.
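> >
> > A minimal sketch of that check (Python with rdflib; the cert ontology
> > terms and the simplified key handling are assumptions on my part, not
> > spec text):
> >
> >     from rdflib import Graph, Namespace, URIRef
> >
> >     CERT = Namespace("http://www.w3.org/ns/auth/cert#")
> >
> >     def verified_webids(claimed, modulus, exponent):
> >         # claimed: the SAN URIs from the certificate (claimed WebIDs);
> >         # modulus/exponent: the RSA key presented over TLS, as ints.
> >         verified = []
> >         for uri in claimed:
> >             g = Graph()
> >             try:
> >                 g.parse(uri)    # dereference the canonical document
> >             except Exception:
> >                 continue        # unreachable: the claim stays unverified
> >             for key in g.objects(URIRef(uri), CERT.key):
> >                 m = g.value(key, CERT.modulus)   # assumed hex literal
> >                 e = g.value(key, CERT.exponent)
> >                 if m and e and int(m, 16) == modulus and int(e) == exponent:
> >                     verified.append(uri)  # claimed WebID is now verified
> >                     break
> >         return verified
> >
> > Every URI the function returns has been proven to refer to the agent at
> > the end of the https connection.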
> >
> >
> >
> >
> > That's quite a claim to make. A more restricted claim could be that
> >
> > Yes, but I don't believe the spec implies that.
> >
> >
> > verifier says: webid says owl:sameAs x, where x is each member of the set of
> > SAN URIs in the cert, whether or not all x were verified.
> >
> > No, I don't think that's the implication from the spec, or what one would
> > expect to happen.
> >
> > Kingsley
> >
> >
> > ________________________________
> > From: henry.story@bblfish.net
> > Date: Fri, 24 Jun 2011 19:12:59 +0200
> > CC: public-xg-webid@w3.org; foaf-protocols@lists.foaf-project.org
> > To: home_pw@msn.com
> > Subject: Re: [foaf-protocols] WebID test suite
> >
> >
> > On 24 Jun 2011, at 18:45, Peter Williams wrote:
> >
> > One thing the spec does not state is what the correct behaviour is when a
> > consumer is presented with a cert with multiple SAN URIs.
> >
> > Well it does say something, even if perhaps not in the best way. It says,
> > in 3.1.4:
> > "The Verification Agent must attempt to verify the public key information
> > associated with at least one of the claimed WebID URIs. The Verification
> > Agent may attempt to verify more than one claimed WebID URI."
> > then in 3.1.7:
> > "If the public key in the Identification Certificate matches one in the set
> > given by the profile document graph given above then the Verification
> > Agent knows that the Identification Agent is indeed identified by the WebID
> > URI."
> > I think the language that was going to be used for this was the language of
> > "Claimed WebIDs" - the SANs in the certificate, which each get verified. The
> > verified WebIDs are the ones the server can use to identify the user. They
> > are de-facto owl:sameAs each other.
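> >
> > As a tiny illustration (Python with rdflib; the two URIs are made up),
> > the server could record that fact as:
> >
> >     from itertools import combinations
> >     from rdflib import Graph, URIRef
> >     from rdflib.namespace import OWL
> >
> >     verified = ["https://alice.example/#me",
> >                 "https://example.org/people/alice#id"]
> >     g = Graph()
> >     for a, b in combinations(verified, 2):
> >         # each verified pair names the same agent behind the connection
> >         g.add((URIRef(a), OWL.sameAs, URIRef(b)))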
> >
> > If the test suite is run at site A (which cannot connect to a particular part
> > of the internet, because of proxy rules), presumably the test suite would
> > produce a different result from another site which can perform the act of
> > de-referencing.
> >
> > That is ok, the server would state declaratively which WebIDs were claimed
> > and which were verified. It could state why it could not verify one of the
> > WebIDs. Network problems are a fact of life - less likely than strikes in
> > France (though those have not been happening that often recently) or
> > congestion on the roads.
> >
> >
> > This is a general issue. The degenerate case occurs for 1 SAN URI, obviously
> > - since site A may not be able to connect to its agent. Thus, the issue of
> > one or multiple URIs is perhaps not the essential requirement at issue.
> >
> > A variation of the topic occurs when a given site (B, say) is using a caching
> > proxy that returns a cached copy of a webid document (even though that
> > document may have been removed from the web). This is the topic of trusted
> > caches, upon which it seems that webid depends.
> >
> > That is what the meta testing agent will be able to tell. He will be able to
> > put up WebID profiles, log in somewhere, then log in a few days later after
> > having removed or changed the profile, and report on how the servers
> > respond.
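> >
> > A hedged sketch of one such meta test (Python with requests; every URL
> > and file name here is a placeholder, since bergi's agent does not exist
> > yet):
> >
> >     import requests
> >
> >     PROFILE = "https://testagent.example/profiles/tmp123"
> >     PROTECTED = "https://server-under-test.example/protected"
> >     CLIENT_CERT = ("client.crt", "client.key")  # cert whose SAN is the profile URI
> >
> >     # Publish a profile, authenticate, remove the profile, try again.
> >     requests.put(PROFILE, data=open("profile.ttl", "rb"),
> >                  headers={"Content-Type": "text/turtle"})
> >     first = requests.get(PROTECTED, cert=CLIENT_CERT)
> >     requests.delete(PROFILE)   # the WebID document is now gone
> >     later = requests.get(PROTECTED, cert=CLIENT_CERT)
> >     # Report: does the server still grant access from a stale cache?
> >     print(first.status_code, later.status_code)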
> >
> >  We would look silly if the average site grants access to a resource when
> > the identity document has been removed from the web,
> >
> > It all depends on what the cache control statements on the WebID Profile
> > say. If they state the profile should last a year, then it is partly the
> > fault of the WebID profile publisher. (Could web servers offer buttons to
> > their users to update a cache?)
> > In any case it also depends on how serious the transaction is. In a serious
> > transaction it might be worth doing a quick check right before the
> > transaction, just in case.
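> >
> > For instance, a verifier might honour the publisher's cache directives
> > like this (a sketch in Python with requests; the five-minute default and
> > the force_fresh escape hatch for serious transactions are my choices, not
> > spec text):
> >
> >     import re, time, requests
> >
> >     _cache = {}   # uri -> (profile document text, expiry timestamp)
> >
> >     def fetch_profile(uri, force_fresh=False):
> >         now = time.time()
> >         if not force_fresh and uri in _cache and _cache[uri][1] > now:
> >             return _cache[uri][0]          # still fresh per the publisher
> >         r = requests.get(uri, headers={"Accept": "text/turtle"})
> >         r.raise_for_status()
> >         m = re.search(r"max-age=(\d+)", r.headers.get("Cache-Control", ""))
> >         ttl = int(m.group(1)) if m else 300   # assumed 5-minute default
> >         _cache[uri] = (r.text, now + ttl)
> >         return r.text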
> >
> > yet caches continue to make consumers believe that the identity is valid. At
> > the same time, given the comments from the US identity conference (that
> > pinging the internet during a de-referencing act is probably
> > unsustainable), caches seem to be required (so consuming sites don't
> > generate observable network activity).
> >
> > WebID works with caches. I don't think we could do without them. Even X509
> > works with caches as is, since really a signed X509 cert is just a cache of
> > the one offered by the CA.
> >
> > This all seems to point at a trusted-cache issue at the heart of the webid
> > proposal, and of course we all know that the general web is supposed to be a
> > (semi-trusted at best) cache.
> >
> > Caches need to be taken into account. But I don't see this as a major
> > problem.
> >
> >
> >
> >
> >
> >> From: henry.story@bblfish.net
> >> Date: Fri, 24 Jun 2011 13:37:26 +0200
> >> CC: foaf-protocols@lists.foaf-project.org
> >> To: public-xg-webid@w3.org
> >> Subject: WebID test suite
> >>
> >> Hi,
> >>
> >> In the spirit of test driven development, and in order to increase the
> >> rate at which we can evolve WebID, we need to develop test suites and
> >> reports based on those test suites.
> >>
> >> I put up a wiki page describing where we are now and where we want to go.
> >>
> >> http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite#
> >>
> >> Please don't hesitate to improve it, and place your own library test end
> >> points up there - even if they
> >> are only human readable.
> >>
> >> The next thing is to look at the EARL ontology I wrote and see if your
> >> library can also generate a test report that follows the lead of the one I
> >> put up on bblfish.net. I expect a lot of detailed criticism, because I did
> >> just hack this together. As others implement their test reports, and as
> >> bergi builds his meta tests we will quickly notice our disagreements, and so
> >> be able to discuss them, and put the results into the spec.
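> >>
> >> A report generator might look something like this (Python with rdflib;
> >> only the EARL namespace is real, the agent and test URIs are
> >> placeholders):
> >>
> >>     from rdflib import Graph, Namespace, BNode, URIRef
> >>     from rdflib.namespace import RDF
> >>
> >>     EARL = Namespace("http://www.w3.org/ns/earl#")
> >>
> >>     def assert_result(g, asserter, subject, test, passed):
> >>         # one earl:Assertion per test run, with a nested earl:TestResult
> >>         a, r = BNode(), BNode()
> >>         g.add((a, RDF.type, EARL.Assertion))
> >>         g.add((a, EARL.assertedBy, URIRef(asserter)))
> >>         g.add((a, EARL.subject, URIRef(subject)))
> >>         g.add((a, EARL.test, URIRef(test)))
> >>         g.add((a, EARL.result, r))
> >>         g.add((r, RDF.type, EARL.TestResult))
> >>         g.add((r, EARL.outcome, EARL.passed if passed else EARL.failed))
> >>
> >>     g = Graph()
> >>     assert_result(g, "https://example.org/checker#agent",
> >>                   "https://example.org/webid-endpoint",
> >>                   "https://example.org/tests#certificateProvidedSAN",
> >>                   True)
> >>     print(g.serialize(format="turtle"))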
> >>
> >> Henry
> >>
> >> Social Web Architect
> >> http://bblfish.net/
> >>
> >>
> > _______________________________________________
> > foaf-protocols mailing list
> > foaf-protocols@lists.foaf-project.org
> > http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
> >
> > Social Web Architect
> > http://bblfish.net/
> >
> >
> > --
> >
> > Regards,
> >
> > Kingsley Idehen	
> > President & CEO
> > OpenLink Software
> > Web: http://www.openlinksw.com
> > Weblog: http://www.openlinksw.com/blog/~kidehen
> > Twitter/Identi.ca: kidehen
> >
> >
> >
> >
> >
> > Social Web Architect
> > http://bblfish.net/
> >

Social Web Architect
http://bblfish.net/
