
Re: [foaf-protocols] WebID test suite

From: Matt DeMoss <demoss.matt@gmail.com>
Date: Tue, 28 Jun 2011 08:34:02 -0400
Message-ID: <BANLkTimqQouXj7j=EfmQorR5++_3H3krnQ@mail.gmail.com>
To: Henry Story <henry.story@bblfish.net>
Cc: Kingsley Idehen <kidehen@openlinksw.com>, public-xg-webid@w3.org

There is such a thing as a DSML Gateway:

https://www.opends.org/wiki/page/DefinitionDSMLGateway

Does it make sense to consider a similar gateway for these purposes?
There are enterprises with quite a lot of person data stored in
directories.
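
To make the idea concrete, here is roughly what I have in mind: a thin
gateway that reads a person entry out of the directory and republishes it as
a FOAF document with a stable URI. (A rough sketch only, using the Python
ldap3 and rdflib libraries; the hostname, base DN, profile URI pattern and
attribute mapping are all made up for illustration.)

    import ldap3
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    def person_as_foaf(uid):
        # Look up the person in the corporate directory (illustrative host and base DN).
        server = ldap3.Server("ldap.example.com")
        conn = ldap3.Connection(server, auto_bind=True)
        conn.search("dc=example,dc=com", "(uid=%s)" % uid,
                    attributes=["cn", "mail"])
        entry = conn.entries[0]

        # Republish the record as FOAF, so it gets a web-wide, linkable URI.
        me = URIRef("https://people.example.com/%s#me" % uid)
        g = Graph()
        g.add((me, RDF.type, FOAF.Person))
        g.add((me, FOAF.name, Literal(str(entry.cn))))
        g.add((me, FOAF.mbox, URIRef("mailto:" + str(entry.mail))))
        return g.serialize(format="turtle")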

On Tue, Jun 28, 2011 at 6:28 AM, Henry Story <henry.story@bblfish.net> wrote:
>
> On 28 Jun 2011, at 11:18, Kingsley Idehen wrote:
>
> On 6/28/11 6:53 AM, Henry Story wrote:
>
> On 28 Jun 2011, at 03:10, Peter Williams wrote:
>
> I think you keep ignoring the fact that browsers have, from time immemorial,
> had ldap clients built in, using LDAP URLs.
>
> I don't ignore it. I even mentioned the ldap URL as being a possibility for
> a WebID.
>
> Not just a possibility, unless you are truly ignoring the fact that we
> already support ldap: scheme URIs (as SAN-placed WebIDs) in our
> implementation of WebID.
>
> An actuality is a possibility. But one implementation does not a web make,
> and I doubt that ldap is the easiest or most direct way to create a web of
> data: you'd have to re-invent the semantic web to get that going. So that is
> the point here.
>
> As I keep on saying: URIs are sacrosanct. An IdP is the one to decide which
> schemes it can handle as part of its implementation of the WebID protocol.
>
> Yes, we don't restrict URIs in the spec.
>
>
> The issue is not ldap. It's the fact that directories, whether as foaf
> cards, vcards, micro-formats, or any other projection of the directory
> record, struggle, because the security model was not a good social fit. I'm
> convinced WebSSO has got to the heart of that fit problem. And, thus, as
> you assert, ldap becomes an "attribute source", no different from sql or a
> foaf card.
>
> Yes, people don't want to open their ldap directories to anyone without
> protection. But they can only open them globally if they have something like
> WebID, and if they have a data format that allows for global linkability.
>
> Yes, and that's achievable and implemented by us already.
>
> Kingsley, in order for this to work at a global level, you need to have
> something like a GRDDL for LDAP formats, so that distributed databases can
> communicate information without ambiguity and without knowing each other
> ahead of time. I don't know of that having been specified yet by the W3C or
> anybody else.
>
> Ldap started off in the 1980s, before the web, and was extended without ever
> fixing these problems, which of course are difficult to fix. The Web was
> designed as a hyperdocument platform from the beginning.
>
> Yes, so you can transform data to many representations once it's clear that
> the base schema is really conceptual rather than syntactic. Basically, logic
> delivers the conceptual schema.
>
> Yes, but unless you want to go to each ldap hoster and ask them what the
> fields in their version of ldap mean, you can't really build a linked data
> web on ldap endpoints (i.e. directly accessible ldap endpoints globally
> available).
> Here is an ldap entry taken from Wikipedia:
>
> dn: cn=John Doe,dc=example,dc=com
>  cn: John Doe
>  givenName: John
>  sn: Doe
>  telephoneNumber: +1 888 555 6789
>  telephoneNumber: +1 888 555 1232
>  mail: john@example.com
>  manager: cn=Barbara Doe,dc=example,dc=com
>  objectClass: inetOrgPerson
>  objectClass: organizationalPerson
>  objectClass: person
>  objectClass: top
>
> It is an attribute-value pair system, without namespacing, and so is
> designed for client-server interaction, not for linked data interaction. You
> need to tie those into a global namespace. Not impossible, but you have a
> lot of work on your hands to get all the others to 1. understand why it is
> important (because they are living in closed worlds, and don't see what they
> are missing) and 2. get agreement worldwide on how to do this.
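>
> To make that concrete, here is the kind of per-deployment mapping one would
> need to agree on before entries like the one above could be read as linked
> data. (A rough Python illustration only; the FOAF and vCard property URIs are
> obvious candidates, not a settled standard, and "manager" shows the kind of
> gap that still needs agreement.)
>
>     # Hypothetical attribute-to-property mapping for the entry above.
>     # Every ldap deployment would have to publish and share something like this
>     # before its records could be merged with anyone else's without ambiguity.
>     LDAP_TO_RDF = {
>         "cn":              "http://xmlns.com/foaf/0.1/name",
>         "mail":            "http://xmlns.com/foaf/0.1/mbox",
>         "telephoneNumber": "http://www.w3.org/2006/vcard/ns#tel",
>         "manager":         None,  # no obvious global property; needs agreement
>     }
>
>     def entry_to_triples(subject_uri, entry):
>         """Turn a parsed ldap entry (attribute -> list of values) into triples."""
>         for attr, values in entry.items():
>             prop = LDAP_TO_RDF.get(attr)
>             if prop is None:
>                 continue  # attributes outside the agreed mapping stay in the silo
>             for value in values:
>                 yield (subject_uri, prop, value)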
>
>
>
> Now, what is interesting is that we keep expecting foaf cards (which are just
> serialized directory records, using a non-LDIF format) to find a fit,
> somehow addressing what failed in the ldap world.
>
> Foaf is based on RDF, which is designed for Linked Data (hyperdata)
> scenarios.
>
> Of course Ldap can participate too, but it would need to give a clear
> mapping into the semweb, i.e. to give semantics so that users from one ldap
> system can communicate clearly - and without prior agreement on vocabulary -
> with another ldap system. But as I don't think this has been done yet, I
> think we can skip ldap as a priority for the moment.
>
> The spec just has to be agnostic re. URI schemes. The support of any scheme
> re. WebID is an implementation matter for an IdP that supports the WebID
> protocol. That's really it. URIs are sacrosanct. Inherently agnostic.
>
> If you find something going amiss in the spec, please tell us.
>
>
> If you find some big ldap vendors who really want to join, then the W3C may
> be happy to help them semwebise the ldap system, and perhaps ldap urls will
> combine nicely and often with http and https urls. But my guess is that you
> will end up with huge resistance there in the ldap world: there will just be
> too many new things to explain to people. Unless it is shown to work clearly
> in the most natural platform - the web - they won't take it on.
>
> We'll be taking our implementation to them :-)
>
>
> And after all who cares whether it is ldap or http that is the transport
> protocol? Certainly not the business people who would finance this.
>
> See my earlier comment.
>
>
> Anyway what has this got to do with the WebID Test suite again? Please try
> to keep the posts on topic.
>
> Well you'll see that ldap: based WebIDs work with our implementation :-)
>
> As I said, one implementation is not sufficient. One needs more than one -
> at least three - and they have to be interoperable. It is great that you are
> trying these out ahead of time. But if we can concentrate here on getting
> the test cases for http and https working - and there was strong consensus
> for that - then we can grow to ldap after having solved the widely used
> cases.
> What I would like is to have us focus on the details of the EARL test cases
> now for http and https, to make sure the structure is right. If you see that
> we are inadvertently closing the door to ldap technically, please point that
> out.
> Henry
>
>
> Kingsley
>
> Henry
>
> This worries me.
>
>
> ________________________________
> From: henry.story@bblfish.net
> Date: Sun, 26 Jun 2011 18:43:24 +0200
> CC: demoss.matt@gmail.com, public-xg-webid@w3.org
> To: home_pw@msn.com
> Subject: Re: [foaf-protocols] WebID test suite
>
>
> On 26 Jun 2011, at 17:23, Peter Williams wrote:
>
>
> The X.509 standard worked worldwide - albeit mostly amongst universities. It
> was probably bigger than the Shib world is, even today. This seems to have
> been before Henry's time (he likes to tell the story that ldap/dap was never
> web scale, not realizing perhaps that the first directories "on the web"
> were http -> ldap -> dap gateways...).
>
> The point is that the protocol was not made available directly on the web,
> in such a way that it could be interoperable directly as ldap. For example
> TCP/IP works at web scale, and so does SMTP, which is broken; but ldap is
> used a bit like SQL databases, as a back end. There are logical reasons for
> this in the case of LDAP and of SQL. But I think you keep ignoring them: the URL.
>
> Today, of course, there are a few tens of millions of AD installations,
> which we can expect to start connecting up quite shortly, now that SAML->AD
> gateways are going mainstream. What folks refused to do (federate and
> publish directories), folks seem more willing to do when SAML claims project
> said directories to a limited network of consuming sites.
>
> Perhaps SAML has more of a chance; it uses a few web technologies: XML and
> namespaces, for one. They even started working on a RESTful variant, I heard.
> I am not a specialist in it.
>
>
> X.500 also had both simple and strong authentication, and the usual user,
> consumer (SP) and IDP model. Both could use signed operations between the
> "IDP" agent (the master agent for the record, in a multi-mastering world)
> and the consuming agent - some service, today just like a SAML2 SP server,
> that wishes to obtain a signed confirmation that the user knows a password,
> compared remotely by the IDP in return for a signed confirmation
> response. The user presented the password + digested-password to the
> consumer (!) seeking access to some port, and duly the port guard would
> issue a compare operation against the IDP agent. Alternatively, the user
> presented a signed token to the consumer, which verified it in part by
> "comparing" the cert against the cert in the master record. Again, the IDP
> would respond to a compare request with a signed token confirming the result
> of comparing the values. Today, in Windows it's trivial to issue a signed SAML
> "request" to a web service on an https port, that is then compared
> similarly. Blob formats have changed - but the model has not.
>
>
> Yesterday, I had some fun. In a MSFT sample project, one has one's client
> code create a "self-signed SAML file", supported by a self-signed cert. One
> posts this to an Azure service, which verifies the signature and returns
> a MAC-signed JSON blob - which one then posts in the www-auth header to a
> REST service. The claims within have identity, authn and authz claims. Being
> done on the OAUTH endpoint, it's a minor variant of the process to induce the
> service to redirect to a website, seeking user confirmation etc (in the
> usual OAUTH backwards-flow SSO flow). There, one can do
> webid validation as a condition of releasing the authz confirmation.
>
> If we could get less abstract, less researchy, and less webby - and just fit in
> with the rest of the web - we'd have a lot more adoption.
>
> Well there are all these other communities to join where people are happy to
> do that.
> Nobody is saying we can't be interoperable, btw; I don't know why anyone
> would think so. But the interesting thing about WebID - as the name hints in
> a not too shy manner - is the webbiness. Now that does not stop you from
> storing your data in an SQL database, an LDAP directory, or a NoSQL datastore.
> We are not concerned about those here. We abstract them so as to be
> compatible with anything going on behind.
> Henry
>
>
>
>> Date: Fri, 24 Jun 2011 16:45:46 -0400
>> From: demoss.matt@gmail.com
>> To: henry.story@bblfish.net
>> CC: kidehen@openlinksw.com, public-xg-webid@w3.org
>> Subject: Re: [foaf-protocols] WebID test suite
>>
>> >Its spec is conceptually little or no different from using a directory
>> > object from ldap, looking for the existence of a cert value in the
>> > directory attribute.
>>
>> >that is why I distinguish - and we should distinguish more clearly in the
>> > spec - between a claimed WebID and a verified one. A WebID presented in the
>> > SAN fields of an X509 certificate is a claimed WebID.
>> > The Relying Party/IDP then fetches the canonical document for each WebID.
>>
>> I find the contrast with a directory object to be particularly
>> interesting. It's usually the case that the CA is trusted to sign a DN
>> that corresponds to a directory object in a directory we trust to have
>> the correct attributes, but I would be interested to know more about
>> other systems where (as with WebID) the trust relationship is a bit
>> different. Do any of the SAML profiles do something you would consider
>> comparable?
>>
>> On Fri, Jun 24, 2011 at 4:31 PM, Henry Story <henry.story@bblfish.net>
>> wrote:
>> >
>> > On 24 Jun 2011, at 22:00, Kingsley Idehen wrote:
>> >
>> > On 6/24/11 7:08 PM, Peter Williams wrote:
>> >
>> > The de facto owl:sameAs part is really interesting (and it's the semweb
>> > part of webid that most interests me, since it's about the potential logic
>> > of enforcement....)
>> >
>> > Are we saying that, should n URIs be present in a cert and one of them
>> > validate to the satisfaction of the verifying party, then this combination
>> > of events is the statement: verifier says owl:sameAs x, where x is each
>> > member of the set of SAN URIs in the cert, whether or not all x were
>> > verified?
>> >
>> > No.
>> >
>> > When an IdP is presented with a Cert, it is going to have its own heuristic
>> > for picking one WebID. Now, when there are several to choose from, I would
>> > expect that any choice results in a path to a Public Key -> WebID match.
>> > Basically, inference such as owl:sameAs would occur within the realm of the
>> > IdP that verifies a WebID. Such inference cannot be based on the existence
>> > of multiple URIs serving as WebIDs in SAN (or anywhere else).
>> >
>> > Yes, that is why I distinguish - and we should distinguish more clearly in
>> > the spec - between a claimed WebID and a verified one. A WebID presented in
>> > the SAN fields of an X509 certificate is a claimed WebID.
>> > The Relying Party/IDP then fetches the canonical document for each WebID.
>> > These documents define the meaning of the WebID, of that URI, via a
>> > definitive description tying the URI to knowledge of the private key of the
>> > public key published in the certificate.
>> > If the meaning of two or more URIs is tied to knowledge of the same public
>> > key, then the relying agent has proven of each of these URIs that its
>> > referent is the agent at the end of the https connection. Since that is one
>> > agent, the two URIs refer to the same thing.
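>> >
>> > (As a rough sketch of that check, not the normative algorithm: in Python
>> > with rdflib, treating the cert/rsa ontology property URIs as placeholders,
>> > since the exact vocabulary is still being settled in the spec.)
>> >
>> >     from rdflib import Graph, URIRef
>> >
>> >     # Placeholder property URIs; substitute whatever the spec finally adopts.
>> >     CERT_IDENTITY = URIRef("http://www.w3.org/ns/auth/cert#identity")
>> >     RSA_MODULUS = URIRef("http://www.w3.org/ns/auth/rsa#modulus")
>> >
>> >     def verified_webids(claimed_webids, cert_modulus_hex):
>> >         """Return the claimed WebIDs whose profile ties them to the cert's key."""
>> >         verified = []
>> >         for webid in claimed_webids:
>> >             graph = Graph()
>> >             try:
>> >                 graph.parse(location=webid)  # fetch the canonical document
>> >             except Exception:
>> >                 continue  # unreachable profile: the WebID stays merely claimed
>> >             for key in graph.subjects(CERT_IDENTITY, URIRef(webid)):
>> >                 for modulus in graph.objects(key, RSA_MODULUS):
>> >                     # naive hex comparison; a real check would normalize formats
>> >                     if str(modulus).replace(" ", "").lower() == cert_modulus_hex.lower():
>> >                         verified.append(webid)
>> >         # Every URI in `verified` names the agent at the end of this TLS
>> >         # connection, so they are de facto owl:sameAs each other.
>> >         return verified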
>> >
>> >
>> >
>> >
>> > That's quite a claim to make. A more restricted claim could be that
>> >
>> > Yes, but I don't believe the spec implies that.
>> >
>> >
>> > verifier says webid says owl:sameAs x, where x is each member of the set of
>> > SAN URIs in the cert, whether or not all x were verified.
>> >
>> > No, I don't think that's the implication of the spec or what one would
>> > expect to happen.
>> >
>> > Kingsley
>> >
>> >
>> > ________________________________
>> > From: henry.story@bblfish.net
>> > Date: Fri, 24 Jun 2011 19:12:59 +0200
>> > CC: public-xg-webid@w3.org, foaf-protocols@lists.foaf-project.org
>> > To: home_pw@msn.com
>> > Subject: Re: [foaf-protocols] WebID test suite
>> >
>> >
>> > On 24 Jun 2011, at 18:45, Peter Williams wrote:
>> >
>> > one thing the spec does not state is what the correct behaviour is when a
>> > consumer is presented with a cert with multiple SAN URIs.
>> >
>> > Well it does say something, even if perhaps not in the best way. It says
>> > in 3.1.4:
>> > "The Verification Agent must attempt to verify the public key information
>> > associated with at least one of the claimed WebID URIs. The Verification
>> > Agent may attempt to verify more than one claimed WebID URI."
>> > Then in 3.1.7:
>> > "If the public key in the Identification Certificate matches one in the set
>> > given by the profile document graph given above then the Verification Agent
>> > knows that the Identification Agent is indeed identified by the WebID URI."
>> > I think the language that was going to be used for this was the language of
>> > "Claimed WebIDs" - the SANs in the certificate, which each get verified. The
>> > verified WebIDs are the ones the server can use to identify the user. They
>> > are de facto owl:sameAs each other.
>> >
>> > If the test suite is run at site A (which cannot connect to a particular
>> > part of the internet, because of proxy rules), presumably the test suite
>> > would provide a different result from another site which can perform an act
>> > of de-referencing.
>> >
>> > That is ok, the server would state declaratively which WebIDs were claimed
>> > and which were verified. It could state why it could not verify one of the
>> > WebIDs. Network problems are a fact of life, less likely than strikes in
>> > France - though those have not been happening that often recently - or
>> > congestion on the roads.
>> >
>> >
>> > This is a general issue. The degenerate case occurs for 1 SAN URI, obviously
>> > - since site A may not be able to connect to its agent. Thus, the issue of
>> > one or multiple URIs is perhaps not the essential requirement at issue.
>> >
>> > A variation of the topic occurs when a given site (B, say) is using a
>> > caching proxy that returns a cached copy of a webid document (even though
>> > that document may have been removed from the web). This is the topic of
>> > trusted caches, upon which it seems that webid depends.
>> >
>> > That is what the meta testing agent will be able to tell. He will be able
>> > to put up WebID profiles, log in somewhere, then log in a few days later
>> > after having removed or changed the profile, and report on how the servers
>> > respond.
>> >
>> >  We would look silly if the average site grants access to a resource
>> > when
>> > the identity document has been removed from the web,
>> >
>> > It all depends on what the cache control statements on the WebID Profile
>> > say. If they state it should last a year, then it is partly the fault of
>> > the WebID profile publisher. (Could Web Servers offer buttons to their
>> > users to update a cache?)
>> > In any case it also depends on how serious the transaction is. In a
>> > serious transaction it might be worth doing a quick check right before the
>> > transaction, just in case.
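>> >
>> > (A minimal illustration of that trade-off, assuming nothing WebID-specific,
>> > just ordinary HTTP Cache-Control handling in Python; the 300-second cap for
>> > "serious" transactions is an arbitrary example value.)
>> >
>> >     import time
>> >
>> >     def profile_is_fresh(cached_at, cache_control_header, serious_cap=300):
>> >         """Can a cached WebID profile still be trusted without re-fetching?
>> >
>> >         Honour the publisher's max-age, but for a serious transaction cap
>> >         the age we accept regardless of what the publisher allows.
>> >         """
>> >         allowed = 0
>> >         for directive in (cache_control_header or "").split(","):
>> >             directive = directive.strip()
>> >             if directive.startswith("max-age="):
>> >                 allowed = int(directive.split("=", 1)[1])
>> >         return (time.time() - cached_at) <= min(allowed, serious_cap)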
>> >
>> > yet caches continue to make consumers believe that the identity is valid.
>> > At the same time, given the comments from the US identity conference (that
>> > pinging the internet during a de-referencing act is probably
>> > unsustainable), caches seem to be required (so consuming sites don't
>> > generate observable network activity).
>> >
>> > WebID works with caches. I don't think we could do without them. Even X509
>> > works with caches as is, since really an X509 signed cert is just a cache
>> > of the one offered by the CA.
>> >
>> > This all seems to be pointing at a trusted-cache issue at the heart of the
>> > webid proposal, and of course we all know that the general web is supposed
>> > to be a (semi-trusted at best) cache.
>> >
>> > Caches need to be taken into account. But I don't see this as a major
>> > problem.
>> >
>> >
>> >
>> >
>> >
>> >> From: henry.story@bblfish.net
>> >> Date: Fri, 24 Jun 2011 13:37:26 +0200
>> >> CC: foaf-protocols@lists.foaf-project.org
>> >> To: public-xg-webid@w3.org
>> >> Subject: WebID test suite
>> >>
>> >> Hi,
>> >>
>> >> In the spirit of test-driven development, and in order to increase the
>> >> rate at which we can evolve WebID, we need to develop test suites and
>> >> reports based on those test suites.
>> >>
>> >> I put up a wiki page describing where we are now and where we want to go.
>> >>
>> >> http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite#
>> >>
>> >> Please don't hesitate to improve it, and place your own library test
>> >> endpoints up there - even if they are only human readable.
>> >>
>> >> The next thing is to look at the EARL ontology I wrote and see if your
>> >> library can also generate a test report that follows the lead of the one
>> >> I put up on bblfish.net. I expect a lot of detailed criticism, because I
>> >> did just hack this together. As others implement their test reports, and
>> >> as bergi builds his meta tests, we will quickly notice our disagreements,
>> >> and so be able to discuss them, and put the results into the spec.
>> >>
>> >> Henry
>> >>
>> >> Social Web Architect
>> >> http://bblfish.net/
>> >>
>> >>
>> > _______________________________________________
>> > foaf-protocols mailing list
>> > foaf-protocols@lists.foaf-project.org
>> > http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
>> >
>> > Social Web Architect
>> > http://bblfish.net/
>> >
>> >
>> > --
>> >
>> > Regards,
>> >
>> > Kingsley Idehen
>> > President & CEO
>> > OpenLink Software
>> > Web: http://www.openlinksw.com
>> > Weblog: http://www.openlinksw.com/blog/~kidehen
>> > Twitter/Identi.ca: kidehen
>> >
>> >
>> >
>> >
>> >
>> > Social Web Architect
>> > http://bblfish.net/
>> >
>>
>>
>
> Social Web Architect
> http://bblfish.net/
>
>
> Social Web Architect
> http://bblfish.net/
>
>
> --
>
> Regards,
>
> Kingsley Idehen	
> President & CEO
> OpenLink Software
> Web: http://www.openlinksw.com
> Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca: kidehen
>
>
>
>
>
> Social Web Architect
> http://bblfish.net/
>
Received on Tuesday, 28 June 2011 12:34:33 UTC
