Re: [foaf-protocols] WebID test suite

On 24 Jun 2011, at 22:45, Matt DeMoss wrote:

>> Conceptually, the spec is little or no different from using a directory object from LDAP, looking for the existence of a cert value in a directory attribute.
> 
>> that is why I distinguish - and we should distinguish more clearly in the spec - between a claimed WebID and a verified one. A WebID presented in the SAN fields of an X509 certificate is a claimed WebID.
> The Relying Party/IDP then fetches the canonical document for each WebID
> 
> I find the contrast with a directory object to be particularly
> interesting. It's usually the case that the CA is trusted to sign a DN
> that corresponds to a directory object in a directory we trust to have
> the correct attributes, but I would be interested to know more about
> other systems where (as with WebID) the trust relationship is a bit
> different.

I think Peter Williams' earlier claim about LDAP, and experience from X.500, suggest that the way we
are using URIs in X.509 certificates is indeed a way they were intended to be used with
X.500 directories. You were meant to dereference the ldap URI, and it could also have
told you what the public key of the user was. That did not work because ldap URIs were
not open and did not have global semantics, so one could not do cross-domain
lookups.

Hence we ended up with a small, centralised set of CAs we are told to trust.

But if you think about what is happening there, you will see it's not that far off.
Imagine you get a CA-signed certificate. Before verifying the CA's signature you should
consider all claims made in the certificate as just that: claims.

CA claims { some set of claims }

Because the certificate is signed by a CA you trust, and for that CA you have the rule

 { CA claims { ?a ?r ?b } } => { ?a ?r ?b }

you end up believing the content of the cert.
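
To make that concrete, here is a rough sketch in Python of what acting on that rule amounts to (assuming the third-party "cryptography" package and an RSA-signed certificate; the function name is mine, none of this is spec text):

   # Sketch: believe the claims in a certificate only once the signature
   # of a CA we trust has been verified.
   from cryptography import x509
   from cryptography.hazmat.primitives.asymmetric import padding

   def believe(cert_pem: bytes, trusted_ca_pem: bytes):
       cert = x509.load_pem_x509_certificate(cert_pem)
       ca = x509.load_pem_x509_certificate(trusted_ca_pem)
       # { CA claims { ?a ?r ?b } } => { ?a ?r ?b } only holds once this
       # check succeeds; it raises InvalidSignature otherwise.
       ca.public_key().verify(
           cert.signature,
           cert.tbs_certificate_bytes,
           padding.PKCS1v15(),
           cert.signature_hash_algorithm,
       )
       return cert  # its contents may now be treated as CA-asserted claims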

Now if you trust the LDAP directory, then you could have the rule

 { LDAPx claims { ?a ?r ?b } } => { ?a ?r ?b }

So asking the LDAP directory would amount to the same thing as asking the CA. In one case you get a cached copy of their statement together with the proof of private-key ownership; in the other you fetch it off the internet.

What is new here is that you don't know the web server, and indeed you may not trust it; but what you can trust the web server to do is define its own terms. Every web server is master of the part of the URL namespace in which it has been given the right to define meanings. So when you ask for my WebID <http://bblfish.net/people/henry/card#me> you

  1. ask bblfish.net for /people/henry/card
  2. that document defines the meaning of <http://bblfish.net/people/henry/card#me>
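
In code those two steps are just an HTTP GET plus a parse. A minimal sketch using rdflib (the cert/rsa vocabulary matches the description further down; the exact terms are whatever the spec settles on):

   # Sketch: dereference a claimed WebID and ask the defining document
   # which public key it ties the URI to.
   from rdflib import Graph

   webid = "http://bblfish.net/people/henry/card#me"
   g = Graph()
   g.parse(webid)  # step 1: GET /people/henry/card from bblfish.net

   # step 2: the returned graph defines the meaning of the #me URI
   query = """
   PREFIX cert: <http://www.w3.org/ns/auth/cert#>
   PREFIX rsa:  <http://www.w3.org/ns/auth/rsa#>
   SELECT ?mod ?exp WHERE {
       ?key cert:identity <http://bblfish.net/people/henry/card#me> ;
            rsa:modulus ?mod ;
            rsa:public_exponent ?exp .
   }"""
   for mod, exp in g.query(query):
       print("published key:", mod, exp)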

So what you have as a rule is

  if a host creates a term in its namespace, then I believe that the meaning of the term is what the host says it is.

  So if the host defines the referent of <http://bblfish.net/people/henry/card#me> to be the thing that is uniquely identified by the description 

 <http://bblfish.net/people/henry/card#me> is cert:identity of [
        a rsa:RSAPublicKey ;
        rsa:public_exponent "65537"^^cert:decimal ;
        rsa:modulus """E7 E9 2E B9 E0 86 92 CB 8E B9 07 22 22 B7 FB 86 34 91 89 A8 41 F1
                       CD E1 77 C8 4F .... """^^cert:hex ;
    ] .


Then that's something it has the right to do, and what it says we must take at face value. So if someone comes in and now proves that he fits that description, whilst claiming to be the referent of that URI, then we know he has submitted to the definition placed there. If he passes, then he is indeed the referent of that URI.
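
To make "fits that description" concrete, a last sketch: compare the key the client proved possession of over TLS with the one published at the WebID (helper names are mine; the mod and exp values would come from a query like the one above):

   # Sketch: a claimed WebID becomes a verified one exactly when the key
   # in the client certificate matches the published description.
   def hex_to_int(literal: str) -> int:
       # cert:hex literals may contain spaces and newlines, as above
       return int("".join(literal.split()), 16)

   def verified(cert_mod: int, cert_exp: int,
                profile_mod_hex: str, profile_exp_dec: str) -> bool:
       return (cert_mod == hex_to_int(profile_mod_hex)
               and cert_exp == int(profile_exp_dec))

If that comparison succeeds the claimed WebID is promoted to a verified one; if not, it stays a mere claim.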

So it is very lightweight, but that is the beauty of it. :-)


>  Do any of the SAML profiles do something you would consider
> comparable?


Not sure, I am not a SAML expert.

> 
> On Fri, Jun 24, 2011 at 4:31 PM, Henry Story <henry.story@bblfish.net> wrote:
>> 
>> On 24 Jun 2011, at 22:00, Kingsley Idehen wrote:
>> 
>> On 6/24/11 7:08 PM, Peter Williams wrote:
>> 
>> The de facto owl:sameAs part is really interesting (and it's the semweb part
>> of webid that most interests me, since it's about the potential logic of
>> enforcement....)
>> 
>> are we saying that, should n URIs be present in a cert and one of them
>> validate to the satisfaction of the verifying party, then this combination
>> of events is the statement: verifier says owl:sameAs x, where x is each
>> member of the set of SAN URIs in the cert, whether or not all x were
>> verified?
>> 
>> No.
>> 
>> When an IdP is presented with a Cert, it is going to have its own heuristic
>> for picking one WebID. Now, when there are several to choose from I would
>> expect that any choice results in a path to a Public Key -> WebID match.
>> Basically, inference such as owl:sameAs would occur within the realm of the
>> IdP that verifies a WebID. Such inference cannot be based on the existence
>> of multiple URIs serving as WebIDs in SAN (or anywhere else).
>> 
>> Yes, that is why I distinguish - and we should distinguish more clearly in
>> the spec - between a claimed WebID and a verified one. A WebID presented in
>> the SAN fields of an X509 certificate is a claimed WebID.
>> The Relying Party/IDP then fetches the canonical document for each WebID.
>> These documents define the meaning of the WebID, of that URI, via a
>> definitive description tying the URI to knowledge of the private key
>> corresponding to the public key published in the certificate.
>> If the meaning of two or more URIs is tied to knowledge of the same public
>> key, then the relying agent has proven of each of these URIs that its
>> referent is the agent at the end of the https connection. Since that is one
>> agent, the two URIs refer to the same thing.
>> 
>> 
>> 
>> 
>> That's quite a claim to make. A more restricted claim could be that
>> 
>> Yes, but I don't believe the spec infers that.
>> 
>> 
>> verifier says webid says owl:sameAs x, where x is each member of the set of
>> SAN URIs in the cert, whether or not all x were verified.
>> 
>> No, I don't think that's the implication of the spec, or what one would
>> expect to happen.
>> 
>> Kingsley
>> 
>> 
>> ________________________________
>> From: henry.story@bblfish.net
>> Date: Fri, 24 Jun 2011 19:12:59 +0200
>> CC: public-xg-webid@w3.org; foaf-protocols@lists.foaf-project.org
>> To: home_pw@msn.com
>> Subject: Re: [foaf-protocols] WebID test suite
>> 
>> 
>> On 24 Jun 2011, at 18:45, Peter Williams wrote:
>> 
>> one thing the spec does not state is what the correct behaviour is when a
>> consumer is presented with a cert with multiple SAN URIs.
>> 
>> Well it does say something, even if perhaps not in the best way. It says:
>> in 3.1.4
>> "The Verification Agent must attempt to verify the public key information
>> associated with at least one of the claimed WebID URIs. The Verification
>> Agent may attempt to verify more than one claimed WebID URI."
>> then in 3.1.7:
>> "If the public key in the Identification Certificate matches one in the set
>> given by the profile document graph given above then the Verification
>> Agent knows that the Identification Agent is indeed identified by the WebID
>> URI."
>> I think the language that was going to be used for this was the language of
>> "Claimed WebIDs" - the SANs in the certificate, which each get verified. The
>> verified WebIDs are the ones the server can use to identify the user. They
>> are de-facto owl:sameAs each other.
>> 
>> If the test suite is run at site A (which cannot connect to a particular part
>> of the internet, because of proxy rules), presumably the test suite would
>> give a different result from another site which can perform the act of
>> de-referencing.
>> 
>> That is ok: the server would state declaratively which WebIDs were claimed
>> and which were verified. It could state why it could not verify one of the
>> WebIDs. Network problems are a fact of life, less likely than strikes in
>> France - though those have not been happening that often recently - or
>> congestion on the road.
>> 
>> 
>> This is a general issue. The degenerate case occurs for 1 SAN URI, obviously
>> - since site A may not be able to connect to its agent. Thus, the issue of
>> one or of multiple URIs is perhaps not the essential requirement at issue.
>> 
>> A variation of the topic occurs when a given site (B say) is using a caching
>> proxy, that returns a cached copy of a webid document (even though that
>> document may have been removed from the web). This is the topic of trusted
>> caches, upon which it seems that webid depends.
>> 
>> That is what the meta testing agent will be able to tell. He will be able to
>> put up WebID profiles, log in somewhere, then log in a few days later after
>> having removed or changed the profile, and report on how the servers
>> respond.
>> 
>>  We would look silly if the average site grants access to a resource when
>> the identity document has been removed from the web,
>> 
>> It all depends on what the cache-control statements on the WebID Profile
>> say. If they state it should last a year, then it is partly the fault of
>> the WebID profile publisher. (Could web servers offer buttons to their users
>> to update a cache?)
>> In any case it also depends on how serious the transaction is. In a serious
>> transaction it might be worth doing a quick check right before the
>> transaction, just in case.
>> 
>> yet caches continue to make consumers believe that the identity is valid. At
>> the same time, given the comments from the US identity conference (that
>> pinging the internet during a de-referencing act is probably
>> unsustainable), caches seem to be required (so consuming sites don't
>> generate observable network activity).
>> 
>> WebID works with caches; I don't think we could do without them. Even X509
>> works with caches as is, since a signed X509 cert is really just a cached
>> copy of the statement offered by the CA.
>> 
>> This all seems to point at a trusted-cache issue at the heart of the
>> webid proposal, and of course we all know that the general web is supposed
>> to be a (semi-trusted at best) cache.
>> 
>> Caches need to be taken into account. But I don't see this as a major
>> problem.
>> 
>> 
>> 
>> 
>> 
>>> From: henry.story@bblfish.net
>>> Date: Fri, 24 Jun 2011 13:37:26 +0200
>>> CC: foaf-protocols@lists.foaf-project.org
>>> To: public-xg-webid@w3.org
>>> Subject: WebID test suite
>>> 
>>> Hi,
>>> 
>>> In the spirit of test-driven development, and in order to increase the
>>> rate at which we can evolve WebID, we need to develop test suites and
>>> reports based on those test suites.
>>> 
>>> I put up a wiki page describing where we are now and where we want to go.
>>> 
>>>  http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite#
>>> 
>>> Please don't hesitate to improve it, and place your own library test
>>> endpoints up there - even if they are only human-readable.
>>> 
>>> The next thing is to look at the EARL ontology I wrote and see if your
>>> library can also generate a test report that follows the lead of the one I
>>> put up on bblfish.net. I expect a lot of detailed criticism, because I did
>>> just hack this together. As others implement their test reports, and as
>>> bergi builds his meta tests, we will quickly notice our disagreements, and
>>> so be able to discuss them and put the results into the spec.
>>> 
>>> Henry
>>> 
>>> Social Web Architect
>>>  http://bblfish.net/
>>> 
>>> 
>> _______________________________________________
>> foaf-protocols mailing list
>> foaf-protocols@lists.foaf-project.org
>> http://lists.foaf-project.org/mailman/listinfo/foaf-protocols
>> 
>> Social Web Architect
>> http://bblfish.net/
>> 
>> 
>> --
>> 
>> Regards,
>> 
>> Kingsley Idehen	
>> President & CEO
>> OpenLink Software
>> Web: http://www.openlinksw.com
>> Weblog: http://www.openlinksw.com/blog/~kidehen
>> Twitter/Identi.ca: kidehen
>> 
>> 
>> 
>> 
>> 
>> Social Web Architect
>> http://bblfish.net/
>> 

Social Web Architect
http://bblfish.net/
