- From: Peter Williams <home_pw@msn.com>
- Date: Fri, 24 Jun 2011 13:15:57 -0700
- To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>
- Message-ID: <snt143-w64595AF17CBCE0442DB0D292520@phx.gbl>
I'm in agreement that the spec says neither of the propositions. But then, the spec says little of much value on the topic: it just states how to validate a cert using a profile doc from the web. Conceptually the spec is little or no different from using a directory object from LDAP and looking for the existence of a cert value in a directory attribute. If one presented 3 user certs in the SSL message and asked someone to test whether any one of the n exists in one of 3 directory objects, one would have the same circumstance as WebID, where 1 cert presents 3 SAN URIs. All we have done is change some bit formats around.

Let's say now we are a web garden, where n https listeners are willing to project a common resource. In any given flow of multiple HTTP conversations, the browser may talk to any one of the n listeners, which has no knowledge of the decisions taken by the others (10 ms ago). Since we are in a RESTful world, let's assume there is no cookie and no notion of sticky sessions across nodes in the web garden. Let's say that a node in the web garden, given the spec, picks one SAN URI of the m presented, almost at random. And let's guess that one of the URIs has a domain name that is blocked. Are we saying that on 1/3 of runs the browser will get an access denied, whereas on 2/3 of runs it MAY get access? Are we saying, for those 2/3 of runs, that if the cached copy happens to be unexpired (but the real resource no longer exists, and its domain name is no longer published), access should be granted?

If you read RFC 1422, there is a very specific caching model for inbound certs. Having been validated at time x, LOCAL expiry applies to the copy in the cache. Local rules may decide that the local cache expires AFTER the cert itself expires, and the cert can CONTINUE to be considered (locally) valid, even if presented after the cert has expired.

Date: Fri, 24 Jun 2011 21:00:36 +0100
From: kidehen@openlinksw.com
To: public-xg-webid@w3.org
Subject: Re: [foaf-protocols] WebID test suite

On 6/24/11 7:08 PM, Peter Williams wrote:

> The de facto owl:sameAs part is really interesting (and it's the semweb part of WebID that most interests me, since it's about the potential logic of enforcement....) Are we saying that, should n URIs be present in a cert and one of them validate to the satisfaction of the verifying party, this combination of events is the statement: verifier says owl:sameAs x, where x is each member of the set of SAN URIs in the cert, whether or not all x were verified?

No. When an IdP is presented with a Cert, it is going to have its own heuristic for picking one WebID. Now, when there are several to choose from I would expect that any choice results in a path to a Public Key -> WebID match. Basically, inference such as owl:sameAs would occur within the realm of the IdP that verifies a WebID. Such inference cannot be based on the existence of multiple URIs serving as WebIDs in SAN (or anywhere else).

> That's quite a claim to make. A more restricted claim could be that

Yes, but I don't believe the spec infers that.

> verifier says webid says owl:sameAs x, where x is each member of the set of SAN URIs in the cert, whether or not all x were verified.

No, I don't think that's the implication from the spec or what one would expect to happen.
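To make the web-garden nondeterminism and the key-matching step concrete, here is a minimal sketch of one node picking a single claimed SAN URI and checking only that one. It is purely illustrative: the URIs, the blocked domain, the dummy key material and the fetch_profile_keys() helper are invented, and the "pick one almost at random" policy is the hypothetical behaviour discussed above, not anything the spec mandates.

```python
# Minimal sketch (not from the spec): one web-garden node picks a single
# claimed SAN URI and verifies only that one against the cert's public key.
import random

BLOCKED_DOMAINS = {"blocked.example"}   # e.g. unreachable through local proxy rules

def fetch_profile_keys(webid_uri):
    """Placeholder for dereferencing the WebID profile document; returns the
    set of (modulus, exponent) pairs it associates with webid_uri."""
    host = webid_uri.split("/")[2]
    if host in BLOCKED_DOMAINS:
        raise IOError("cannot dereference %s" % webid_uri)
    return {("00c3...", 65537)}         # dummy key material for the sketch

def verify_one_claimed_webid(san_uris, cert_key):
    """One node's behaviour under the 'pick one, almost at random' reading."""
    claimed = random.choice(san_uris)
    try:
        keys = fetch_profile_keys(claimed)
    except IOError as reason:
        return claimed, "denied: %s" % reason
    if cert_key in keys:
        return claimed, "verified: public key matched"
    return claimed, "denied: key not listed in profile"

san_uris = [
    "https://alice.example/card#me",
    "https://blocked.example/card#me",
    "https://alice.example.org/foaf#me",
]
for run in range(3):                    # three independent listeners/runs
    print(run, *verify_one_claimed_webid(san_uris, ("00c3...", 65537)))
```

Run repeatedly, the same certificate yields a denial on roughly one run in three and a success on the others, which is exactly the inconsistency at issue.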
Kingsley

From: henry.story@bblfish.net
Date: Fri, 24 Jun 2011 19:12:59 +0200
CC: public-xg-webid@w3.org; foaf-protocols@lists.foaf-project.org
To: home_pw@msn.com
Subject: Re: [foaf-protocols] WebID test suite

On 24 Jun 2011, at 18:45, Peter Williams wrote:

> one thing the spec does not state is what the correct behaviour is when a consumer is presented with a cert with multiple SAN URIs.

Well, it does say something, even if perhaps not in the best way. It says in 3.1.4:

"The Verification Agent must attempt to verify the public key information associated with at least one of the claimed WebID URIs. The Verification Agent may attempt to verify more than one claimed WebID URI."

and then in 3.1.7:

"If the public key in the Identification Certificate matches one in the set given by the profile document graph given above then the Verification Agent knows that the Identification Agent is indeed identified by the WebID URI."

I think the language that was going to be used for this was the language of "claimed WebIDs" - the SANs in the certificate, each of which gets verified. The verified WebIDs are the ones the server can use to identify the user. They are de facto owl:sameAs each other.

> If the test suite is run at site A (which cannot connect to a particular part of the internet because of proxy rules), presumably the test suite would provide a different result from another site which can perform the act of de-referencing.

That is ok: the server would state declaratively which WebIDs were claimed and which were verified. It could state why it could not verify one of the WebIDs. Network problems are a fact of life, less likely than strikes in France - though those have not been happening that often recently - or congestion on the road. This is a general issue.

> The degenerate case occurs for 1 SAN URI, obviously, since site A may not be able to connect to its agent. Thus, the issue of one versus multiple URIs is perhaps not the essential requirement at issue. A variation of the topic occurs when a given site (B, say) is using a caching proxy that returns a cached copy of a WebID document (even though that document may have been removed from the web). This is the topic of trusted caches, upon which it seems that WebID depends.

That is what the meta testing agent will be able to tell. He will be able to put up WebID profiles, log in somewhere, then log in a few days later after having removed or changed the profile, and report on how the servers respond.

> We would look silly if the average site grants access to a resource when the identity document has been removed from the web,

It all depends on what the cache control statements on the WebID Profile say. If they state the profile should last a year, then it is partly the fault of the WebID profile publisher. (Could web servers offer buttons to their users to update a cache?) In any case it also depends on how serious the transaction is. In a serious transaction it might be worth doing a quick check right before the transaction, just in case.

> yet caches continue to make consumers believe that the identity is valid. At the same time, given the comments from the US identity conference (that pinging the internet during a de-referencing act is probably unsustainable), caches seem to be required (so consuming sites don't generate observable network activity).

WebID works with caches. I don't think we could think without them. Even X509 works with caches as is, since really an X509 signed cert is just a cache of the one offered by the CA.
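Henry's point about cache-control and re-checking before a serious transaction could look roughly like this on the verifier side. This is only a sketch: the ProfileCache class, its max-age handling and the force_fresh flag are my own illustration, not anything from the spec or any existing library.

```python
# Rough illustration of a verifier-side cache for WebID profile data: entries
# are kept for the max-age the profile's HTTP response declared, and a caller
# can demand a fresh dereference before a "serious" transaction.
import time

class ProfileCache:
    def __init__(self):
        self._entries = {}  # webid -> (keys, fetched_at, max_age_seconds)

    def put(self, webid, keys, max_age_seconds):
        self._entries[webid] = (keys, time.time(), max_age_seconds)

    def get(self, webid, force_fresh=False):
        """Return cached keys, or None if the caller must dereference again."""
        if force_fresh:                       # e.g. a high-value transaction
            self._entries.pop(webid, None)
            return None
        entry = self._entries.get(webid)
        if entry is None:
            return None
        keys, fetched_at, max_age = entry
        if time.time() - fetched_at > max_age:
            del self._entries[webid]          # stale: the profile may be gone by now
            return None
        return keys                           # answered locally, no network ping
```

Whether a stale-but-still-cached entry may ever be honoured locally, as in the RFC 1422 model Peter describes, is precisely the policy question the thread keeps returning to.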
> This all seems to be pointing at the issue that we have a trusted cache problem at the heart of the WebID proposal, and of course we all know that the general web is supposed to be a (semi-trusted at best) cache.

Caches need to be taken into account. But I don't see this as a major problem.

> From: henry.story@bblfish.net
> Date: Fri, 24 Jun 2011 13:37:26 +0200
> CC: foaf-protocols@lists.foaf-project.org
> To: public-xg-webid@w3.org
> Subject: WebID test suite
>
> Hi,
>
> In the spirit of test-driven development, and in order to increase the rate at which we can evolve WebID, we need to develop test suites and reports based on those test suites.
>
> I put up a wiki page describing where we are now and where we want to go:
>
> http://www.w3.org/2005/Incubator/webid/wiki/Test_Suite#
>
> Please don't hesitate to improve it, and place your own library test end points up there - even if they are only human readable.
>
> The next thing is to look at the EARL ontology I wrote and see if your library can also generate a test report that follows the lead of the one I put up on bblfish.net. I expect a lot of detailed criticism, because I did just hack this together. As others implement their test reports, and as bergi builds his meta tests, we will quickly notice our disagreements, and so be able to discuss them and put the results into the spec.
>
> Henry
>
> Social Web Architect
> http://bblfish.net/
>
> _______________________________________________
> foaf-protocols mailing list
> foaf-protocols@lists.foaf-project.org
> http://lists.foaf-project.org/mailman/listinfo/foaf-protocols

Social Web Architect
http://bblfish.net/

--
Regards,

Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen
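For the test reports mentioned above, a machine-readable result might take roughly the following shape. This is only a guess at the general idea, built with the public EARL vocabulary and the rdflib library; it is not Henry's actual ontology or report format, and the subject and test URIs are invented.

```python
# Sketch of a minimal EARL-style assertion: one test outcome for one
# implementation, serialized as Turtle. Requires rdflib.
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EARL = Namespace("http://www.w3.org/ns/earl#")

g = Graph()
g.bind("earl", EARL)

assertion = BNode()
result = BNode()
g.add((assertion, RDF.type, EARL.Assertion))
g.add((assertion, EARL.subject, URIRef("https://example.org/webid-library")))          # implementation under test (hypothetical)
g.add((assertion, EARL.test, URIRef("https://example.org/tests#certificateProvidedSAN")))  # hypothetical test case URI
g.add((assertion, EARL.result, result))
g.add((result, RDF.type, EARL.TestResult))
g.add((result, EARL.outcome, EARL.passed))
g.add((result, EARL.info, Literal("public key in cert matched a key in the profile")))

print(g.serialize(format="turtle"))
```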
Received on Friday, 24 June 2011 20:16:38 UTC