RE: WebID-ISSUE-10 (bblfish): Hash URLs for Agents [ontologies]

I've got two choices:
1. I can put an ldaps URI in a self-signed cert used for SSL client auth, and have the WebID protocol at the resource server ping back against the denoted Active Directory entry. (This is particularly easy these days, since clouds like Azure support stretched VLANs, using VPNs, linking a tenant's pool of servers and the enterprise LAN hosting the LDAP server pool at the subnet level.)
2. I can adopt the semantic web: https URIs, hash URLs for Agents, and foaf cards based initially on marking up good ol' home pages.
In the case of ldaps, it's really easy to engineer a solution, this month. All my users (realtors) already have a directory entry. Storing an ldaps URI in the SAN URI field of their already-existing cert is easy. A resource server, on being presented with the ldaps claim from a subject after SSL client auth, can easily use the explicit query in the claim to locate the stored cert in the directory entry, and then compare the public key from the SSL client cert with the one from the directory entry. The user's access control rules drive resource access decisions. This is a variation of what Lotus Notes did, in 1977. It just scales bigger. Patent risk is small; it's that old.
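The ldaps flow above can be sketched in a few lines. This is illustrative only: the actual TLS handshake and directory connection are elided, the URI layout follows RFC 4516 (ldaps://host:port/dn?attributes?scope?filter), and the host and DN in the usage note are made up.

```python
import hmac
from urllib.parse import unquote

def parse_ldaps_uri(uri: str):
    """Split an RFC 4516-style LDAP URL, as carried in the cert's SAN URI
    field, into (host:port, dn, attributes, scope, filter)."""
    scheme = "ldaps://"
    if not uri.startswith(scheme):
        raise ValueError("not an ldaps URI")
    hostport, _, tail = uri[len(scheme):].partition("/")
    fields = tail.split("?")
    dn = unquote(fields[0])
    attrs = fields[1].split(",") if len(fields) > 1 and fields[1] else []
    scope = fields[2] if len(fields) > 2 and fields[2] else "base"
    filt = unquote(fields[3]) if len(fields) > 3 and fields[3] else "(objectClass=*)"
    return hostport, dn, attrs, scope, filt

def keys_match(presented_spki: bytes, directory_spki: bytes) -> bool:
    """Compare the DER-encoded public key from the SSL client cert with the
    one fetched from the directory entry (constant-time comparison)."""
    return hmac.compare_digest(presented_spki, directory_spki)
```

For example, `parse_ldaps_uri("ldaps://ldap.example.com:636/cn=Jane%20Doe,dc=example?userCertificate?base")` yields the host, the decoded DN, the `userCertificate` attribute to fetch, and `base` scope; the resource server runs that query, extracts the stored cert's public key, and calls `keys_match` against the key presented in the SSL handshake.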
Or, I can try to do the same with the semantic web. There has to be a compelling, big-picture rationale to justify its being more than 10% more complicated than the competition summarized above.
Now, I'm taking it on faith, for now, that the semantic web offers something more than the ldaps example. When I cut to the chase, folks are alluding to three things, as reflected in the WebID protocol design elements:
1. it is necessary that there exist PPDs addressed by the http URI;
2. it is necessary that there be a relationship between the PPD and a graph within it, addressed by the http URI plus its anchor tag;
3. it is necessary, in the design of the cert ontology (which co-designs the sparql query to be run by the resource server), that "logical subtleties" in PPDs, inner graphs, URI stems and URL anchor tags are taken into consideration.
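The stem/anchor distinction running through those three points can be made concrete: the URI minus its fragment names the PPD (the document the verifier actually dereferences), while the full hash URI names the agent described inside that document's graph. A minimal sketch, with a made-up URI:

```python
from urllib.parse import urldefrag

webid = "https://example.org/people/jane/card#me"

# The stem addresses the profile document (PPD): this is what a
# resource server dereferences over HTTP during a WebID run.
ppd, fragment = urldefrag(webid)

assert ppd == "https://example.org/people/jane/card"
assert fragment == "me"

# Consequence: statements about the document (byte count, media type)
# attach to the stem; statements about the person (name, public keys)
# attach to the full hash URI. Conflating the two is exactly the
# "logical subtlety" the cert ontology and its query must respect.
```

To the average punter both strings look like "the home page", which is why the distinction has to be enforced in the ontology and server logic rather than left to users.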
What I cannot determine is whether anyone is claiming that these "logical subtleties" are "security enforcing".
Security enforcing is a technical term: to make the claim that X is security enforcing, one must define what contribution the feature provides, in either assurance or strength terms, to a security enforcing function (SEF) one must name (or define).
Classical HTTPS with client auth (plus PKI for key authentication) provides the SEF of "peer entity authentication". The PKI component provides the supporting SEF of "[asymmetric] key distribution".
I see two choices:
1. The theory of "hash URLs for Agents" could make statements about the peer entity authentication SEF - that the features above, individually or collectively, contribute to the security enforcing property. With this claim made, the features must, individually or collectively, explain their contribution in terms of their assurance quality or their strength quality; or
2. we don't even WANT to do option 1 - which is all about SSL and the world of "communications security". What folks may be wanting to claim is that there exists a world of "SEFs" (a term to be defined, in a semweb "assurance" vocab yet to be invented) that is unique to the semantic web. It is all about logical consistency and completeness of profile and facts, in a linking medium, and has nothing to do with the silly SSL handshake, really. The SSL handshake is to the WebID protocol what HTTP is to the semantic web: a supporting medium dealing with bit transfers or proofs about key control.
If I may comment, this project gets even more fascinating if option 2 is chosen. If folks were to first understand the formalism of the security engineering discipline for the likes of SSL, and then note that the traditional scope of that discipline is far too narrow for our purposes here, I'd guess that folks here have the wherewithal to express assurance in terms of particular proofs of logical completeness and correctness, or properties thereof - and these would be repeatable in actual protocol runs. The proof and the run would be one and the same thing. One doesn't analyse a protocol run after the event, once recast into the terms of analytical formulae (a la Birrell, Needham and lots of others); one just sends the formulae on the wire. The security of the WebID protocol run is the logical completeness (etc.) property, evidenced there and then, per run.

> Date: Sun, 30 Jan 2011 13:21:04 -0500
> Subject: Re: WebID-ISSUE-10 (bblfish): Hash URLs for Agents [ontologies]
> On Sun, 2011-01-30 at 01:53 +0100, Henry Story wrote:
> > On 30 Jan 2011, at 01:32, WebID Incubator Group Issue Tracker wrote:
> > 
> > > On 29 Jan 2011, at 21:04, Peter Williams wrote in the archived mail
> > >
> > > 
> > > What I really liked about the use of RDFa in the FOAF+SSL pre-incubator world was that the good ol' home page could easily be a foaf card, and thus the home page URI is a webid stem. To the average punter (who will rarely understand the significance of #tag on the end), the home page URI is a webid.
> > 
> > For people who are just joining consider the graph here:
> >
> > 
> > You will see that the web page has a different URL to the person. That is because you can
> > ask the question of how many characters are on the Profile Page, but it won't make sense
> > to ask how many characters are on Joe, and even if it does, the answer will usually 
> > be different. So logically there are good reasons to have different URLs for each.
> > If you give the same names to two things you can get a lot of confusion [ anybody a link to 
> > a comedy sketch that makes use of such a situation? ] And in fact in the semweb where
> > things are defined precisely you can prove that this is wrong.
> This is precisely true in context of machines, but, not necessarily
> false for humans. The human languages simply rely on the contextual
> usage in order to make sense of multiple things with the same name. We
> simply look for more clues than what's presented to us on the surface.
> And we are pretty good at making those distinctions by applying our own
> heuristics on the go given any information we can get a hold of (e.g.,
> context of the conversation, body language, previous experience)
> >From the human social point, all of homepage, webid, user account (and
> more) may very well be and I think that's
> perfectly fine. It doesn't help to evaluate the correctness of social
> languages (system A) using technical languages (system B).
> Hence, I naturally agree with the following:
> > Some points to notice:
> > - The end user, mom and pops, won't ever see a WebID. It will be hidden in a certificate. If they see anything it will be a home page.
> > - The WebID server logic will be mostly hidden in libraries
> as it leads us in the right direction.
> > So the only person this could be an issue with is the producer of the RDF.
> > If the RDF is generated automatically, then this won't be such a problem,
> > which is why RDF/XML and Turtle (please all learn turtle) have a long life
> > in front of them.
> > 
> > So the issue then is with the html developer. I think he can be taught. If he
> > does not do it right, it won't be a disaster immediately. One day it could
> > make his life awkward...
> > 
> > > There is no way in a million years I'll get even 2 realtors to ever use the foaf-generator sites and tools listed on the wiki. Getting them to add a paragraph of special html markup interspersed with normal paragraphs is quite feasible. It's a template, and we can give it to them.
> > 
> > We should improve the documentation as stated. I think WebID test suite will help.
> > Realtors seem unlikely to me to be building their own solution to this. My guess
> > is that they will buy some solution. That solution will help them do the right thing
> > easily.
> > 
> > > This RDFa argument for foaf cards mattered to me. It was like the "add sound file to mosaic browser" moment, successfully dumbing down stuff for the mass of folk without preventing the technical standards doing their thing, just as the experts here define.
> > 
> > 
> > How did it matter to you other than in a theoretical way? As I pointed out
> > people can make mistakes, that won't break things immediately. But since we know
> > the best way to do things right, we might as well specify it. People will make their
> > mistakes whatever we do, but they won't be able to blame us.
> I wouldn't put it in terms of blaming, but I agree with your point.
> We have an obligation to specify things as accurately as possible. When
> there are mistakes, which is inevitable, we'd use other means to correct
> them or close the gap on what's intended using heuristics in libraries.
> All of this is based on the premise that 'the task of verifying the
> soundness or even completeness of potential WebIDs is outside the scope
> of where they are claimed'. I think this is also in line with the idea
> where everyone is free to claim anything and at any amount in the RDF
> world.
> > People have managed to use the web and not understand the basics of how it works.
> > It just cost them over time. Imagine a news site that changes the URLs to its
> > articles. Doing that will break all incoming links, discouraging people to point
> > to them, and so reducing their long term value. There are many other examples. 
> > The W3C architecture group produces some fine documents whose authoritative power
> > lies not in the force of human law - nobody will stop anyone building their
> > broken web site with missing links all over the place - but in the value to the
> > user of doing things the right way
> > 
> > worth reading btw:
> > 
> > 
> > Henry
> > 
> > Social Web Architect
> >
> > 
> > 
> -Sarven

Received on Tuesday, 1 February 2011 05:48:44 UTC