
RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say it's fine...

From: Peter Williams <home_pw@msn.com>
Date: Sun, 1 Jan 2012 09:20:10 -0800
Message-ID: <SNT143-W4EEF4B933CD08D2638E7392900@phx.gbl>
To: <mo.mcroberts@bbc.co.uk>
CC: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>

 > > The case the triples MUST be returned in server-server communications with a #fragment on the wire of the GET is what I have learned is termed: "linked data nuances" for web server configuration. 
> This is pretty much entirely incorrect.
Good. I was desperately hoping this would be the case (and would put an end to this). I wanted it to be true that no subtlety of dereferencing (at 100% rigor) requires something a native Windows HTTP endpoint cannot do; I will take your word that it is. Jurgen misdirected me into believing that the delivery of no triples in that case was, on Windows, some kind of "issue." It isn't. Rather, the W3C server, by issuing a GET with the #fragment on the wire, simply exercised a path no one uses, and that induces confusion. By hosting that particular script, W3C is perpetuating engineering that doesn't work (while being academically correct). I now go back to my original belief that Windows endpoints were doing what was needed, in a 100% conforming manner; no ifs, no buts.

We also know that Jurgen's first case worked, when the W3C server pulled the triples from the Windows data source, on a classical HTTP endpoint. In his first case, the W3C data-consumer server pulled the triples much as any browser would. I also know that the Windows HTTP endpoint sent an HTML stream in an HTTP response, where the stream was exactly the RDFa example in the spec. Yes, I controlled the HTTP response headers.

I also know that all three test sites then failed to accept my browser's SSL handshake, whether triple-walking (bergi) or ASK-processing (with 100% rigorous dereferencing). The same sites accept my SSL handshake when the cert has other URIs embedded in the SAN, including a URI identifying a Blogspot data source and one identifying a Windows blog-service data source. But neither of those serves purely what the spec provides as its RDFa model: though Blogspot serves RDFa, the Blogspot template adds many more triples. A private conversation identified the issue as one of classical ambiguity in the RDFa case, concerning the design of the specific instance in the spec. I'm out of my depth in determining whether that is true or not.
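The fragment point is worth pinning down on the wire. Per RFC 3986, a consumer dereferencing a webid strips the #fragment before issuing the request; for a hypothetical webid https://example-windows-host/foaf.html#me, a conforming client sends only:

```
GET /foaf.html HTTP/1.1
Host: example-windows-host
Accept: text/html, application/rdf+xml
```

so the Windows endpoint never has to handle a fragment at all; #me is resolved client-side, against the triples in the returned document.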
When the specific example of the RDFa card was amended to have an absolute subject, two sites could validate the SSL handshake when the Windows endpoint served the underlying document. But there was a catch. I finally made a cert with two SAN URIs: 1) pointing directly at Windows, and 2) pointing at a proxy URI for the Windows data source (at URIburner). On this condition, Henry's FOAFSSL site accepted my SSL handshake with such a cert. It clearly reported which of the two SAN URIs worked and which didn't: FOAFSSL refused the webid pointing directly at the Windows data source of triples, and accepted the webid pointing at URIburner (which "proxies" the Windows data source). Kingsley indicated that his proxy cleans up ambiguities in the naming/addressing features of webids and of URIs embedded in documents, as it does for any data source it "sponges" (crawls, and screen-scrapes into "regularized" RDF). This accounts for the difference in behaviour.

While that doesn't solve my problem (how to make a Windows endpoint, using a bog-standard website built by the typical wizard, serve the spec's RDFa stream in a manner that actually works), it did lead to something of intense value, as we moved on to consider the proxy URI and the data-source URI. In short, we learned to consider two worlds: the world of names, and the world of values.

In the naming world, we learned to use owl:sameAs to build reciprocal assertions and so fashion an equivalence class. This enables those doing 100% rigorous dereferencing to consider two documents equivalent, given assertions about the names. This in turn supports a lattice of equivalence classes, which supports the rule-based access controls used in military and Fortune 100 companies' information systems. It facilitates the infamous rules whereby secret-cleared folk cannot read top-secret documents, and those with top-secret clearance cannot write down to secret documents.
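As a minimal Turtle sketch (with hypothetical URIs standing in for my actual Windows and URIburner addresses), the reciprocal naming-world assertions look like this:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# In the document served directly by the Windows endpoint:
<https://example-windows-host/foaf.html#me>
    owl:sameAs <https://uriburner.example/proxy/foaf.html#me> .

# And, reciprocally, in the proxied document at URIburner:
<https://uriburner.example/proxy/foaf.html#me>
    owl:sameAs <https://example-windows-host/foaf.html#me> .
```

An OWL-aware validator that rigorously dereferences either name can then treat the claims in the two documents as claims about one subject: the equivalence class.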
The commercial equivalents for managing intellectual property are obvious, using commercial markings and the lattice-dominance rule. Military or Fortune 100, it addresses the classical world of mandatory access control enforcing compartmentalization.

In the value world, we learned how the core cert:key relation is designed, and the inner property of its inverse-functional nature. Being an IFP, a value-based equivalency of associated triples in documents hosted on different endpoints can be inferred by typical web crawlers. This enables crawlers without OWL to infer equivalency of value between documents from distinct data sources whose endpoints deliver container documents. This all facilitates identity-based access control, using ACLs. It addresses the classical world of discretionary access control, enabling individual users to extend the security policy for a given resource.

We also learned how a SPARQL server can itself be a data source, when delivering an HTML stream of a SPARQL result set. When a SPARQL query is embedded in a URI per the SPARQL protocol, that URI can go into the SAN. Kingsley gave an example of an ODS-hosted RDFa source (of XML-represented triples) being transformed by rdfa-translator (which reworked the XML triples), whose URI was used in a SPARQL FROM clause; on execution this produced an HTML-format result set bearing (yes) a modulus. Thus he showed a SPARQL endpoint acting as a data source of webid profiles. Today, I will play more with his SPARQL endpoint, since it also has a webid guard capability. Its operator's willingness for this SPARQL endpoint to be a data source (for the transformed, FROM-sourced original document) is guarded by two properties, already introduced above: i) it can enforce authn, as a webid validator, on the requestor invoking the SPARQL protocol run; and ii) said user, once authenticated, must be granted access by a webid-centric access-control guard, enforcing either RBAC or IBAC security policies.
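The value-world equivalence can be sketched in Turtle too (URIs hypothetical, key values shortened and illustrative only): two profile documents on distinct endpoints name different subjects, yet publish the same modulus and exponent, so even an OWL-less crawler can match them by value:

```turtle
@prefix cert: <http://www.w3.org/ns/auth/cert#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# From the document at the first endpoint:
<https://example-windows-host/foaf.html#me> cert:key [
    a cert:RSAPublicKey ;
    cert:modulus "00cafef00d"^^xsd:hexBinary ;   # shortened placeholder
    cert:exponent 65537
] .

# From the document at the second endpoint:
<https://uriburner.example/proxy/foaf.html#me> cert:key [
    a cert:RSAPublicKey ;
    cert:modulus "00cafef00d"^^xsd:hexBinary ;   # same key value
    cert:exponent 65537
] .
```

Because a public key in practice identifies its holder (the inverse-functional intuition above), equal modulus and exponent values license treating the two subjects as the same agent, without any owl:sameAs reasoning.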
Today, I intend to discover whether this works, and whether his server's authorization policy is enforcing RBAC (using the OWL equivalencies) or IBAC (using the IFP equivalences). With that done, I will then focus on his application of "trusted" SPARQL servers, running advanced queries (beyond webid's ASK) that perform iterative name/value resolution, much like DNS. This will enable me to see what works in the under-specified part of the webid spec: using relations such as foaf:knows at webid validation agents to "qualify" whether the subject of an SSL handshake claiming a webid is valid.
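For reference, the webid ASK that these validators run is, in sketch form (URIs and key values hypothetical, shaped per the spec's verification step), roughly:

```sparql
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

# Does the profile document claim this public key for this webid?
ASK FROM <https://example-windows-host/foaf.html> {
    <https://example-windows-host/foaf.html#me> cert:key [
        cert:modulus "00cafef00d"^^xsd:hexBinary ;   # taken from the presented cert
        cert:exponent 65537
    ] .
}
```

Embedded in a URI per the SPARQL protocol (a query= parameter on the endpoint), this same text is what a guarded SPARQL endpoint would execute on a validator's behalf, which is why the endpoint's own webid authn and access-control guards matter.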
Received on Sunday, 1 January 2012 17:20:39 GMT
