
RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say it's fine...

From: Peter Williams <home_pw@msn.com>
Date: Tue, 27 Dec 2011 21:54:50 -0800
Message-ID: <SNT143-W21A95FFD0AAF4A822D69E592AC0@phx.gbl>
To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>

> What URIBurner (a Virtuoso instance with its Linked Data Deployment
> middleware module enabled) does is generate a proxy/wrapper Linked Data
> URI because of the ambiguity it detects when dealing with the URIs used
> in your Cert's SAN.
Aha. When running SPARQL, sponging cleans up formal ambiguities. If there were no ambiguity, the SPARQL result set would have the original URI as the subject name. When forced to clean up, a new profile is screen-scraped and given a wrapper URI; in such cases, the SPARQL is run against a local RDF store populated with triples from only the proxy profile. On running the SPARQL queries against my TTL card, it had no ambiguities, and no proxy profile page was auto-mounted as a store for the query.

In all cases (originally) I used the RDFa from the spec (which had relative names). And, normally, my SAN URIs bear the fragment (except when I'm stressing others' implementations). When posted to Blogger, validating sites could read those RDFa foaf cards (embedded in wider content). When posted to my own site, sites cannot work with the same streams. I will assume that somehow Blogger's template cleans up the suggested RDFa, making it palatable. Posted directly as a raw stream on a web site endpoint, the spec's suggested RDFa causes interoperability issues, even though it validates and appears to be sensible RDFa.
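For anyone reproducing this: a minimal sketch of the relative-name issue, with placeholder host names (example.com is an assumption, not my actual deployment). A relative about="" resolves against whatever base URI the hosting page or template establishes, so the subject URI a consumer mints can differ from the cert's SAN; an absolute subject with the fragment avoids that resolution step entirely.

```html
<!-- Hypothetical FOAF card fragments; hosts/paths are placeholders. -->

<!-- Relative subject: the resolved URI depends on the page's base URI,
     so wrapping templates (e.g. Blogger's) change what subject is minted. -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="#me" typeof="foaf:Person">
  <span property="foaf:name">Peter Williams</span>
</div>

<!-- Absolute subject bearing the fragment: identical to the SAN URI in
     the cert, regardless of where the raw stream is hosted. -->
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     about="https://example.com/card#me" typeof="foaf:Person">
  <span property="foaf:name">Peter Williams</span>
</div>
```

Both fragments validate as RDFa; only the second yields the same subject triples no matter which endpoint serves the stream.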
Received on Wednesday, 28 December 2011 05:55:17 GMT
