
RE: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...

From: Mo McRoberts <Mo.McRoberts@bbc.co.uk>
Date: Thu, 29 Dec 2011 09:04:30 -0000
Message-ID: <2D51F7E4325EF540A7E484A7A19216070CF62C@bbcxues15.national.core.bbc.co.uk>
To: "Peter Williams" <home_pw@msn.com>, <kidehen@openlinksw.com>, <public-xg-webid@w3.org>
hold on a second.

is somebody saying fragment identifiers SHOULD be included in a request somewhere?

HTTP/1.0 and HTTP/1.1 very explicitly say otherwise (and as far as I can tell, the HTTPbis WG outputs haven't changed that), and last I checked nothing about linked data changes that either; part of the point of linked data is that it doesn't require anything "special".

AFAICT, a server is perfectly within its rights to return a 4xx response to a request containing a fragment, and that includes a 400 (Bad Request), given that an unescaped '#' isn't permitted in a Request-URI.

if there's a spec somewhere which says otherwise, I'd love to know about it (not least so I can tweak my own servers), but the current httpbis-p1-messaging draft even goes as far as to say:

"Note: Fragments ([RFC3986], Section 3.5) are not part of the request-target and thus will not be transmitted in an HTTP request."

To the best of my knowledge this is a point of clarification rather than a change in specification; it's just that some folk hadn't read the URI ABNF properly.
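For what it's worth, this is exactly what stock HTTP client libraries already do: the fragment is split off before the request is built and never reaches the wire. A minimal Python sketch (the URI here is a made-up example, not one from this thread):

```python
from urllib.parse import urldefrag, urlsplit

# Hypothetical WebID; any http URI with a fragment behaves the same way.
webid = "http://example.org/card#me"

# A conforming client dereferences only the document part; the fragment
# is kept back for client-side interpretation, per RFC 3986 section 3.5.
document_uri, fragment = urldefrag(webid)
assert document_uri == "http://example.org/card"
assert fragment == "me"

# The request-target actually sent on the wire (origin-form) is just
# the path -- no '#' ever appears in it.
request_target = urlsplit(document_uri).path or "/"
print(request_target)  # -> /card
```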

M.

-- 
Mo McRoberts - Technical Lead - The Space,
0141 422 6036 (Internal: 01-26036) - PGP key CEBCF03E,
Project Office: Room 7083, BBC Television Centre, London W12 7RJ  

> -----Original Message-----
> From: Peter Williams [mailto:home_pw@msn.com] 
> Sent: Wednesday, December 28, 2011 7:05 PM
> To: kidehen@openlinksw.com; public-xg-webid@w3.org
> Subject: RE: neither FCNS nor FOAFSSL can read a new foaf 
> card (hosted in Azure). RDFa validators at W3C and 
> RDFachecker say its fine...
> 
> 
>  
> 
> ok that was understandable.
>  
> IF our spec makes a webid validation agent a linked data 
> client, then it can assume the web server is acting as a 
> linked data server - and has the #fragment handling rule. 
> This means it does what Henry's server does, today.
>  
> Today, the spec does NOT say this (or explain the 
> ramifications, when using a conforming "mere" web server). 
> The RDFa example (with relative naming) may work in the 
> logical querying sense, but ONLY when served from a linked 
> data class web server (vs a "stupid" windows server doing 
> exactly what the "even stupider" actual RFC on HTTP spec 
> says/said, just like good engineers are trained to do).
>  
> This also means I'm screwed, by using windows native stuff. 
> It's a lost cause, since I think the old rule is baked into 
> the kernel (just like the cert rootedness issue).
>  
> let's hope I'm wrong, and at least windows can serve a stupid 
> FOAF file in RDFa. If someone has an IIS7 rewrite rule that 
> allows a bog standard web site to receive URIs from the net 
> with fragments (and not be rejected early in the pipeline, 
> otherwise), share it. I looked for it, and failed to find it. 
> I have rdp access to the real windows server doing the 
> resource serving, so can fiddle IIS's pipeline (I believe) - 
> even though these endpoints are supposed to be "all in the cloud".
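For what it's worth: a literal '#' in the request line is typically rejected by http.sys before the IIS pipeline (including the URL Rewrite module) ever sees it, which would match the "baked into the kernel" suspicion, so a rewrite rule can at best catch a percent-encoded fragment. An untested web.config sketch of that kind of rule (the rule name and pattern are mine, and whether the pattern sees the encoded or decoded form depends on module configuration):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Untested sketch: strip a trailing percent-encoded fragment
             ("%23...") from the incoming URL before it reaches the app.
             A raw '#' never gets this far; http.sys rejects it first. -->
        <rule name="StripEncodedFragment" stopProcessing="true">
          <match url="^(.*)%23.*$" />
          <action type="Rewrite" url="{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```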
>  
>  
>  
>  
> ________________________________
> 
> Date: Wed, 28 Dec 2011 13:53:19 -0500
> From: kidehen@openlinksw.com
> To: public-xg-webid@w3.org
> Subject: Re: neither FCNS nor FOAFSSL can read a new foaf 
> card (hosted in Azure). RDFa validators at W3C and 
> RDFachecker say its fine...
> 
> 
> On 12/28/11 10:33 AM, Peter Williams wrote: 
> 
> 	Look at two cases in http://tinyurl.com/ck5j6bv 
> 	 
> 	FOR THIS COMMUNITY, is my site behaving correctly on 
> issuing a 400 error, for a GET that, on the wire, bears a fragment?
> 	
> 
> 
> If URL+#me crosses the wire (as it does in the Windows realm) 
> you are basically asking the server to provide access to a 
> resource at what is more than likely a non-existent address. 
> Thus, expect a 404 re. Linked Data servers. A Linked Data 
> Server works on the premise of each unambiguously named 
> data object being associated with a descriptor resource at an 
> address. The association is managed by the Linked Data Server 
> since it oversees the handling of URIs (be they names or addresses). 
> 
> The thing about WebID is that the SAN must hold a 
> dereferenceable Name, not a dereferenceable Address. As per 
> prior comments, there has to be > 1 level of indirection re. 
> this form of data access by reference. Without this level of 
> indirection we end up conflating Object Identity with Object 
> Representation when using HTTP URIs. 
> 
> To conclude, if you control what goes over the wire, then 
> understand that the Linked Data server is going to ultimately 
> perform Name/Address disambiguation. Thus, what you GET will 
> be treated as a URL to which a re-write is applied, i.e., the 
> URL+#me will be treated as a valid address from which a 200 OK 
> would be expected, but a 404 will occur. 
> 
> Now #me across the wire is a legacy issue arising from what I 
> hear was a typo, so in reality, a Linked Data Server could 
> have an additional rule whereby the GET URL is now a proper 
> generic URI. Net effect (in our case) would be to no longer 
> use the FROM Clause in our SPARQL and just place the URI we 
> receive in the Subject slot of our query pattern. 
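Hedging on exact syntax, the two query shapes being contrasted look roughly like this (the card URI is a placeholder, and cert:key is the property from the W3C cert ontology):

```sparql
PREFIX cert: <http://www.w3.org/ns/auth/cert#>

# Before: the document is named in the FROM clause,
# and the subject is left open.
SELECT ?key
FROM <http://example.org/card>
WHERE { ?s cert:key ?key }

# After: the URI received over the wire goes straight into
# the subject slot, so no FROM clause is needed.
SELECT ?key
WHERE { <http://example.org/card#me> cert:key ?key }
```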
> 
> Conclusion: the fragment-id-over-the-wire handling in Windows 
> (solely) has led to the confusion we have today. Now this 
> doesn't put Windows at fault since the whole issue seems to 
> have arisen from a spec typo!
> 
> Action Items: for us, we'll just add an additional re-write 
> rule for # URIs that cross the wire :-)
> 
> 
> Kingsley 
> 
> 
> 
> 	 
> 	Henry's site does not issue an error, for a GET on the 
> wire that bears a fragment.
> 	 
> 	Is it important to the relative naming resolution (of 
> sparql) that the site support GET requests with fragments (on 
> the wire)?
> 	 
> 	In the analogous openid world, it is NOT important when 
> those agents obtained (XRD) metadata. Validating agents MUST 
> normalize the URI, to remove the fragment from the wire-form 
> of the URI - thus emulating a browser.
> 	 
> 	By default, a bog standard wizard-driven website built 
> on windows does what I show. One can trivially amend the 
> document so doctypes have the right DTD and the HTML gets 
> RDFa markup. But, IS THIS ENOUGH for semantic web compliance?
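As a rough illustration of the markup in question, something along these lines (the modulus value and exact attribute layout are illustrative; the spec's own RDFa example is the authority here):

```html
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     xmlns:cert="http://www.w3.org/ns/auth/cert#"
     about="#me" typeof="foaf:Person">
  <div rel="cert:key">
    <div typeof="cert:RSAPublicKey">
      <span property="cert:modulus"
            datatype="http://www.w3.org/2001/XMLSchema#hexBinary">cafe...</span>
      <span property="cert:exponent"
            datatype="http://www.w3.org/2001/XMLSchema#integer">65537</span>
    </div>
  </div>
</div>
```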
> 	 
> 	Perhaps this is the core issue.
> 	 
> 	 
> 	 
> 	
> 	> Date: Wed, 28 Dec 2011 09:44:10 +0100
> 	> From: j.jakobitsch@semantic-web.at
> 	> To: home_pw@msn.com
> 	> CC: kidehen@openlinksw.com; public-xg-webid@w3.org
> 	> Subject: Re: neither FCNS nor FOAFSSL can read a new 
> foaf card (hosted in Azure). RDFa validators at W3C and 
> RDFachecker say its fine...
> 	> 
> 	> hi,
> 	> 
> 	> you should use http://www.w3.org/2007/08/pyRdfa/ to 
> check your rdfa.
> 	> 
> 	> paste 
> 	> 
> 	> 1. http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx => rdf
> 	> 2. http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx#me => empty rdf
> 	> 
> 	> in the "distill by URI" tab.
> 	> 
> 	> wkr http://www.turnguard.com/turnguard
> 	> 
> 	> ----- Original Message -----
> 	> From: "Peter Williams" <home_pw@msn.com> 
> <mailto:home_pw@msn.com> 
> 	> To: kidehen@openlinksw.com, public-xg-webid@w3.org
> 	> Sent: Wednesday, December 28, 2011 8:08:19 AM
> 	> Subject: RE: neither FCNS nor FOAFSSL can read a new 
> foaf card (hosted in Azure). RDFa validators at W3C and 
> RDFachecker say its fine...
> 	> 
> 	> 
> 	> 
> 	> Your tester fails against http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx#me
> 	> 
> 	> The stream is literally the RDFa card from the spec 
> (with the modulus changed). 
> 	> 
> 	> (The endpoint will provide an error response, should 
> the GET bear a fragment in the URI request arg.) 
> 	> 
> 	> While the "snippet" of that spec card works fine in 
> blogger with all test sites, none of the 3 testing sites work 
> with what is actually given. This suggests the spec needs to 
> change its example. 
> 	> 
> 	> One notes how the Turtle example is absolutely 
> anchored (unlike the RDFa example). Advise that the spec have 
> identical triples (in different representations) 
> 	> 
> 	> 
> 	> > From: home_pw@msn.com 
> 	> > To: kidehen@openlinksw.com; public-xg-webid@w3.org 
> 	> > Date: Tue, 27 Dec 2011 21:37:48 -0800 
> 	> > Subject: RE: neither FCNS nor FOAFSSL can read a 
> new foaf card (hosted in Azure). RDFa validators at W3C and 
> RDFachecker say its fine... 
> 	> > 
> 	> > 
> 	> > I have spent a few hours really getting to grips 
> with both ODS and linkburner. 
> 	> > 
> 	> > Certain things are VERY straightforward. 
> 	> > 
> 	> > 
> 	> > 
> 	> > I logon with a password, and then map a cert to the 
> account (just like in windows). And, I can use the ODS 
> builtin CA, to mint a second cert with a variety of browser 
> plugins/keygentags. The net result is I can do https client 
> authn to ODS, replacing the password challenge. Technically, a 
> cert-based login to ODS may even count as an act of webid 
> validation (rather than mere https client authn based on 
> fingerprint matching). 
> 	> > 
> 	> > 
> 	> > 
> 	> > Next, the account gives me a profile page. For any 
> n certs registered (with logon privileges, or not), the 
> profile publishes cert:key. Well done. From cert, infer 
> cert:key. For a third party cert, I can now reissue it (same 
> pubkey) adding the ODS profile URI. 
> 	> > 
> 	> > 
> 	> > 
> 	> > Then I got a real feel for sponging an html/rdfa 
> resource. The proxy profile/URI is essentially a new 
> profile, borrowing bits from the "data source" that it screen 
> scrapes. It has nothing to do with the account's own profile 
> page. The resultant profile has a proxy URI, and one can put 
> this in the SAN URI set of the cert whose pubkey was in 
> the original data source (and now in the proxy profile). 
> 	> > 
> 	> > 
> 	> > 
> 	> > I altered my http://yorkporc2.blogspot.com/ 
> template/page. It now has a webid.cert relation/link. It's a 
> data URI, of type cert... with base64 blob content. Ideally, 
> sponger would now infer cert:key from that link (but not any 
> webid/foaf material), much like ODS profile inferred cert:key 
> from its store of mapped certs/accounts. It would sponge the 
> rest of the foaf card as normal. 
> 	> > 
> 	> > 
> 	> > 
> 	> > I was able to use the ODS webid validator to 
> validate against my cloud/azure hosted TTL card. 
> 	> > 
> 	> > 
> 	> > 
> 	> > I was able to run sparql queries on my yorkporc 
> HTML and TTL resources. I now understand (finally, after 2 
> years) why the sparql query for HTML gives the proxy name for 
> the subject (with cert:key) rather than the data source's URI. 
> I'm really doing sparql against the proxy profile (not the 
> data source), despite the FROM clause in the sparql 
> identifying the data source. When one uses a non-sponged 
> resource (TTL), the sparql result is more intuitive as to 
> subject names. 
> 	> > 
> 	> > 
> 	> > 
> 	> > i went through all the product documentation. 
> 	> > 
> 	> > 
> 	> > 
> 	> > I learned that you are using the foaf:account as a 
> mapping mechanism (not merely a publication device). If one 
> uses facebook websso to authenticate, it maps to an ODS 
> account whose foaf profile publishes said facebook account 
> name in a foaf:account property. 
> 	> > 
> 	> > 
> 	> > 
> 	> > I suspect (but could not confirm) that 
> foaf:openid similarly enables an openid identifier presented 
> in openid websso to map to an ODS profile, on login 
> authentication. I failed to find any UI to get the system to 
> act as an openid relying party, talking to my 
> http://yorkporc.wordpress.com's openid server. 
> 	> > 
> 	> > 
> 	> > 
> 	> > The built in openid server (that uses a webid 
> challenge) is confusing. I don't know if the webids and 
> profiles that it vouches for are limited to those in an ODS 
> profile, in a proxy profile, or are for any other public 
> webid (for which a proxy profile is immediately created). 
> 	> > 
> 	> > 
> 	> > 
> 	> 
> 	> -- 
> 	> | Jürgen Jakobitsch, 
> 	> | Software Developer
> 	> | Semantic Web Company GmbH
> 	> | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
> 	> | A - 1070 Wien, Austria
> 	> | Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22
> 	> 
> 	> COMPANY INFORMATION
> 	> | http://www.semantic-web.at/
> 	> 
> 	> PERSONAL INFORMATION
> 	> | web : http://www.turnguard.com
> 	> | foaf : http://www.turnguard.com/turnguard
> 	> | skype : jakobitsch-punkt
> 	> 
> 	
> 
> 
> 
> -- 
> 
> Regards,
> 
> Kingsley Idehen	      
> Founder & CEO 
> OpenLink Software     
> Company Web: http://www.openlinksw.com
> Personal Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca handle: @kidehen
> Google+ Profile: https://plus.google.com/112399767740508618350/about
> LinkedIn Profile: http://www.linkedin.com/in/kidehen
> 
> 
> 
> 
> 

http://www.bbc.co.uk/
This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated.
If you have received it in error, please delete it from your system.
Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately.
Please note that the BBC monitors e-mails sent or received.
Further communication will signify your consent to this.
					
Received on Thursday, 29 December 2011 09:05:19 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Thursday, 29 December 2011 09:05:22 GMT