- From: Peter Williams <home_pw@msn.com>
- Date: Wed, 28 Dec 2011 11:05:22 -0800
- To: <kidehen@openlinksw.com>, "public-xg-webid@w3.org" <public-xg-webid@w3.org>
- Message-ID: <SNT143-W1905EDC1C5A1D309F72EED92AC0@phx.gbl>
ok, that was understandable. IF our spec makes a webid validation agent a linked data client, then it can assume the web server is acting as a linked data server - and has the #fragment handling rule. That means it does what Henry's server does today. Today, the spec does NOT say this (or explain the ramifications when using a conforming "mere" web server). The RDFa example (with relative naming) may work in the logical querying sense, but ONLY when served from a linked-data-class web server (vs. a "stupid" Windows server doing exactly what the "even stupider" actual HTTP RFC says/said, just as good engineers are trained to do).

This also means I'm screwed by using Windows native stuff. It's a lost cause, since I think the old rule is baked into the kernel (just like the cert rootedness issue). Let's hope I'm wrong, and at least Windows can serve a stupid FOAF file in RDFa.

If someone has an IIS7 rewrite rule that allows a bog-standard web site to receive URIs from the net bearing fragments (and not have them rejected early in the pipeline), please share it. I looked for one and failed to find it. I have RDP access to the real Windows server doing the resource serving, so I can fiddle with IIS's pipeline (I believe) - even though these endpoints are supposed to be "all in the cloud".

Date: Wed, 28 Dec 2011 13:53:19 -0500
From: kidehen@openlinksw.com
To: public-xg-webid@w3.org
Subject: Re: neither FCNS nor FOAFSSL can read a new foaf card (hosted in Azure). RDFa validators at W3C and RDFachecker say its fine...
On 12/28/11 10:33 AM, Peter Williams wrote:
Look at two cases in http://tinyurl.com/ck5j6bv
FOR THIS COMMUNITY, is my site behaving correctly in issuing a
400 error for a GET that, on the wire, bears a fragment?
If URL+#me crosses the wire (as it does in the Windows realm) you
are basically asking the server to provide access to a resource at
what is more than likely a non-existent address. Thus, expect a 404
from Linked Data servers. A Linked Data Server works on the
premise that each unambiguously named data object is associated
with a descriptor resource at an address. The association is managed
by the Linked Data Server since it oversees the handling of URIs (be
they names or addresses).
The thing about WebID is that the SAN must hold a de-referencable
Name, not a de-referencable Address. As per prior comments, there
has to be more than one level of indirection in this form of data
access by reference. Without this level of indirection we end up
conflating Object Identity with Object Representation when using
HTTP URIs.
To conclude, if you control what goes over the wire, then understand
that the Linked Data server is ultimately going to perform
Name/Address disambiguation. Thus, what you GET will be treated as a
URL to which a re-write is applied, i.e., the URL+#me will be treated
as a valid address from which a 200 OK would be expected, but a 404
will occur.
Now #me across the wire is a legacy issue arising from what I hear
was a typo, so in reality, a Linked Data Server could have an
additional rule whereby the GET URL is now a proper generic URI. Net
effect (in our case) would be to no longer use the FROM Clause in
our SPARQL and just place the URI we receive in the Subject slot of
our query pattern.
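(To make that concrete: a minimal sketch of my own, not OpenLink's actual
code, using rdflib and a placeholder profile URL, showing the two query
shapes side by side.)

# Rough sketch of the two SPARQL shapes described above; the WebID and
# profile document URL are placeholders, not real endpoints.
from rdflib import Graph

webid = "https://example.net/people/peter#me"   # the Name (keeps its #fragment)
profile_doc = webid.split("#", 1)[0]             # the Address actually dereferenced

g = Graph()
g.parse(profile_doc, format="turtle")            # fetch the descriptor document

# Old shape: scope the query with a FROM clause naming the document.
q_with_from = """
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
SELECT ?key FROM <%s> WHERE { ?who cert:key ?key }
""" % profile_doc

# New shape: drop FROM and put the received URI straight in the subject slot.
q_subject_slot = """
PREFIX cert: <http://www.w3.org/ns/auth/cert#>
SELECT ?key WHERE { <%s> cert:key ?key }
""" % webid

for row in g.query(q_subject_slot):
    print(row.key)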
Conclusion: the handling of fragment ids over the wire in Windows
(solely) has led to the confusion we have today. Now this doesn't
put Windows at fault, since the whole issue seems to have arisen from
a spec typo!
Action Items: for us, we'll just add an additional re-write rule
for # URIs that cross the wire :-)
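(As a purely hypothetical sketch of such a re-write rule - not IIS
configuration and not OpenLink's actual rule - here is the idea as WSGI
middleware: if a literal fragment ever did reach the server in the request
path, strip it before routing instead of answering 400/404.)

# Hypothetical sketch only: fragments normally never cross the wire, but if a
# client or proxy did send "GET /Aboutrel.aspx#me", this middleware rewrites
# the path to "/Aboutrel.aspx" before the application sees it.
def strip_wire_fragment(app):
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if "#" in path:
            environ["PATH_INFO"] = path.split("#", 1)[0]
        return app(environ, start_response)
    return middleware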
Kingsley
Henry's site does not issue an error, for a GET on the wire that
bears a fragment.
Is it important to the relative naming resolution (of sparql)
that the site support GET requests with fragments (on the wire)?
In the analogous OpenID world, it is NOT important when those
agents obtain (XRD) metadata. Validating agents MUST normalize
the URI to remove the fragment from the wire-form of the URI -
thus emulating a browser.
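(That normalization step is a one-liner; a minimal sketch, using Python's
standard library and a made-up WebID, of what "emulating a browser" amounts
to.)

# Normalize a WebID before dereferencing, as a browser would: only the
# document part goes on the wire, the fragment is resolved client-side.
from urllib.parse import urldefrag

webid = "http://example.net/Aboutrel.aspx#me"   # example WebID, not a real one
doc_url, fragment = urldefrag(webid)

print(doc_url)    # http://example.net/Aboutrel.aspx
print(fragment)   # me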
By default, a bog-standard wizard-driven website built on
Windows does what I show. One can trivially amend the document so
the doctypes have the right DTD and the HTML gets RDFa markup. But
IS THIS ENOUGH for semantic web compliance?
Perhaps this is the core issue.
> Date: Wed, 28 Dec 2011 09:44:10 +0100
> From: j.jakobitsch@semantic-web.at
> To: home_pw@msn.com
> CC: kidehen@openlinksw.com; public-xg-webid@w3.org
> Subject: Re: neither FCNS nor FOAFSSL can read a new foaf
card (hosted in Azure). RDFa validators at W3C and RDFachecker
say its fine...
>
> hi,
>
> you should use http://www.w3.org/2007/08/pyRdfa/ to check
your rdfa.
>
> paste
>
> 1.
http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx
=> rdf
> 2.
http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx#me
=> empty rdf
>
> in the "distill by URI" tab.
>
> wkr http://www.turnguard.com/turnguard
>
> ----- Original Message -----
> From: "Peter Williams" <home_pw@msn.com>
> To: kidehen@openlinksw.com, public-xg-webid@w3.org
> Sent: Wednesday, December 28, 2011 8:08:19 AM
> Subject: RE: neither FCNS nor FOAFSSL can read a new foaf
card (hosted in Azure). RDFa validators at W3C and RDFachecker
say its fine...
>
>
>
> Your tester fails against
http://b3d0c8f68475422784748b65f76b1642.cloudapp.net:8080/Aboutrel.aspx#me
>
> The stream is literally the RDFa card from the spec (with
the modulus changed).
>
> (The endpoint will provide an error response, should the
GET bear a fragment in the URI request arg.)
>
> While the "snippet" of that spec card works fine in
Blogger with all test sites, none of the 3 testing sites works
with what is actually given. This suggests the spec needs to
change its example.
>
> One notes how the Turtle example is absolutely anchored
(unlike the RDFa example). I'd advise that the spec have identical
triples (in the different representations).
>
>
> > From: home_pw@msn.com
> > To: kidehen@openlinksw.com; public-xg-webid@w3.org
> > Date: Tue, 27 Dec 2011 21:37:48 -0800
> > Subject: RE: neither FCNS nor FOAFSSL can read a new
foaf card (hosted in Azure). RDFa validators at W3C and
RDFachecker say its fine...
> >
> >
> > I have spent a few hours really getting to grips
with both ODS and linkburner.
> >
> > Certain things are VERY straightforward.
> >
> >
> >
> > I log on with a password, and then map a cert to the
account (just like in Windows). And I can use the ODS built-in
CA to mint a second cert with a variety of browser
plugins/keygen tags. The net result is that I can do https client
authn to ODS, replacing the password challenge. Technically, a
cert-based login to ODS may even count as an act of webid
validation (rather than mere https client authn based on
fingerprint matching).
> >
> >
> >
> > Next, the account gives me a profile page. For any of the n
certs registered (with logon privileges, or not), the profile
publishes cert:key. Well done. From the cert, infer cert:key. For
a third-party cert, I can now reissue it (same pubkey), adding
the ODS profile URI.
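(What "from the cert, infer cert:key" amounts to, as a rough sketch of my
own - not ODS code - using the Python cryptography package, a placeholder
certificate file and a made-up WebID.)

# Rough sketch: read an X.509 cert and print the cert:key triples an
# ODS-style profile could publish for it; "peter.pem" and the WebID below
# are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

cert = x509.load_pem_x509_certificate(open("peter.pem", "rb").read())
pub = cert.public_key()
assert isinstance(pub, rsa.RSAPublicKey)
nums = pub.public_numbers()

print(f"""@prefix cert: <http://www.w3.org/ns/auth/cert#> .
<https://example.net/profile#me> cert:key [
    a cert:RSAPublicKey ;
    cert:modulus "{nums.n:x}"^^<http://www.w3.org/2001/XMLSchema#hexBinary> ;
    cert:exponent {nums.e}
] .""")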
> >
> >
> >
> > Then I got a real feel for sponging an html/rdfa
resource. The proxy profile/URI is essentially a new profile,
borrowing bits from the "data source" that it screen-scrapes.
It has nothing to do with the account's own profile page. The
resultant profile has a proxy URI, and one can put this in the
SAN URI set of the cert whose pubkey was in the original
data source (and is now in the proxy profile).
> >
> >
> >
> > I altered my http://yorkporc2.blogspot.com/
template/page. It now has a webid.cert relation/link. It's a
data URI, of type cert, with base64 content. Ideally, the
sponger would now infer cert:key from that link (but not any
webid/foaf material), much like the ODS profile inferred cert:key
from its store of mapped certs/accounts. It would sponge the
rest of the foaf card as normal.
> >
> >
> >
> > I was able to use the ODS webid validator to
validate against my cloud/Azure-hosted TTL card.
> >
> >
> >
> > I was able to run sparql queries on my yorkporc HTML
and TTL resources. I now understand (finally, after 2 years)
why the sparql query for HTML gives the proxy name for the
subject (with cert:key) rather than the data source's URI. I'm
really doing sparql against the proxy profile (not the data
source), despite the FROM clause in the sparql identifying the
data source. When one uses a non-sponged resource (TTL), the
sparql result is more intuitive as to subject names.
> >
> >
> >
> > I went through all the product documentation.
> >
> >
> >
> > I learned that you are using foaf:account as a
mapping mechanism (not merely a publication device). If one
uses Facebook WebSSO to authenticate, it maps to an ODS
account whose foaf profile publishes said Facebook account
name in a foaf:account property.
> >
> >
> >
> > I suspect (but could not confirm) that
foaf:openid similarly enables an openid identifier presented
in openid websso to map to an ODS profile on login
authentication. I failed to find any UI to get the system to act
as an openid relying party talking to the openid server at my
http://yorkporc.wordpress.com.
> >
> >
> >
> > The built-in openid server (that uses a webid
challenge) is confusing. I don't know if the webids and
profiles that it vouches for are limited to those in an ODS
profile, in a proxy profile, or extend to any other public webid
(for which a proxy profile is immediately created).
> >
>
> --
> | Jürgen Jakobitsch,
> | Software Developer
> | Semantic Web Company GmbH
> | Mariahilfer Straße 70 / Neubaugasse 1, Top 8
> | A - 1070 Wien, Austria
> | Mob +43 676 62 12 710 | Fax +43.1.402 12 35 - 22
>
> COMPANY INFORMATION
> | http://www.semantic-web.at/
>
> PERSONAL INFORMATION
> | web : http://www.turnguard.com
> | foaf : http://www.turnguard.com/turnguard
> | skype : jakobitsch-punkt
>
--
Regards,
Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Received on Wednesday, 28 December 2011 19:05:53 UTC