
RE: how dirty can the HTML be, and still be RDFa?

From: Peter Williams <home_pw@msn.com>
Date: Fri, 25 Nov 2011 04:28:07 -0800
Message-ID: <SNT143-W431D7278CE9244E8332D3192CF0@phx.gbl>
To: "public-xg-webid@w3.org" <public-xg-webid@w3.org>

I updated the blogspot so the html element bears the namespaces. But, logically, I want to follow someone's email post that had the namespaces tied to the element itself (so it can be cut and pasted into a trivial blog post/page). The idea was that the RSS/Atom feed would then carry a self-contained bit of HTML that can represent a trivial graph. Of course, this being the web, the Atom feed strips the foaf markup from the post. What I also want is to be able to post 1000 graphs in 1000 posts, and then export the site for hosting elsewhere, where my little bit of HTML in each blog post is a self-contained expression of the RDFa-represented graph. Doesn't sound a lot to ask, does it? Bet it doesn't work, though.

From: home_pw@msn.com
To: public-xg-webid@w3.org
Date: Thu, 24 Nov 2011 17:06:27 -0800
Subject: how dirty can the HTML be, and still be RDFa?

Blogspot is free (like Wordpress), and consumer grade. Most importantly to me, it's part of the Google family, and thus works with a Google IDP login (that is now mapped onto US realty logins, via Azure's openid/ws-fedp gateways).
With one edit to a simple template, blogger did allow me to change the html tag's header (to comply with RDFa) and add some namespaces. And it did not strip out the marked-up material in the blog post that followed, which came from the current spec.
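For reference, the kind of self-contained snippet I mean looks roughly like this — namespaces on the element itself so it survives cut-and-paste. The name, modulus, and exact cert property names here are just placeholders standing in for whatever the current spec's example uses:

```html
<div xmlns:foaf="http://xmlns.com/foaf/0.1/"
     xmlns:cert="http://www.w3.org/ns/auth/cert#"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
     about="#me" typeof="foaf:Person">
  <!-- the foaf card: one person, one public key -->
  <span property="foaf:name">Placeholder Name</span>
  <div rel="cert:key" typeof="cert:RSAPublicKey">
    <span property="cert:exponent" datatype="xsd:integer">65537</span>
    <span property="cert:modulus" datatype="xsd:hexBinary">00cafe...</span>
  </div>
</div>
```

Everything the parser needs travels inside the one div, which is the point of the exercise.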
But the result is nasty when tested using the W3C validator. It's not that nasty, however, as the webid test suite's tool chain shows:
Not surprisingly, uriburner got something useful: http://webid.fcns.eu/lookup.php?uri=http%3A%2F%2Fyorkporc.blogspot.com%2F2011%2F11%2Fnothing.html%23me&submit=+Lookup+&html=0
Now, the point is that, regardless of the fact that it doesn't validate per the schema, two tools do seem to be happy. One (uriburner) is probably doing lots of guessing and intuiting of the data; the other, I'll guess, is simpler, and is simply parsing the (dirty) HTML per the standard.
Now, I could go to my Microsoft CA and mint 1000 .p12 files whose certs have the relevant blogspot post URI, use each user's password to encrypt the file, post off a download URI to the user's registered email address, and also machine-post 1000 user profiles in RDFa, one to each of 1000 such entries on that one blog site (creating 1000 "foaf cards" formally, each on its own URI, each with a # hashtag, and each with the cert). But is that kind of dirty HTML intended to be acceptable and consumable by the typical webid validation agent?
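Minting one of those certs by hand, outside a Microsoft CA, might look roughly like this with OpenSSL — the blogspot post URI goes in the subjectAltName so the cert points back at the profile. The filenames, CN, and password below are made-up placeholders, not anything my CA actually emits:

```shell
# Generate a key pair and a self-signed cert whose subjectAltName
# carries the profile URI (the blogspot post URI plus its #me fragment).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout user1.key.pem -out user1.cert.pem -days 365 \
  -subj "/CN=WebID user 1" \
  -addext "subjectAltName=URI:http://yorkporc.blogspot.com/2011/11/nothing.html#me"

# Bundle key + cert into a password-protected .p12 for the user to download.
openssl pkcs12 -export -inkey user1.key.pem -in user1.cert.pem \
  -out user1.p12 -passout pass:user-chosen-password
```

Wrap that in a loop over 1000 post URIs and you have the batch I described.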
I'm hoping the answer is yes. I need it really simple (and what I did above satisfies that rule).
It really matters (to me) that I can use commodity web stuff, with sites powered by multi-vendor websso, working alongside Google Apps, hotmail, etc. At some point, the keys in the webid profile will have to cooperate with the more formal CA-managed certs that realtors maintain (so they can submit signed PDF documents to the US govt realty sites). But that can wait.
Received on Friday, 25 November 2011 12:28:39 UTC
