Re: how dirty can the HTML be, and still be RDFa?

On 25 Nov 2011, at 13:28, Peter Williams wrote:

> 
>  I updated the blogspot so the html element bears the namespaces.
>  
> But, logically, I want to follow someone's email post that had the namespaces tied to the element (to be cut and pasted into a trivial blog post/page).
>  
>  
> The idea was that the RSS/Atom feed would then have a self-contained bit of HTML that can represent a trivial graph. Of course, being the web, the Atom feed strips the foaf markup from the post.
>  
> What I also want is to be able to post 1000 graphs in 1000 posts, and then export the site for hosting elsewhere, where my little bit of HTML in each blog post is a self-contained expression of the RDFa-represented graph.
>  
> Doesn't sound a lot to ask, does it? Bet it doesn't work, though.

So why don't you start with something that works, instead of something you are betting won't work?

Also, I am not sure this list needs to hear about all the ways things won't work. I am interested to see your
statically published, reliable WebID Profile, and am looking forward to a service of yours that works according
to spec in such a way that it interacts with what we are doing here. If I were you I'd hurry, because it does
the Windows toolchain you keep speaking about no favours that you have been on this list for three years and
still have nothing to show. Perhaps you are working for Oracle in disguise? ;-)
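
For what it's worth, the self-contained fragment being aimed at does not need anything on the html element at all: RDFa 1.1 allows prefixes to be declared on the fragment itself, so it survives cut-and-paste and feed syndication. A minimal sketch (the name and key values here are illustrative placeholders, not from any real profile):

```html
<!-- A sketch of a self-contained WebID profile fragment (RDFa 1.1).
     The prefix declarations sit on the div itself, so a feed or blog
     exporter that drops attributes from the html element does not
     break the graph. -->
<div prefix="foaf: http://xmlns.com/foaf/0.1/
             cert: http://www.w3.org/ns/auth/cert#
             xsd: http://www.w3.org/2001/XMLSchema#"
     about="#me" typeof="foaf:Person">
  <span property="foaf:name">Example Person</span>
  <div rel="cert:key">
    <div typeof="cert:RSAPublicKey">
      <span property="cert:exponent" datatype="xsd:integer">65537</span>
      <!-- a real modulus is far longer than this placeholder -->
      <span property="cert:modulus" datatype="xsd:hexBinary">cafebabe</span>
    </div>
  </div>
</div>
```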

> From: home_pw@msn.com
> To: public-xg-webid@w3.org
> Date: Thu, 24 Nov 2011 17:06:27 -0800
> Subject: how dirty can the HTML be, and still be RDFa?
> 
> Blogspot is free (like Wordpress), and consumer grade. Most importantly to me, it's part of the Google family, and thus works with a Google IDP login (that is now mapped onto US realty logins, via Azure's OpenID/WS-Fed gateways).
>  
> With one edit to a simple template, Blogger did allow me to change the html tag's header (to comply with RDFa) and add some namespaces. And it did not strip out the marked-up material in the blog post that followed, which came from the current spec.
>  
> But the result is nasty when tested using the W3C validator. It's not that nasty, however, as the WebID test suite's tool chain shows:
>  
> http://webid.fcns.eu/lookup.php?uri=http%3A%2F%2Fyorkporc.blogspot.com%2F2011%2F11%2Fnothing.html%23me&submit=+Lookup+&html=0
>  
> Not surprisingly, uriburner got something useful: http://webid.fcns.eu/lookup.php?uri=http%3A%2F%2Fyorkporc.blogspot.com%2F2011%2F11%2Fnothing.html%23me&submit=+Lookup+&html=0
>  
> Now, the point is that, regardless of the fact that it doesn't validate per the schema, two tools do seem to be happy. One (uriburner) is probably doing lots of guessing and intuiting of the data, and the other, I'll guess, is simpler: it just parses the (dirty) HTML per the standard.
>  
> Now, I could go to my Microsoft CA and mint 1000 .p12 files whose certs have the relevant blogspot post URI, use each user's password to encrypt the file, post off a download URI to the user's registered email address, and also machine-post 1000 user profiles in RDFa to each of 1000 such entries on that one blog site (formally creating 1000 "foaf cards", each on its own URI, each with a # hash fragment, and the cert). But is that kind of dirty HTML intended to be acceptable and consumable by the typical WebID validation agent?
>  
> I'm hoping the answer is yes. I need it really simple (and what I did above satisfies that rule).
>  
> It really matters (to me) that I can use commodity web stuff, with sites powered by multi-vendor web SSO that works alongside Google Apps, Hotmail, etc. At some point, the keys in the WebID profile will have to cooperate with the more formal CA-managed certs that realtors maintain (so they can submit signed PDF documents to the US govt realty sites). But that can wait.
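
The template edit described in the post above (namespaces on the html element, RDFa 1.0 style) would look roughly like the following. This is a sketch only, not the actual Blogger template, and it illustrates the fragility at issue: any exporter or feed that serves the post body without this wrapper silently drops the prefix bindings.

```html
<!-- Illustrative only: not the actual Blogger template. RDFa 1.0 binds
     prefixes via xmlns:* attributes on the html element, so markup
     inside the post body depends on a wrapper that the blog platform,
     not the author, controls. -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:foaf="http://xmlns.com/foaf/0.1/"
      xmlns:cert="http://www.w3.org/ns/auth/cert#"
      version="XHTML+RDFa 1.0">
  <head><title>nothing</title></head>
  <body>
    <!-- post body: the foaf/cert markup pasted into the blog editor -->
    <div about="#me" typeof="foaf:Person">
      <span property="foaf:name">Example Person</span>
    </div>
  </body>
</html>
```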

Social Web Architect
http://bblfish.net/

Received on Friday, 25 November 2011 12:37:59 UTC