
Re: Fine-tuning CURIEs (reply #2 :-)

From: Mark Birbeck <mark.birbeck@formsPlayer.com>
Date: Fri, 14 Sep 2007 10:58:37 +0100
Message-ID: <a707f8300709140258j96a0d08m396e65e27bd9fe9f@mail.gmail.com>
To: "Ben Adida" <ben@adida.net>
Cc: "Shane McCarron" <shane@aptest.com>, public-rdf-in-xhtml-tf@w3.org

Ben/Shane,

You're kidding right?

;)

Why would I want to use GRDDL? Putting aside the problems of deploying
a GRDDL processor, I'd also have to be able to edit the source
document to add the profile--documents I may have no control over. I
want to do stuff client-side with nothing but RDFa and a set of
'action handlers'.

I think there is a misunderstanding about what it is that I want from
this, and I apologise because it seems to have got lost in the
discussion. It certainly has nothing to do with XHTML 2, Ben. I'm
hoping it will be clearer if I come at it from a different
direction--perhaps explaining the use-case in more detail.

I have an RDFa parser that creates a set of triples from a document,
and it also stores a pointer to the DOM node that generated the
triple. I then have a set of action handlers that can act on matches
of triples.
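
The shape of this is roughly the following (a minimal sketch; the
names and structure are my illustration here, not the actual parser
API): each triple carries a pointer to the DOM node that generated
it, and action handlers are matched against the triples.

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class Triple:
    subject: str
    predicate: str
    obj: str
    node: Any  # the DOM node that generated this triple

def run_handlers(triples: List[Triple],
                 handlers: List[Tuple[str, Callable[[Triple], None]]]) -> None:
    # Each handler is a (predicate, callback) pair; a callback
    # receives the matching triple and, via .node, the element
    # that produced it.
    for t in triples:
        for predicate, callback in handlers:
            if t.predicate == predicate:
                callback(t)
```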

It's pretty much the same idea as Operator, but the area I'm mostly
interested in is doing things to the node that I've stored, rather
than showing menus, etc. This is because what you end up with is
essentially a way to declaratively define behaviour and content in
your document, simply by adding metadata.

It means I can put an ISBN number in as a @resource value, and then
automatically query a service for more information about the book, and
then attach a tooltip with a picture of the book to the span that
contained the @resource value.
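
As a sketch of that ISBN use-case (where `lookup_book` and
`attach_tooltip` are hypothetical placeholders standing in for a real
book service and a tooltip widget, not any actual API):

```python
from collections import namedtuple

Triple = namedtuple("Triple", "subject predicate obj node")

def make_isbn_action(lookup_book, attach_tooltip):
    """Build an action handler: given a triple whose @resource value
    was an ISBN, fetch book details and decorate the source span."""
    def handler(triple):
        details = lookup_book(triple.obj)      # e.g. query a book service
        attach_tooltip(triple.node, details)   # tooltip on the <span>
    return handler
```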

Ok. So I'm basically saying that the context I am working in is using
the metadata in the document as a 'hook' onto which to attach further
functionality or content, in such a way that it can be done
client-side, not server-side. That's easy to do for 'new' data that
you add, but often you want to be able to do the same kind of
thing--add functionality and content--to documents with 'old-style'
information. The example we've been using is the OpenID one, but there
are an enormous number of possible values that you might want to get
into your triple-store so that you can perform an 'action' on them.

For example, at the top of my blog, Google has provided this:

  <link
    rel="service.post"
    type="application/atom+xml"
    title="XForms and Internet Applications - Atom"
    href="http://www.blogger.com/feeds/8029070/posts/default"
  />

  <link
    rel="EditURI"
    type="application/rsd+xml"
    title="RSD"
    href="http://www.blogger.com/rsd.g?blogID=8029070"
  />

In my parser, because I allow non-prefixed values, I would get this:

  <>
    <http://www.w3.org/1999/xhtmlEditURI> <http://www.blogg...> .
  <>
    <http://www.w3.org/1999/xhtmlservice.post> <http://www.blogg...> .
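
(For clarity, the expansion my parser does looks roughly like this --
my own illustration of the concatenation, not a spec-defined
algorithm: prefixed values resolve through the prefix map, and
non-prefixed values are concatenated onto the XHTML vocabulary URI,
which is exactly what yields <http://www.w3.org/1999/xhtmlEditURI>.)

```python
XHTML_VOCAB = "http://www.w3.org/1999/xhtml"

def expand_rel(value, prefixes):
    # Prefixed value: resolve through the declared prefix mappings.
    if ":" in value:
        prefix, local = value.split(":", 1)
        if prefix in prefixes:
            return prefixes[prefix] + local
    # Non-prefixed value: concatenate onto the XHTML vocabulary URI.
    return XHTML_VOCAB + value
```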

Now, the question is not whether this is 'correct' in some kind of
'purist' sense; I don't see any point in getting into that discussion,
since I'm sure we all hope that in the future 'EditURI' will become
'rsd:EditURI' and 'service.post' will become 'atom:post' (or
whatever). So the question is *only* whether it does any harm for my
parser to generate these triples, whilst Ivan's parser--for
example--does not.

I don't see how it does cause a problem, and I've been quite
consistent lately in trying to ensure that nothing we
write--whether in the spec or test cases--prevents a parser generating
more triples than we express in the specification. (I generate triples
for <img>, <title>, and more, for example.)

So that's my use-case; how we solve it, I don't really mind. At the
moment I hear that everyone wants to solve the 'next' and 'prev'
problem by 'ignoring' all non-prefixed values, i.e., redefining
compact URIs. I'm disagreeing because that stops software like mine
and Operator from having the option of doing stuff with these 'legacy'
triples.

But that doesn't mean I'm saying everyone *must* generate 'legacy'
triples--only that I want to be able to, and that another way around
this should be found.

One approach might be to define what happens to the well-known XHTML
values in some kind of 'conceptual' pre-processing step--they get
converted to xh:next, for example--as we have discussed before, but
then to say that the behaviour for those values that are not
'pre-processed' is simply 'undefined'; processors may or may not
process them...'check with your implementation'...that kind of thing.
Server-side processors can do what they want, and I can do what I
want.
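
That pre-processing step could be as simple as a lookup table
(a sketch only -- the xh: URI and the table contents here are
illustrative, not something the spec has settled):

```python
XH = "http://www.w3.org/1999/xhtml/vocab#"  # assumed xh: prefix URI

WELL_KNOWN = {
    "next": XH + "next",
    "prev": XH + "prev",
    "stylesheet": XH + "stylesheet",
}

def preprocess(rel_value):
    # Well-known XHTML values map to full URIs; anything else is
    # 'undefined' -- a processor may ignore it or generate a triple.
    return WELL_KNOWN.get(rel_value)
```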

I don't think we should discuss this on the call today, because it
could easily eat up our whole time. I suggest instead that we try to
wrap up the other issues, and pursue this one a little more on the
list.

Regards,

Mark

On 14/09/2007, Ben Adida <ben@adida.net> wrote:
>
> Shane McCarron wrote:
> > Works for me, fwiw.  I don't think that having a default prefix (note
> > that these are NOT namespaces.... don't get confused) is terribly useful
> > in the XHTML case.  If you need to deal with things like openid.server,
> > supply an appropriate GRDDL profile and transformation engine to "do the
> > necessary".  As to how this gets addressed in the CURIE spec - that's
> > for the CURIE spec.  Not relevant here IMHO.
>
> Exactly, the use of DC.creator is specified to require a GRDDL profile,
> which will hopefully include hGRDDL so you can do an in-place
> transformation to RDFa. OpenID should do the same, of course.
>
> -Ben
>
>


-- 
  Mark Birbeck, formsPlayer

  mark.birbeck@formsPlayer.com | +44 (0) 20 7689 9232
  http://www.formsPlayer.com | http://internet-apps.blogspot.com

  standards. innovation.
Received on Friday, 14 September 2007 09:58:46 UTC
