- From: Ivan Herman <ivan@w3.org>
- Date: Mon, 5 Sep 2011 10:32:32 +0200
- To: Jeni Tennison <jeni@jenitennison.com>
- Cc: public-rdfa-wg WG <public-rdfa-wg@w3.org>
Jeni, coming back to the original issue re @itemref.

I think that a more-or-less verbatim adoption of @itemref is a problem from the point of view of _both_ author and implementation complexity.

The implementation complexity is major. Not only does it add an additional layer of issues to the core processing algorithm but, as Manu has said several times, it creates really big problems if you do not have a full DOM tree to work with, i.e., if you work with a stream-based implementation. I do not have experience with that type of approach, so he can give more details on that, but I do have a 'feeling' that we really have a problem here.

But I also have an issue from the point of view of authors. The way I understand it, microdata is pretty much tree-oriented in its mental model. This is not a critique, just a statement of fact. @itemref is an attempt to overcome some of the limitations of that tree view, and it therefore has a valid justification there. RDFa, however, embraces graphs from the start, so there is no comparable justification for introducing it: it would add to the mental model of RDFa for no really good reason.

Gregg has shown that either with the current setup (using @rev) or with a small modification (multiple subjects allowed on @about) the use cases we know about can be reasonably covered. My preference, if we want to go down that route, would be to allow multiple subjects on @about, although I do realize that this would make an RDFa->microdata mapping more difficult. But, strictly from RDFa's point of view, I see that as a simpler and, from the author's perspective, perfectly clear extension of our current model which, as the microdata examples show, may serve a real purpose.
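Just to make that alternative concrete, here is the kind of markup I have in mind. This is a hypothetical sketch of my own: multiple subjects on @about do not exist in RDFa today, the space-separated syntax is only one possible choice, and the URIs are made up for illustration (Gregg's actual examples may well look different; the usual foaf/xhv prefixes are assumed). Take the typical @itemref use case of one shared block, say a licence, that has to apply to several items:

    <div about="#photo1" typeof="foaf:Image">...</div>
    <div about="#photo2" typeof="foaf:Image">...</div>

    <!-- hypothetical extension: space-separated subjects on @about -->
    <div about="#photo1 #photo2" rel="xhv:license"
         resource="http://creativecommons.org/licenses/by/3.0/">
      Both photos are available under a CC-BY licence.
    </div>

which would simply yield the two triples

    <#photo1> xhv:license <http://creativecommons.org/licenses/by/3.0/> .
    <#photo2> xhv:license <http://creativecommons.org/licenses/by/3.0/> .

With today's RDFa one can already get the same triples by turning the statement around with @rev on the shared block (@about on the licence URI, rev="xhv:license", and the photos as subjects underneath), but I find the multi-subject form much easier to explain to an author.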
B.t.w., everything I say assumes that @itemref, or its copy in RDFa, would _avoid_ the context dependency issue that Stéphane described. That context dependency would really make authors' heads spin, I am afraid, so we would have to introduce restrictions on what can be @itemref-d, which is, by itself, an extra complication...

Ivan

On Sep 4, 2011, at 21:38, Jeni Tennison wrote:

> Hi Ivan,
>
> On 3 Sep 2011, at 08:27, Ivan Herman wrote:
>> One of the remarks we have heard from opponents of RDFa is that the processing steps, at the core of RDFa, are fairly complicated. And I have to acknowledge that to be true: I have implemented RDFa, I have a microdata->RDF converter (though modified vis-à-vis Hixie's rules) and I even have a rudimentary JSON-LD->RDF converter, too. And it is true that the RDFa converter is the most complex one. Not really good.
>>
>> Of course, some of the complexity comes with the nature of the beast. RDFa has more general concerns than, eg, microdata (datatypes, emphasis on a graph rather than a tree, general management of URIs, etc), and that has a price. Some of the reasons for the complexity might be our own stupidity and we may look at cutting back. However, I am afraid of adding new complexities to the processing steps right now, and both the @itemref feature and the more complete list management feature do just that. It honestly bothers me...
>
> I absolutely understand your concern about RDFa's complexity. However, I think it's really important to distinguish between *implementer* complexity and *author* complexity. It's possible to have languages that are very complex to implement but are pretty easy to author (eg HTML). Generally, given that there are many times more authors than implementers, I think it's best to aim for reducing author complexity even when it means increasing implementer complexity. I think there's scope to do that within RDFa.
>
> Lists in RDFa are an example. There are good reasons for authors to want to use lists, where the ordering of items is not dependent on any property of the items: lists of authors of papers, to-do lists, favourites and so on. It is incredibly complex for authors to express these lists in RDFa at the moment. The changes that we are discussing increase the complexity of *implementing* RDFa, for sure, but they significantly reduce author complexity.
>
> FWIW, I think that in RDFa's case author and implementer complexity have become intertwined. In some ways, RDFa is like RDF/XML: there's a straightforward core way of expressing RDF, and then lots of shorthands that you can use to make the markup more concise. Those shorthands mean it's hard (for me, I know, and, based on my experience trying to teach it, for others too), once RDFa gets over a certain complexity, to predict what a given piece of markup is going to produce without walking through the spec's processing steps. This means that as an author I basically have to do what an implementation does, but in my head, which is obviously harder for me than for a machine.
>
> I think that could probably be addressed by describing the straightforward core and encouraging publishers to use just that (being conservative in what they produce). For example, the core might consistently use about/typeof for new subjects, use elements to describe only one value, not have more than one child of a hanging rel, and so on.
>
> From the implementer side, I don't think things are as bad. The processing algorithm is clearly spelled out, so it's a simple matter of writing code to match. But there are aspects where RDFa re-describes the method of working something out when an implementer might be able to rely on existing mechanisms. For example, I think the language of an element should be apparent from the XML infoset or the lang property in the HTML DOM rather than being part of RDFa processing; that it isn't means it's not clear to me as an implementer whether I can use those built-in methods where available or whether RDFa introduces some strange twist that means I can't.
>
> Anyway, I'm not denying that adding support for lists and an itemref equivalent is a trade-off, but I'd encourage you to consider the authoring side of the complexity argument, not just the steps that are added to the implementer's algorithm, as you decide how to proceed.
>
> Cheers,
>
> Jeni
> --
> Jeni Tennison
> http://www.jenitennison.com

----
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf
Received on Monday, 5 September 2011 08:32:49 UTC