
Re: RDFa and Web Directions North 2009

From: Kjetil Kjernsmo <kjetil@kjernsmo.net>
Date: Fri, 13 Feb 2009 23:20:07 +0100
To: Ian Hickson <ian@hixie.ch>
Cc: public-rdfa@w3.org, RDFa mailing list <public-rdf-in-xhtml-tf@w3.org>
Message-id: <200902132320.12599.kjetil@kjernsmo.net>

On Friday 13 February 2009, Ian Hickson wrote:
> To be blunt, the existence of something
> using a technology is not an indication that the technology was a
> good solution. It can, however, lead to very useful experience: do
> any of the case studies listed above have frank evaluations of
> whether Semantic Web technologies have been successful? Most
> interesting would be reports from failed experiment -- the Semantic
> Web, like any technology, is not going to be right for everything; to
> what has it been found to _not_ be well suited?

Certainly. We have our days of intense agony too, but so far they have 
been on two fronts: first, the lack of integration with common web 
frameworks, which makes it expensive to develop relatively simple Web 
solutions; and secondly, the performance of backend databases.

We have not seen a lot of problems with it as a paradigm thus far. On 
the contrary, it is very liberating to work with such a flexible data 
model when you are used to the straitjacket of XML and relational 
databases.


> It should be noted that there are pretty simple solutions to both of
> the above, though. For example, for case 1 Amazon could just say
> "anything with class=price indicates the price for the item described
> by the nearest ancestor block with class=item" or some such,

Yeah, they could have done that at any point in the past decade, and 
they didn't. And even if they had, it wouldn't have helped much, as 
you'd need a programmer for every little task that involved getting at 
the data.
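
To make that objection concrete, here is a minimal sketch (Python standard 
library only; the HTML snippet, class names, and extraction rule are all 
invented for illustration) of the kind of one-off scraper such an ad-hoc 
class convention would require. The point is that every site needs its own 
such program:

```python
# A toy scraper for a hypothetical "class=price inside class=item"
# convention. Illustrative only: real pages would need site-specific rules.
from html.parser import HTMLParser

class ItemPriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_item = 0       # nesting depth inside class="item" blocks
        self.capture = None    # which field we are collecting text for
        self.items = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "item" in classes:
            self.in_item += 1
            self.items.append({})
        elif self.in_item and "price" in classes:
            self.capture = "price"
        elif self.in_item and "name" in classes:
            self.capture = "name"

    def handle_data(self, data):
        if self.capture and self.items:
            self.items[-1][self.capture] = data.strip()
            self.capture = None

html = """
<div class="item"><span class="name">Widget</span>
  <span class="price">$9.99</span></div>
"""
p = ItemPriceParser()
p.feed(html)
print(p.items)   # [{'name': 'Widget', 'price': '$9.99'}]
```

And that code knows nothing about any other shop's markup, which is 
exactly the problem.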

> or they 
> could expose the information in a much simpler way by having a
> "&format=json" mode for their pages that is purely machine-readable
> data. 

Same thing.

> Or they could do what they in fact do do, which is expose this 
> using a dedicated API:
>
> http://docs.amazonwebservices.com/AWSEcommerceService/2006-05-17/ApiReference/ItemLookupOperation.html

Those are exactly the kinds of things we are trying to avoid. They are 
much too costly to work with. They are the reason people go w00t when 
they see a two-source mash-up, while I remain unimpressed.

> For example, merging MP3/ID3 data (dedicated vocabulary with
> dedicated format embedded in MP3 files) with an iTunes library data
> dump (dedicated vocabulary with XML format) would not be easier if
> they were both expressed as RDF using different vocabularies. If
> anything, frankly, the problem would get harder.

How would it be harder? Actually, we just did stuff like that. We had 
ID3, Ogg Vorbis comments, EXIF data (mostly useless), and two different 
XML dumps of two different big media archives with hundreds of 
thousands of records. Pretty straightforward modelling, a bit of 
small-o ontology, and there you go. The easy part of that job was 
resolving the vocabulary differences; getting the data out was the hard 
part.
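
For what it's worth, that vocabulary-resolution step can be sketched in a 
few lines. This is a toy illustration in Python, not our actual pipeline: 
the field names, records, and mapping table are invented, but the pattern 
is the same, map each source's terms onto one shared vocabulary, then 
query the merged triples uniformly:

```python
# Two sources with different field names for the same concepts
# (field names and records are invented for this sketch).
id3_records = [{"TPE1": "Miles Davis", "TIT2": "So What"}]
archive_records = [{"artist_name": "Miles Davis",
                    "track_title": "Freddie Freeloader"}]

# The "small-o ontology": each source's fields mapped to shared terms.
MAPPINGS = {
    "id3": {"TPE1": "creator", "TIT2": "title"},
    "archive": {"artist_name": "creator", "track_title": "title"},
}

def to_triples(source, records):
    """Turn one source's records into (subject, predicate, object) triples."""
    triples = set()
    for i, rec in enumerate(records):
        subject = f"{source}/record/{i}"
        for field, value in rec.items():
            predicate = MAPPINGS[source].get(field)
            if predicate:
                triples.add((subject, predicate, value))
    return triples

graph = to_triples("id3", id3_records) | to_triples("archive", archive_records)

# One query now spans both sources.
titles_by_miles = sorted(
    o for s, p, o in graph
    if p == "title" and (s, "creator", "Miles Davis") in graph
)
print(titles_by_miles)   # ['Freddie Freeloader', 'So What']
```

The mapping table is the whole integration cost; everything downstream of 
it is generic.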

Do you have experiences to the contrary? 

Kjetil
-- 
Kjetil Kjernsmo
Programmer / Astrophysicist / Ski-orienteer / Orienteer / Mountaineer
kjetil@kjernsmo.net
Homepage: http://www.kjetil.kjernsmo.net/     OpenPGP KeyID: 6A6A0BBC
Received on Friday, 13 February 2009 22:20:44 GMT
