Re: Interoperability Framework and Vocabulary

Dear all,

Tobias Bürger wrote:
>> (...) Indeed, the framework should/could *also* cover some other simple
>> features. The intention of my previous email was to encourage some
>> discussion on what else we should consider in order to improve
>> interoperability in this direction.
>>>> Given the use cases that we have, I wonder how hard it is to 
>>>> identify such
>>>> equivalent relations.
>>> Oscar did that to some extent with the music ontologies. Is that what
>>> you have in mind?
>> It would be nice if Oscar could kindly share with us his opinions w.r.t.
>> the music use case. Authors of other use cases are very welcome to join
>> the discussions too :-)
> I don't think that the goal should be to come up with the least 
> common denominator of the vocabularies relevant for one use case. 
> This could lead to the use of Dublin Core, as the vocabularies are very 
> heterogeneous. Thus it would really be good to hear Oscar's opinion on 
> the music ontologies. Furthermore, it is not easy to come up with 
> sameAs or equivalence statements between different properties of 
> different vocabularies. The problem we would face here is that of 
> schema integration, which has been researched for many years now, 
> especially in the database community, later for XML Schemas, and for 
> a couple of years now also for ontologies (ontology alignment/mapping).
Well, regarding music ontologies and mappings, first of all, the work we 
did (back in 2005?) was to map some music ontologies we found at that 
time (i.e., the Kanzaki OWL, Foafing OWL, and MusicBrainz RDFS 
ontologies) to the *huge* MPEG-7/OWL ontology [1].

Now, with regard to the Music Use Case and interoperability issues, (my) 
idea is to be quite practical. I mean, now there is the Music Ontology 
(MO) [2], inspired by the MusicBrainz metadata, which seems to cope with 
most of the issues presented in the music use case.
Given that, the interoperability problem could be solved by "translating" 
the metadata from ID3 tags, OGG Vorbis tags, and even the iTunes Library 
format to the MO.
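To make the ID3->MO idea concrete, here is a minimal sketch (in Python) of such a translation. It assumes the tag values have already been read out of the file with some ID3 parser; the frame-to-predicate mapping and the example URI are illustrative only, not an authoritative MO mapping.

```python
# Sketch of an ID3 -> Music Ontology (MO) translation. The input is a
# dict of already-extracted ID3v2 frames; the predicate choices below
# (dc:title, foaf:maker, mo:album) are illustrative assumptions, not a
# definitive mapping.

MO = "http://purl.org/ontology/mo/"
DC = "http://purl.org/dc/elements/1.1/"
FOAF = "http://xmlns.com/foaf/0.1/"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# Hypothetical mapping from ID3v2 frame IDs to RDF predicates.
FRAME_MAP = {
    "TIT2": DC + "title",    # track title
    "TPE1": FOAF + "maker",  # lead artist
    "TALB": MO + "album",    # illustrative; MO models releases differently
}

def id3_to_ntriples(track_uri, frames):
    """Emit N-Triples lines for a dict of ID3 frame values."""
    lines = ["<%s> <%s> <%s> ." % (track_uri, RDF + "type", MO + "Track")]
    for frame, value in frames.items():
        pred = FRAME_MAP.get(frame)
        if pred:
            lines.append('<%s> <%s> "%s" .' % (track_uri, pred, value))
    return lines

if __name__ == "__main__":
    for t in id3_to_ntriples("http://example.org/track/1",
                             {"TIT2": "Some Song", "TPE1": "Some Artist"}):
        print(t)
```

The OGG Vorbis and iTunes cases would follow the same shape, just with a different frame/field map on the input side.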
Anyway, the main open issue here is how to deal with content-based 
metadata extracted automatically from the audio file (i.e., bpm, key and 
mode, danceability, sections, "intensity", etc.). AFAIK, there are the 
Yves Raimond ontology [3] and the Foafing one [4], which could be useful 
in this sense.

I guess it's clear that my interest is not to do research on music 
ontology alignment/mapping (well, any help is very welcome here!), but 
to do something more, err... practical (useful?). That is, for instance, 
once the ID3->MO, OGG->MO and iTMS->MO translations are done, to create 
a nice set of instances based on the MO, so people can start doing/testing 
fun SPARQL queries (such as [5]). Does this sound reasonable?
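For illustration, here is a toy stand-in (in Python, over an in-memory list of triples) for the kind of query one could run over such a set of MO instances. The triples, URIs, and the pattern itself are invented for the example; they do not reproduce the actual query at [5].

```python
# A toy basic-graph-pattern match over hand-made MO-style instance
# triples, standing in for a SPARQL query such as:
#   SELECT ?track WHERE { ?track foaf:maker ex:artistX }
# All data below is invented for illustration.

FOAF_MAKER = "http://xmlns.com/foaf/0.1/maker"
DC_TITLE = "http://purl.org/dc/elements/1.1/title"

triples = [
    ("ex:track1", DC_TITLE, "Song A"),
    ("ex:track1", FOAF_MAKER, "ex:artistX"),
    ("ex:track2", DC_TITLE, "Song B"),
    ("ex:track2", FOAF_MAKER, "ex:artistY"),
]

def match(triples, s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Bind ?track: which tracks have ex:artistX as their maker?
tracks = [s for s, _, _ in match(triples, p=FOAF_MAKER, o="ex:artistX")]
print(tracks)  # ['ex:track1']
```

A real setup would of course load the ID3/OGG/iTMS-derived instances into an RDF store and use an actual SPARQL engine, but the query shape is the same.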

I hope that I've answered some of the questions from Raphael, Jeff and 
Tobias! :-)

Regards,

Oscar Celma.
[1] http://rhizomik.net/ontologies/mpeg7ontos/
[2] http://pingthesemanticweb.com/ontology/mo
[3] http://moustaki.xtr3m.org/c4dm/music.owl
[4] http://foafing-the-music.iua.upf.edu/music-ontology/foafing-ontology-0.3.n3
[5] http://pingthesemanticweb.com/ontology/mo/#sec-sparqlexample

Received on Thursday, 22 February 2007 19:17:07 UTC