
Re: Using the Ontology for Media Resources in the Semantic Web

From: Yves Raimond <yves.raimond@bbc.co.uk>
Date: Tue, 14 Sep 2010 12:04:05 +0100
Message-ID: <AANLkTinc8p00gO8eq1GBz7ELi7TUspYGzLZRR2yu_=hj@mail.gmail.com>
To: Raphaël Troncy <raphael.troncy@eurecom.fr>
Cc: Media Annotation <public-media-annotation@w3.org>
Hello Raphael!

2010/9/9 Raphaël Troncy <raphael.troncy@eurecom.fr>:
> Dear Ivan, SW coordination group,
>
> The W3C Media Annotations WG has discussed your review [1] and in particular
> how to use the Ontology for Media Resources in the Semantic Web. Please,
> note that you will receive from Thierry a detailed answer to all your
> questions in a separate email. Please, note also that an RDF/OWL version of
> the ontology will be included in an Appendix in the next publication of this
> document.
>
> I'm contacting you, now, to get your input and feedback on one of our issue:
> how to associate a complex annotation (typically represented as an RDF
> graph) to a media resource?
>
> To be a bit more precise, the Ontology for media resources allows:
>
> 1/ Free text annotation using the ma:description property, e.g.:
>
> @prefix ma: <http://www.w3.org/ns/ma-ont#> .
> <http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.ogv#t=12,21>
>   a ma:Video ;
>   ma:description "Raphael is explaining what a media fragment URI is and
> how it should be processed" .
>
> 2/ (More) Complex semantic annotation using the ma:relation property, e.g.:
>
> @prefix ma: <http://www.w3.org/ns/ma-ont#> .
> <http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.ogv#t=12,21>
>   a ma:Video ;
>   ma:relation [
>     ma:relID <http://www.example.com/annotation-1.ttl> ;
>     ma:relNature "Semantic Annotation"
>   ] ;
>   ma:description "The audience is applauding" .
>
> <http://www.example.com/annotation-1.ttl> being a Named Graph that would
> contain the statement:
> <http://dbpedia.org/resource/Audience> ex:humanActivity
> <http://dbpedia.org/resource/Applause> .
>
> Here, we point to a named graph that would contain a potentially complex
> annotation graph. The Media Annotations WG would like to explore the
> possibility of having *embedded* semantic annotations directly in the media
> description, and would welcome suggestions on how to do this. Could you
> provide us with any input?
>

Sorry to jump in on that particular example, but I don't think I've seen
it mentioned before. I can understand why such a modelling can be
appealing, but it is also potentially harmful. In your example, if I
loaded all the quad data into any triple store and asked whether the
audience is applauding (ASK WHERE
{ <http://dbpedia.org/resource/Audience> ex:humanActivity
<http://dbpedia.org/resource/Applause> }), I would get "true" as a
result, which can be quite confusing! It is only true in the context of
that media fragment. I am also a bit concerned that this relies on Named
Graph semantics that haven't been properly defined yet. Moreover, as you
mention, there is no way of serialising all this information in a single
document...

Did you consider event-based annotations (which are well understood and
already deployed, e.g. by the BBC) instead? If that approach was dropped
in favour of named graphs, is there a document or an email thread
detailing the reasons why?

Kind regards,
Yves

> Tracker, this is ACTION-271 [2]
> Best regards.
>
> Raphaël ... on behalf of the Media Annotations WG
>
> [1]
> http://lists.w3.org/Archives/Public/public-media-annotation/2010Jul/0017.html
> [2] http://www.w3.org/2008/WebVideo/Annotations/track/actions/271
>
> --
> Raphaël Troncy
> EURECOM, Multimedia Communications Department
> 2229, route des Crêtes, 06560 Sophia Antipolis, France.
> e-mail: raphael.troncy@eurecom.fr & raphael.troncy@gmail.com
> Tel: +33 (0)4 - 9300 8242
> Fax: +33 (0)4 - 9000 8200
> Web: http://www.eurecom.fr/~troncy/
>
>
Received on Tuesday, 14 September 2010 11:04:38 GMT
