
Re: Response to your LC Comment -2389 on Media Ontology spec

From: Thomas Steiner <tomac@google.com>
Date: Mon, 4 Oct 2010 16:59:51 +0200
Message-ID: <AANLkTikjTinO24OQ3gMWb2VL+kO+VakRWBdpn0KbXKhU@mail.gmail.com>
To: tmichel@w3.org
Cc: "public-media-annotation@w3.org" <public-media-annotation@w3.org>
Hi Thierry, hi Working Group members,

Thank you for your detailed response! Please find my comments below.

> 1) Subtitles
> Concerning external subtitles, using ma:relation is the correct approach as
> in your example. The identifier attribute contains the URL of the subtitle
> file, and the relation type qualifies it as a subtitle relation. The value
> should be a URI, but could also be a string. It is recommended to use a
> controlled vocabulary for the type of the relation.
This is exactly what I was hoping for. Agreed.
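To make this concrete, here is a minimal sketch in Python of the approach described above: an ma:relation annotation whose identifier holds the URL of the subtitle file and whose type qualifies the relation. The URLs and the helper name are hypothetical, not part of the spec.

```python
# Sketch: an ma:relation-style annotation linking a media resource to an
# external subtitle file. The URLs and the helper name are hypothetical;
# only the identifier/type structure follows the approach described above.

def build_relation(target_uri: str, relation_type: str) -> dict:
    """Return an annotation dict: the identifier holds the URL of the
    related resource, and the type qualifies the relation (ideally a
    term from a controlled vocabulary)."""
    if not target_uri.startswith(("http://", "https://")):
        raise ValueError("identifier should be a URI")
    return {"identifier": target_uri, "type": relation_type}

subtitle_relation = build_relation(
    "http://example.org/movie_en.ttml",   # external TTML subtitle file
    "http://example.org/vocab#subtitle",  # relation type from a vocabulary
)
print(subtitle_relation)
```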

> Embedding of subtitles is not a use case that we considered, however it is
> possible. The mechanism we use to specify timed metadata is to specify
> fragments identified by Media Fragment URIs [1] and then describe
> annotations of these fragments.
Same here. Agreed.

> - Link to external subtitle file using ma:fragment, with type subtitle and a
> Timed Text Markup Language (TTML) [2] or WebSRT [3] file as target.
Assuming ma:fragment here is meant to be ma:relation. Agreed.

> - Subtitles can be embedded in a media file, in which case they can be
> described as a track media fragment using ma:fragment and Media Fragment
> URIs [1].
This sounds like a nice and flexible way. Agreed.
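As an illustration of the track-fragment approach, here is a small Python sketch (the URL is hypothetical) that splits a Media Fragment URI [1] into the base resource and its fragment dimensions, such as a temporal range and a track name:

```python
from urllib.parse import urldefrag, parse_qs

# Sketch: decomposing a Media Fragment URI into its dimensions.
# The URL is hypothetical; the t= and track= dimensions follow the
# Media Fragments URI syntax referenced as [1] above.
uri = "http://example.com/video.ogv#t=10,20&track=subtitle"

base, fragment = urldefrag(uri)  # split off the #fragment part
dimensions = parse_qs(fragment)  # e.g. {'t': ['10,20'], 'track': ['subtitle']}

print(base)        # the media resource itself
print(dimensions)  # the fragment dimensions to be annotated
```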

> - Subtitles could be embedded by using ma:title with a type qualifier for
> subtitle. A list of time media fragments is defined and each fragment is
> annotated using ma:title.
This, while possible, sounds like an overloading of what ma:title was
designed for. Personally, I would not go this way.

> The last option is a way of embedding subtitles, although it is not a use
> case we considered. We expect that in most cases a dedicated format such as
> TTML or WebSRT will be used for the subtitles and referenced.
Agreed. TTML and WebSRT are both very good standards.

> 2) Semantic annotation
> As described above, time-based annotations are possible. Currently, two
> cases are covered by the spec:
> - use ma:description for a textual description of the media resource (or a
> fragment)
Possible, but not machine-readable/understandable by default.

> - use ma:relation to link to a RDF file or named graph containing the
> annotation for the media resource (or fragment)
Good solution. Especially when combined with ma:fragment, this offers the
expressive power and degree of freedom that I need. Agreed.
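A hedged sketch of that combination: the media fragment's URI is annotated with an ma:relation whose identifier points at an external RDF file holding the actual triples. All URLs are hypothetical, and the triples are modeled as plain tuples purely for illustration.

```python
# Sketch: linking a media fragment to an external RDF annotation via
# ma:relation, modeled as simple (subject, property, value) tuples.
# All URLs are hypothetical.
fragment_uri = "http://example.com/video.ogv#t=10,20"
annotation_uri = "http://example.org/annotations/scene1.rdf"

triples = [
    (fragment_uri, "ma:relation", annotation_uri),       # machine-readable
    (fragment_uri, "ma:description", "Opening scene"),   # plain-text fallback
]

for subject, prop, value in triples:
    print(subject, prop, value)
```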

> There is currently no solution for embedding a set of triples into one of
> the ma:* properties. We understand that might be useful and have started
> discussion with the Semantic Web Coordination Group about a solution for
> this problem (see thread starting at [3]). The summary of the discussion is:
> Named graphs could be a solution to this issue, but there is no standard
> syntax for expressing them, to which our specification could refer. Such a
> syntax might find its way into RDF 2.0. As no other applicable solution
> emerged in the discussion, we decided to exclude the embedding of triples
> into ma:* elements until a standard syntax for named graphs is available.
Let's continue the discussion of [3] (the 2nd [3] ;-) below the first
[3]) at [3].


Thomas Steiner, Research Scientist, Google Inc.
http://blog.tomayac.com, http://twitter.com/tomayac
Received on Monday, 4 October 2010 15:00:45 UTC
