
Re: Where is the INTERLINKED multimedia data?

From: Peter Krantz <peter.krantz@gmail.com>
Date: Mon, 30 Jun 2008 15:29:21 +0200
Message-ID: <7b9ad66d0806300629k2dba7425odf5979933007bb0a@mail.gmail.com>
To: "Hausenblas, Michael" <michael.hausenblas@joanneum.at>
Cc: public-lod@w3.org, "Semantic Web" <semantic-web@w3.org>

On Mon, Jun 30, 2008 at 1:47 PM, Hausenblas, Michael
<michael.hausenblas@joanneum.at> wrote:
>
> Dear LODers,
>
> I'd like to ask for comments regarding a topic which IMHO has so far not
> been heavily addressed by our community: (fine-grained) interlinking of
> multimedia data. At [1] I've put together some initial thoughts. Please
> consider sharing your view and reply to this mail and/or add to the Wiki
> page.
>

Interesting idea. I have been thinking about a similar topic for a
while: how a common reference model for linear media (e.g. a video
clip or a piece of music) could make things much easier with regard to
accessibility.

Let's say you have a video clip of an interview. If you could
reference a specific part of it (with some sort of fragment identifier
or time interval), you could make statements in plain text about what
was being said in that part. So, for a specific section of the video
there would be two representations: (1) the video frames and (2) the
transcript. This would of course make it a lot easier for makers of
assistive devices to parse and present the correct information. As an
added bonus, it would be easier to reference parts of video clips in
other use cases (e.g. references in academic papers).
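To sketch what I mean, here is a small Python illustration. The "#t=start,end" fragment syntax follows the temporal dimension of the W3C Media Fragments work; the transcript predicate and the example.org URIs are purely hypothetical, chosen for illustration:

```python
# Sketch: identify a temporal segment of a video with a Media
# Fragments-style URI ("#t=start,end") and attach a plain-text
# transcript statement to that segment.

def temporal_fragment(video_uri: str, start: float, end: float) -> str:
    """Return a URI identifying the [start, end) time interval of the video."""
    return f"{video_uri}#t={start:g},{end:g}"

def transcript_triple(segment_uri: str, text: str) -> str:
    """Serialize one statement about the segment as an N-Triples line.
    The predicate here is a made-up example, not an existing vocabulary."""
    escaped = text.replace('"', '\\"')
    return f'<{segment_uri}> <http://example.org/vocab#transcript> "{escaped}" .'

segment = temporal_fragment("http://example.org/interview.ogv", 95, 110)
print(segment)
print(transcript_triple(segment, "We launched the project in 2006."))
```

An assistive device (or a citing paper) could then dereference the segment URI for the video frames, while a text-only client reads the transcript statement instead.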

Someone has probably already implemented this.

Kind regards,

Peter Krantz
http://www.peterkrantz.com
Received on Monday, 30 June 2008 13:29:58 UTC
