
RE: Where is the INTERLINKED multimedia data?

From: Hausenblas, Michael <michael.hausenblas@joanneum.at>
Date: Mon, 30 Jun 2008 15:39:01 +0200
Message-ID: <768DACDC356ED04EA1F1130F97D29852017A1587@RZJC2EX.jr1.local>
To: "Peter Krantz" <peter.krantz@gmail.com>
Cc: <public-lod@w3.org>, "Semantic Web" <semantic-web@w3.org>


Thanks for your feedback - always good to hear from you!

>Someone has probably already implemented this.

Well, actually there are *many* ways of fragment identification
available; see also our position paper from the W3C Video on the Web
workshop [1].

It would be great if you added your thoughts to the Wiki page - if
time allows ;)


[1] http://www.w3.org/2007/08/video/positions/Troncy.pdf 

PS: YouTube allows something like this as well, see
http://youtube.com/watch?v=lxQ1b8KR-Qo - however, only other YouTube
URIs are allowed as target :(

 Michael Hausenblas, MSc.
 Institute of Information Systems & Information Management
 JOANNEUM RESEARCH Forschungsgesellschaft mbH

>-----Original Message-----
>From: Peter Krantz [mailto:peter.krantz@gmail.com] 
>Sent: Monday, June 30, 2008 3:29 PM
>To: Hausenblas, Michael
>Cc: public-lod@w3.org; Semantic Web
>Subject: Re: Where is the INTERLINKED multimedia data?
>On Mon, Jun 30, 2008 at 1:47 PM, Hausenblas, Michael
><michael.hausenblas@joanneum.at> wrote:
>> Dear LODers,
>> I'd like to ask for comments regarding a topic which IMHO has so far
>> not been heavily addressed by our community: (fine-grained)
>> interlinking of multimedia data. At [1] I've put together some
>> initial thoughts. Please consider sharing your view and reply to
>> this mail and/or add to the page.
>Interesting idea. I have been thinking about a similar topic for a
>while; how a common reference model for linear information (e.g. a
>video clip or music) could make it a lot easier with regards to
>accessibility.
>Let's say you have a video clip of an interview. If you could
>reference a specific part of it (with some sort of fragment identifier
>or time interval) you could make statements about what was being said
>in that part in pure text. So, for a specific section of the video
>there are two representations (1) the video frames and (2) the
>transcript. This would of course make it a lot easier for makers of
>assistive devices to parse and present the correct information. As an
>added bonus, it would be easier to reference parts of video clips in
>other use cases (e.g. references in academic papers).
>Someone has probably already implemented this.
>Kind regards,
>Peter Krantz
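
The time-interval addressing Peter describes could be sketched roughly
as below. This is only an illustration, not an existing standard or
implementation: the `#t=start,end` fragment syntax, the example URI,
and the `parse_time_fragment` helper are all hypothetical, chosen to
show how one representation (a transcript snippet) might be attached
to a specific interval of another (the video frames).

```python
from urllib.parse import urldefrag

def parse_time_fragment(uri):
    """Split a media URI carrying a hypothetical '#t=start,end'
    time fragment into (base_uri, start_seconds, end_seconds)."""
    base, frag = urldefrag(uri)
    if not frag.startswith("t="):
        raise ValueError("no time fragment found in URI")
    start, end = (float(v) for v in frag[2:].split(","))
    return base, start, end

# Reference one part of an interview clip and make a statement
# (here, a transcript) about exactly that interval.
clip = "http://example.org/interview.ogv#t=95.0,110.5"
base, start, end = parse_time_fragment(clip)
transcript = {(base, start, end): "Interviewee: ..."}
```

An assistive device (or an academic citation) could then resolve the
base URI for the frames and the keyed statement for the text, giving
two representations of the same section.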
Received on Monday, 30 June 2008 13:43:26 UTC
