RE: Where is the INTERLINKED multimedia data?

Peter,

Thanks for your feedback - always good to hear from you!

>Someone has probably already implemented this.

Well, actually there are *many* ways of fragment identification
available - see also our position paper from the W3C Video on the Web
workshop [1].
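
To give a flavour of the design space, here are a few illustrative URI
forms (hypothetical example.org resources; none of these is a settled
standard - the paper [1] surveys the actual proposals):

  http://example.org/interview.ogv#t=95,120          temporal fragment (seconds)
  http://example.org/interview.ogv#xywh=0,0,320,240  spatial region
  http://example.org/interview.ogv?track=audio       track selection via query part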

It would be great if you could add your thoughts to the Wiki page - if
time allows ;)

Cheers,
	Michael

[1] http://www.w3.org/2007/08/video/positions/Troncy.pdf 

PS: YouTube allows something like this as well, see
http://youtube.com/watch?v=lxQ1b8KR-Qo - however, only other YouTube
URIs are allowed as targets :(
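
PPS: To make Peter's interview example below concrete, here is a
minimal sketch in Turtle of how one might state that a plain-text
transcript describes a given temporal fragment. The #t=start,end
syntax and the ma: vocabulary are hypothetical placeholders, not an
agreed standard:

  @prefix dcterms: <http://purl.org/dc/terms/> .
  @prefix ma:      <http://example.org/mediafrag#> .  # hypothetical vocabulary

  <http://example.org/interview.ogv#t=95,120>
      a                ma:VideoFragment ;             # the clip from 95s to 120s
      dcterms:isPartOf <http://example.org/interview.ogv> ;
      ma:transcript    "plain-text transcript of what is said"@en .

An assistive device (or a citation in an academic paper) could then
dereference the fragment URI for the frames and pick up the
ma:transcript literal as the textual representation of the same
interval.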

----------------------------------------------------------
 Michael Hausenblas, MSc.
 Institute of Information Systems & Information Management
 JOANNEUM RESEARCH Forschungsgesellschaft mbH
  
 http://www.joanneum.at/iis/
----------------------------------------------------------
 

>-----Original Message-----
>From: Peter Krantz [mailto:peter.krantz@gmail.com] 
>Sent: Monday, June 30, 2008 3:29 PM
>To: Hausenblas, Michael
>Cc: public-lod@w3.org; Semantic Web
>Subject: Re: Where is the INTERLINKED multimedia data?
>
>On Mon, Jun 30, 2008 at 1:47 PM, Hausenblas, Michael
><michael.hausenblas@joanneum.at> wrote:
>>
>> Dear LODers,
>>
>> I'd like to ask for comments regarding a topic which IMHO has so far
>> not been heavily addressed by our community: (fine-grained)
>> interlinking of multimedia data. At [1] I've put together some
>> initial thoughts. Please consider sharing your view and reply to
>> this mail and/or add to the Wiki page.
>>
>
>Interesting idea. I have been thinking about a similar topic for a
>while: how a common reference model for linear media (e.g. a video
>clip or a piece of music) could make things a lot easier with regard
>to accessibility.
>
>Let's say you have a video clip of an interview. If you could
>reference a specific part of it (with some sort of fragment identifier
>or time interval), you could make statements about what was being said
>in that part in plain text. So, for a specific section of the video
>there are two representations: (1) the video frames and (2) the
>transcript. This would of course make it a lot easier for makers of
>assistive devices to parse and present the correct information. As an
>added bonus, it would be easier to reference parts of video clips in
>other use cases (e.g. references in academic papers).
>
>Someone has probably already implemented this.
>
>Kind regards,
>
>Peter Krantz
>http://www.peterkrantz.com
>
