RE: Media annotations Working Group telephone conference 2008-11-18

Hi Joakim, all, 


> I believe that it would be nice if we could help application 
> developers to find out what type of metadata they could 
> obtain; if it is an image, video clip or "simple" text. 

I agree.

> Dublin Core has a classification scheme for that - DCMIType 
> (http://dublincore.org/documents/dcmi-type-vocabulary/), e.g. 
> Collection, Image, InteractiveResource, MovingImage, 
> Software, Sound, Text etc. Or we could use MIME types...

We have to be careful not to mix things; I think there are three different types of metadata involved here (see the sketch below):

- collections vs single content
- media type (image, sound, text, ...)
- file format, encoding (MIME type)
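
For example (just an illustrative sketch, not a proposal; all field names
below are made up), the three could sit side by side in one record per
resource, written here as Python dictionaries:

  # Illustrative only: hypothetical field names, one record per resource.
  photo = {
      "granularity": "single content",  # single content vs. collection
      "type": "Image",                  # media type, e.g. a DCMIType term
      "format": "image/jpeg",           # file format / encoding (MIME type)
  }

  album = {
      "granularity": "collection",      # DCMIType also defines "Collection"
      "type": "Image",                  # its members are images
      "format": None,                   # a collection has no single encoding
  }

An API could then expose each of the three independently, so that a MIME
type never gets confused with a DCMIType term or with the
collection/single-content distinction.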

Best regards,
Werner

> -----Original Message-----
> From: public-media-annotation-request@w3.org 
> [mailto:public-media-annotation-request@w3.org] On Behalf Of 
> Silvia Pfeiffer
> Sent: 18 November 2008 07:17
> To: Ruben Tous
> Cc: Felix Sasaki; public-media-annotation@w3.org
> Subject: Re: Media annotations Working Group telephone 
> conference 2008-11-18
> 
> 
> Hi Ruben, all,
> 
> I found that document very interesting.
> 
> I have a further concern that you may want to consider when looking at
> hierarchical description schemes or flat ones.
> 
> I believe the decision depends on what viewpoint you have 
> towards annotations.
> 
> Both for XMP and DC, the descriptions were written in flat structures
> because they have to be able to be embedded into a data stream and
> easily extractable. Name-value fields are much easier to handle than
> hierarchical structures and are thus easier to expose as an interface
> towards something or somebody else. They essentially say "I am this
> resource and this is what I know about myself".
> 
> The other specifications seem to be built as description schemes for
> collections of media resources. Since such descriptions necessarily
> stay outside the resources themselves, and since they tend to live in
> databases, hierarchical relationships are fairly common and a good way
> to avoid data duplication.
> 
> So, the main question that I take out of this is: do we want to create
> an ontology that can be multiplexed into a video stream (e.g. as a
> header file in ID3 and vorbiscomment fashion, or as time-aligned text
> in the data section like TimedText or subtitles)? Or do we want to
> create an ontology that can describe video stream collections?
> 
> I am mostly interested in the former, but I am not sure where the
> group is heading.
> 
> Regards,
> Silvia.
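
(A purely illustrative sketch of the two shapes Silvia contrasts above,
with made-up names and values: flat name-value pairs that are easy to
embed in and extract from a stream header, versus a hierarchical
description of a collection that tends to live in a database and avoids
duplicating shared fields.)

  # Illustrative only; every name and value here is hypothetical.
  flat = {
      "title": "Interview, part 1",
      "creator": "Example TV",
      "format": "video/ogg",
  }

  hierarchical = {
      "collection": "Interview series",
      "creator": "Example TV",          # stated once for all members
      "members": [
          {"title": "Interview, part 1", "format": "video/ogg"},
          {"title": "Interview, part 2", "format": "video/ogg"},
      ],
  }
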
> 
> On Tue, Nov 18, 2008 at 8:54 AM, Ruben Tous <rtous@ac.upc.edu> wrote:
> >
> > Hi all,
> >
> > as promised in the last telco, and with the help of Victor and Jaime, I
> > have created a page for the multi-level description review:
> >
> > http://www.w3.org/2008/WebVideo/Annotations/wiki/MultilevelDescriptionReview
> >
> > Best regards,
> >
> > Ruben
> >
> >
> >
> > ----- Original Message ----- From: "Felix Sasaki" <fsasaki@w3.org>
> > To: <public-media-annotation@w3.org>
> > Sent: Monday, November 17, 2008 1:41 PM
> > Subject: Media annotations Working Group telephone conference 2008-11-18
> >
> >
> >>
> >> Hi all,
> >>
> >> just as a reminder, we will have a call on Tuesday, 18 November, at 13:00 UTC.
> >>
> >> http://www.timeanddate.com/worldclock/fixedtime.html?month=11&day=13&year=2008&hour=13&min=00&sec=0&p1=0
> >> Agenda will follow in a few hours. We will mainly have a slot to discuss
> >> XMP issues, if there are some new ones, new use cases, the API /
> >> ontology draft proposal and a general time schedule.
> >>
> >> Felix

Received on Tuesday, 18 November 2008 12:26:28 UTC