
Re: my token about the "3 or more layer" structure for the ontology

From: Pierre-Antoine Champin <pchampin@liris.cnrs.fr>
Date: Thu, 20 Nov 2008 12:10:32 +0000
Message-ID: <492553B8.9000100@liris.cnrs.fr>
To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
CC: public-media-annotation@w3.org

Hi Silvia,

I agree with you that the normalization of metadata (i.e. to repeat or
not to repeat) is an implementation issue.

However, I don't think that the de-normalization involved in client-side
processing should lose too much (we have yet to quantify how much is
"too much") of the *structure* of the metadata.

Basically, if you simply merge the metadata of the event with the
metadata of the videos into a flat structure, thus losing the identity
of the event in the process, the client may not be able to
discover/recognize other videos about the *same* event... which sounds
like a bad idea.
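To make the point concrete, here is a minimal sketch (all names and the record shape are hypothetical, not from any proposed schema): de-normalized per-video records that *keep* the event identifier still let a client re-group videos by event; a merge that flattened the event away would not.

```python
# Illustrative only: de-normalized records, one per video, each
# repeating the event metadata but preserving the event identifier.
from collections import defaultdict

videos = [
    {"video": "v1", "event": "e1", "event_title": "Spacewalk 2008"},
    {"video": "v2", "event": "e1", "event_title": "Spacewalk 2008"},
    {"video": "v3", "event": "e2", "event_title": "Launch 2008"},
]

def videos_about_same_event(records, video_id):
    """Find other videos about the same event, via the preserved event id."""
    by_event = defaultdict(list)
    for r in records:
        by_event[r["event"]].append(r["video"])
    event = next(r["event"] for r in records if r["video"] == video_id)
    return [v for v in by_event[event] if v != video_id]

# With the "event" key preserved, v2 is discoverable from v1; had the
# merge dropped it, this query would be impossible.
print(videos_about_same_event(videos, "v1"))  # ['v2']
```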


Silvia Pfeiffer wrote:
> Hi Ruben,
> It is always a matter of use cases.
> When we talk about management of collections, there will be overlap
> between the annotations of different files, which can be handled more
> efficiently (in a database sense: normalise your schema).
> However, if you receive an individual media resource, you want all of
> its annotations to be available with the media resource, i.e. you want
> an "intelligent" media object that can tell you things about itself.
> I don't see these things as separate. Let's take a real-world example.
> Let's assume I have a Web server with thousands of videos. They fall
> into categories, and within categories into events, where each video
> within an event has the same metadata about the event. On the server,
> I would store the metadata in a database. I would do normalisation of
> the data and just store the data for each event once, but have a
> relationship table for video-event-relationships. Now, a Web Browser
> requests one of the videos for playback (or a search engine comes
> along and asks about the metadata for a video). Of course, I go ahead
> and extract all related metadata about that video from the database
> and send it with the video (or in the case of the search engine:
> without the video). I further have two ways of sending the metadata: I
> can send it in a text file (which is probably all the search engine
> needs), or I can send it multiplexed into the video file, e.g. as a
> metadata header (e.g. MP3 has ID3 for this, Ogg has vorbiscomment,
> other file formats have different metadata headers).
> I don't think we need to overly concern ourselves with whether we
> normalise our data structure. This is an "implementation" issue. We
> should understand the general way in which metadata is being handled
> as in the example above and not create schemas that won't work in this
> and other scenarios. But we should focus on identifying which
> information is important to keep about a video or audio file.
> Cheers,
> Silvia.
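The server-side setup Silvia describes above can be sketched as follows (schema, table, and column names are hypothetical): event metadata is stored once, a relationship table links videos to events, and a join de-normalizes everything related to one video on request, before sending it as a text file or multiplexing it into the media file.

```python
# Sketch of the normalized store with a video-event relationship table.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE events (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE videos (id INTEGER PRIMARY KEY, filename TEXT);
    CREATE TABLE video_event (video_id INTEGER, event_id INTEGER);
    -- Event metadata stored once, shared by both videos.
    INSERT INTO events VALUES (1, 'Spacewalk 2008');
    INSERT INTO videos VALUES (10, 'a.ogg'), (11, 'b.ogg');
    INSERT INTO video_event VALUES (10, 1), (11, 1);
""")

# On a browser or search-engine request: extract all related metadata
# for one video by joining through the relationship table.
row = db.execute("""
    SELECT v.filename, e.title
    FROM videos v
    JOIN video_event ve ON ve.video_id = v.id
    JOIN events e ON e.id = ve.event_id
    WHERE v.id = ?
""", (10,)).fetchone()
print(row)  # ('a.ogg', 'Spacewalk 2008')
```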
> On Thu, Nov 20, 2008 at 12:01 AM, Ruben Tous (UPC) <rtous@ac.upc.edu> wrote:
>> Dear Véronique, Silvia, all,
>> I agree with both of you that the need for multiple description levels is
>> only related to a small subset of use cases, basically those related to
>> the management of groups of resources (e.g. digital asset management
>> systems, user media collections, etc.). Instead, we are (I guess) focused on
>> embedded annotations in individual resources.
>> However, I think that there are solutions which cover both cases, the simple
>> and the complex one. For instance, we could embed the following annotation
>> within an MPEG video:
>> <mawg:Video rdf:ID="http://example.org/video/01">
>> <mawg:title>astronaut loses tool bag during spacewalk</mawg:title>
>> <mawg:creator>John Smith</mawg:creator>
>> </mawg:Video>
>> <mawg:Resource rdf:ID="http://example.org/resource/01">
>> <mawg:format>FLV</mawg:format>
>> <mawg:filesize>21342342</mawg:filesize>
>> <mawg:duration>PT1004199059S</mawg:duration>
>> <mawg:videoID rdf:resource="http://example.org/video/01"/>
>> </mawg:Resource>
>> It is structured and it offers 2 abstraction levels, but it can be
>> serialized like a plain record. When appearing in isolated resources, the
>> high-level annotation ("Video" in this case) would be repeated. When
>> appearing within a collection's annotation the "Video" annotation would
>> appear just once.
>> It is not so different from XMP. Take a look at the following XMP example...
>> http://www.w3.org/2008/WebVideo/Annotations/wiki/images/8/8a/Xmp_example.xml
>> Best regards,
>> Ruben
>> ----- Original Message ----- From: <vmalaise@few.vu.nl>
>> To: <public-media-annotation@w3.org>
>> Sent: Wednesday, November 19, 2008 11:27 AM
>> Subject: my token about the "3 or more layer" structure for the ontology
>>> Hi everyone,
>>> I was at first very much in favor of an ontology that would distinguish
>>> different levels of media documents, like
>>> "work-manifestation-instance-item",
>>> but after reading this email from the list:
>>> http://lists.w3.org/Archives/Public/public-media-annotation/2008Nov/0076.html
>>> I agreed that we would probably only need a simple structure in our case,
>>> and that multi-level structures were meant for linking together different
>>> entities that have different status: if we aim at linking the descriptions
>>> of a single item between different vocabularies, we need to specify whether
>>> that single item is a work_in_XX_vocabulary, or more likely a
>>> manifestation_in_XX_vocabulary (see note 1 below), to give its "type"; and
>>> if people/use cases want to link this single item to other related works,
>>> manifestations, instances or items, they can use the framework defined in
>>> the schemas reviewed in
>>> http://www.w3.org/2008/WebVideo/Annotations/wiki/MultilevelDescriptionReview
>>> and use these properties to complete their description.
>>> So we would need a property like "has_type" to link a single description's
>>> identifier to the correct level of multilevel description schemes.
>>> I changed my mind and now think that only one "family" of use cases would
>>> need more levels, and that these levels are somehow context dependent (and
>>> could thus be considered as requirements for that family of use cases);
>>> but of course, if it turns out that more than one family of use cases
>>> needs this distinction, then we should consider going for a multilevel
>>> structure. Anyway, we would need to map informally the way these levels
>>> are expressed, in order to provide possible relevant "types" for the
>>> description of each single element.
>>> note 1: by specifying the different names of the relevant concepts/terms
>>> in schemes like VRA, XMP etc., we would informally define a semantic
>>> equivalence between the ways these schemas express these levels of
>>> description. It would look like:
>>> <metadataFile>
>>> <id>identifier</id>
>>> <hasType>xmpMM:InstanceID, vra:image, frbr:item</hasType>
>>> </metadataFile>
>>> I think that the table
>>> http://www.w3.org/2008/WebVideo/Annotations/wiki/FeaturesTable
>>> is a very valuable tool for people to express their ideas about it; thank
>>> you very much, Ruben, for designing it!
>>> Best regards,
>>> Véronique
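Véronique's informal "has_type" equivalence between description levels could be sketched as a simple lookup (illustrative only; the dict structure and function are hypothetical, the vocabulary terms are the ones from her note 1):

```python
# Illustrative only: one generic description level mapped to its
# counterpart term in each scheme mentioned in note 1.
LEVEL_EQUIVALENCES = {
    "item": {
        "xmpMM": "xmpMM:InstanceID",
        "vra": "vra:image",
        "frbr": "frbr:item",
    },
}

def has_type(level, scheme):
    """Return the scheme-specific term for a generic description level."""
    return LEVEL_EQUIVALENCES[level][scheme]

print(has_type("item", "vra"))  # vra:image
```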
Received on Thursday, 20 November 2008 12:11:15 UTC
