Re: [MMSEM] Image Annotation on the SW review

Dear Stamatia,

> Dear all, with respect to ACTION: Ovidio, Stamatia to review
> http://www.w3.org/2001/sw/BestPractices/MM/image_annotation.html
> before the mid-October teleconf [recorded in
> http://www.w3.org/2006/09/14-mmsem-minutes.html#action02], you can
> find my comments below.

Thanks for your review. Regarding your comments:

> Thus, in this context, a first remark would be that the added value
> brought by using SW technologies should be highlighted more, notably
> by better linking the content of the document itself. For example, in
> the introduction, the principal issues related to image annotation
> (and actually resource annotation in general) are very accurately
> presented, and then the notion of explicit and machine-processable
> semantics introduced by the SW is considered. Of course, it's quite
> obvious that explicit semantics entail certain advantages in all
> types (content, media, ...) of annotations, but it would be nice to
> show what the SW functionalities specifically add. The BigImage
> example given in OWL is nice, but consider for instance an example
> where OWL is used to represent the semantics of an image
> decomposition into segments, some "depicts" property between segments
> and concepts as well as between the image and concepts, and, given
> metadata stating that certain segments depict parts of a human body,
> it is then shown that the SW makes it possible to infer that this
> specific image depicts a person. This example might be too
> complicated, but the idea is to be careful not just to show that the
> SW can be used to annotate images, which can be done with any
> XML-based (or other) standard or vocabulary, but that there is some
> added value.

The presentation of the SW and image annotation issues has been
improved. The document now gives a concrete example of why having
resources from ontologies as values for annotation properties is better
than plain strings. The example highlights that Ganesh is an Indian
Elephant, part of a whole family in the species tree. Your proposed
example would also fit, but might be too complex to be pedagogical
enough.
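
For readers following the thread, here is a minimal Turtle sketch of
that difference. The ex: namespace, the photo URI and the "depicts"
property are invented for illustration; only Ganesh and the Indian
Elephant species hierarchy come from the document's example, and this
is not the exact OWL used there:

   @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
   @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
   @prefix dc:   <http://purl.org/dc/elements/1.1/> .
   @prefix ex:   <http://example.org/zoo#> .   # hypothetical namespace

   # Plain-string annotation: the literal "elephant" is opaque to a machine.
   ex:photo123  dc:subject  "elephant" .

   # Ontology-based annotation: the value is a resource that is itself
   # described, so the species hierarchy is available to a reasoner.
   ex:photo123        ex:depicts       ex:Ganesh .
   ex:Ganesh          rdf:type         ex:IndianElephant .
   ex:IndianElephant  rdfs:subClassOf  ex:Elephant .
   ex:Elephant        rdfs:subClassOf  ex:Mammal .

With the second form, a subsumption reasoner can return the photo for a
query about elephants or mammals in general, which the plain literal
cannot support.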

>  The selection of the use cases is indeed very good; they cover
> differing requirements in annotation. I'm not that convinced that the
> four-category classification is the most appropriate, as there are
> several non-orthogonal dimensions along which this categorization
> could take place, such as professional versus personal, the type of
> annotations (global, region-based, low-level, descriptive,
> media-related and so on), the topics depicted, etc.

A disclaimer has been added saying that this categorization is just one
of many that could be made.

> Section 3 briefly mentions vocabularies for image annotation, and
> more specifically MPEG-7 and VRA. Since the distinction among the
> different types of vocabularies needed for image annotation has been
> clearly established earlier in the document, this section should
> either include a brief overview of the different categories of
> vocabularies (e.g., the structural ontologies by aceMedia and
> MindSwap could also be such vocabularies, TV-Anytime, etc.) or be
> named differently. I would suggest categorizing them by the
> functionality covered, mentioning the most representative ones and
> pointing to the Vocabularies Overview document for further details.
> And as a general comment, care should be taken with how the terms
> vocabulary, ontology and metadata standard are used throughout the
> document. The last paragraph, on the need in image annotation to
> sometimes refer to specific parts rather than the whole image, should
> IMHO be moved to the image annotation issues section (1.1).

Section 3 now refers more systematically to the published "Multimedia
Vocabularies on the Semantic Web" document. It overviews only
vocabularies that are standards, which is why it does not describe
"other" vocabularies, for example those developed in projects. However,
a complete description of them is available at
http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies/ and the reader
is encouraged to click :-)

> Section 4 should be analogous to Section 3, but for annotation
> tools. Not all aspects of existing tools' functionalities need to be
> considered, and not in great detail either.

Right! A systematic link has been made to
http://www.w3.org/2005/Incubator/mmsem/wiki/Tools_and_Resources

> Some typos (complementing George's):

All typos have been corrected (apart from the lyricism, which I kept :-).
The latest version of the document is at:
http://www.w3.org/2005/Incubator/mmsem/XGR-image-annotation/
Best regards.

    Raphaël

--
Raphaël Troncy
CWI (Centre for Mathematics and Computer Science),
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
e-mail: raphael.troncy@cwi.nl & raphael.troncy@gmail.com
Tel: +31 (0)20 - 592 4093
Fax: +31 (0)20 - 592 4312
Web: http://www.cwi.nl/~troncy/
