- From: Raphaël Troncy <Raphael.Troncy@cwi.nl>
- Date: Mon, 06 Mar 2006 17:07:33 +0100
- To: "Uschold, Michael F" <michael.f.uschold@boeing.com>
- CC: public-swbp-wg@w3.org
Dear Mike,

We have now addressed all the comments you made in your second review of the "Image annotation on the Semantic Web" editor's draft. The latest version is available at [1] (Editors' Draft $Date: 2006/03/06 16:05:33 $ $Revision: 1.148 $).

Thank you for passing the document on to your colleague and for forwarding us your discussion. Below are some additional comments that complement your previous answers and that you may pass on to him. In a second email, I will reply to your own comments.

> [...]
>
> 2) There seems no intent to allow multiple vocabularies to be
> associated to a single image. At least, I couldn't find that.

I would like to say "quite the contrary"! There are several use cases for which we emphasize that multiple vocabularies are generally needed in the description. For instance, the Cultural Heritage use case (section 5.2): "Many of the requirements of the use case described in Section 2.2 can be met by using the vocabulary developed by the VRA in combination with domain-specific vocabularies such as Getty's AAT and ULAN." Or the Television News Archive use case (section 5.3): "The use case described in Section 2.3 is typically one that requires the use of multiple vocabularies." (Dublin Core, MPEG-7, TV-Anytime, domain-specific ontologies).

> And really, it would be best if there was a way to cross-reference
> between vocabularies so if someone used a term in one not employed
> during indexing, a search engine could widen the search to known
> synonyms in other supported vocabularies to find a match then serve
> up the image as a result along with its matched term.

In the Cultural Heritage use case, the VRA annotations could also be represented using Dublin Core, since all the elements of VRA Core either have direct mappings to comparable fields in Dublin Core or are defined as specializations of one or more DC elements. Such a scenario of interoperability between vocabularies is therefore possible. The specific problems (and best practices) are addressed in a second document, "Semantic Web Image Annotation Interoperability" [2], which is currently in a very preliminary state.

> [...]
>
> 4) There needs to be a way to allow external and internal mapping
> between metatags. Few file formats will bend to support XML
> internally but they might change their internal tagging to conform
> with an external format standard.

Again, this specific issue is addressed in the "Semantic Web Image Annotation Interoperability" document [2].

> If that was coupled with the ability to reference an external
> repository of metadata then we would be able to fully annotate
> without having to embed everything in the image file.

I tend to fully agree. We also recommend, as a best practice, dissociating the metadata from the image file. However, I would like to point out a common problem that is not yet solved, or at least has not yet reached a consensus solution: how to reference a part of an image with a URI? While a whole image can be referenced (and resolved) by a URI, there is no standard means to identify a specific part of an image (an arbitrary geometrical shape) in a URI. SVG proposes a solution (discussed in section 5.4), but it comes down to an indirection: a specific XML snippet describes the localization of a particular part of the image, and the annotation of this part is about this XML snippet.
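To make this indirection more concrete, here is a minimal sketch (the file names, the fragment identifier and the choice of dc:subject below are purely illustrative assumptions, not taken from the draft): an SVG document localizes a rectangular region of the image, and the RDF annotation, built here with Python's rdflib, is made about the URI of that SVG element rather than about the image file itself.

  # Minimal sketch of the SVG indirection described above.
  # Assumptions (illustrative only): the SVG file name, the fragment
  # identifier "region1" and the use of dc:subject for the annotation.
  from rdflib import Graph, Literal, URIRef
  from rdflib.namespace import DC

  # The SVG document (say, regions.svg) localizing a part of the image:
  #   <svg xmlns="http://www.w3.org/2000/svg"
  #        xmlns:xlink="http://www.w3.org/1999/xlink">
  #     <image xlink:href="photo.jpg" width="800" height="600"/>
  #     <rect id="region1" x="120" y="80" width="200" height="150"/>
  #   </svg>

  # The annotation is about the SVG <rect>, not about photo.jpg itself.
  region = URIRef("http://example.org/regions.svg#region1")

  g = Graph()
  g.bind("dc", DC)
  g.add((region, DC.subject, Literal("the building in the foreground")))

  print(g.serialize(format="turtle"))

The indirection shows up when consuming the annotation: to recover the actual coordinates of the described region, an application has to dereference the SVG document and interpret the element identified by the fragment.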
Note that this problem also exists for the localization of a temporal segment in a video (even though there is now a Temporal URI proposal [3]) and, worse, for the localization of a spatio-temporal segment (for instance, tracking a moving object over a temporal interval).

> [...]
>
> That's it for off the top of my head.

Thanks for your comments.

Sincerely,
Raphaël

[1] http://www.w3.org/2001/sw/BestPractices/MM/image_annotation.html
[2] http://www.w3.org/2001/sw/BestPractices/MM/interop.html
[3] Specifying time intervals in URI queries and fragments of time-based Web resources,
    http://www.annodex.net/TR/draft-pfeiffer-temporal-fragments-03.html

--
Raphaël Troncy
CWI (Centre for Mathematics and Computer Science)
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
e-mail: raphael.troncy@cwi.nl & raphael.troncy@gmail.com
Tel: +31 (0)20 - 592 4093
Fax: +31 (0)20 - 592 4312
Web: http://www.cwi.nl/ins2/