W3C

Image annotation on the Semantic Web

Editors' Draft $Date: 2005/09/30 15:04:34 $ $Revision: 1.13 $

$Authors: jrvosse, gstam, raphael$

This version:
N/A
Latest version:
N/A
Previous version:
N/A
Editors:
TO BE REVISED AT THE END
Giorgos Stamou, IVML, National Technical University of Athens, <gstam@softlab.ece.ntua.gr>
Jacco van Ossenbruggen, Center for Mathematics and Computer Science (CWI), <Jacco.van.Ossenbruggen@cwi.nl>
Raphaël Troncy, Center for Mathematics and Computer Science (CWI), <Raphael.Troncy@cwi.nl>
Jeff Pan, University of Manchester, <pan@cs.man.ac.uk>
Additional Contributors and Special Thanks to:
TO BE REVISED AT THE END
Jane Hunter, DSTC, <jane@dstc.edu.au>
Guus Schreiber, VU,<schreiber@cs.vu.nl>
John Smith, IBM,  <rsmith@watson.ibm.com>
Jeremy Carroll, HP, <jjc@hplb.hpl.hp.com>
Vassilis Tzouvaras, IVML, National Technical University of Athens, <tzouvaras@image.ece.ntua.gr>
Nikolaos Simou, IVML, National Technical University of Athens, <nsimou@image.ece.ntua.gr>
Christian Halaschek-Wiener, UMD, <halasche@cs.umd.edu>

Copyright © 2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.


Abstract

Many applications that involve multimedia content make use of some form of metadata that describe this content. The present document aims at providing guidelines for using Semantic Web languages and technologies in order to create, store, manipulate, interchange and process image metadata. It gives a number of use cases to exemplify the use of Semantic Web technology for image annotation, an overview of RDF and OWL vocabularies developed for this task and an overview of relevant tools.

Note that many approaches to image annotation predate Semantic Web technology. Interoperability between these technologies and RDF and OWL-based approaches will be addressed in a future document.

Target Audience

This document targets institutions and organizations with research and standardization activities in the area of multimedia, professional multimedia annotators (museums, libraries, audiovisual archives, the media production and broadcast industry, image and video banks), and non-professional annotators (end users).

Objectives

Status of this document

This is a public (WORKING DRAFT) Working Group Note produced by the Multimedia Annotation in the Semantic Web Task Force of the W3C Semantic Web Best Practices & Deployment Working Group, which is part of the W3C Semantic Web activity.

Discussion of this document is invited on the public mailing list public-swbp-wg@w3.org (public archives). Public comments should include "comments: [MM]" at the start of the Subject header.

Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress. Other documents may supersede this document.

Table of Contents

1. Introduction

TO BE DONE: Corrections, delete and add material

Before starting any image annotation project, one should be aware that image annotation is notoriously difficult. Trade-offs along several dimensions make the task hard:

2. Use Cases

Use case: Cultural Heritage

A fine arts museum has asked a specialized company to produce high-resolution digital scans of the most important works of art in its collections. The museum's quality assurance department requires the ability to track when, where and by whom every scan was made, with what equipment, etc. The museum's internal IT department, which maintains the underlying image database, needs the size, resolution and format of every resulting image. It also needs to know the repository ID of the original work of art. The company developing the museum's website additionally requires copyright information (which varies for every scan, depending on the age of the original work of art and the collection it originates from). It also wants to give the users of the website access to the collection not only based on the titles of the paintings and the names of their painters, but also based on the topics depicted ('sunsets'), genre ('self-portraits'), style ('post-impressionism'), period ('fin de siècle') and region ('West European').
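Requirements like these map naturally onto RDF descriptions. The following Turtle fragment is a purely illustrative sketch: the dc: prefix refers to the Dublin Core element set, while the ex: namespace and all of its property names are hypothetical.

```turtle
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix ex: <http://example.org/museum#> .        # hypothetical museum vocabulary

<http://example.org/scans/scan-0042.tiff>
    dc:format       "image/tiff" ;
    dc:rights       "Copyright the museum, 2005" ;
    ex:scannedBy    ex:ScanCompany ;              # provenance, for quality assurance
    ex:scanDate     "2005-06-15" ;
    ex:resolution   "600 dpi" ;
    ex:depicts      ex:Sunset ;                   # topic-based access
    ex:originalWork <http://example.org/collection/repository-id/1234> .
```

A single scan thus carries provenance, technical, rights and subject metadata side by side, each usable by a different department.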

Use case: Television news archive

Audiovisual archives manage very large multimedia databases. For instance, INA, the French National Audiovisual Institute, has been archiving TV documents for 50 years and radio documents for 65 years, and stores more than 1 million hours of broadcast programs. The image and sound archives kept at INA are either intended for professional use (journalists, film directors, producers, audiovisual and multimedia programmers and publishers, in France and worldwide) or made available for research purposes (to students, researchers, teachers and writers). In order to allow efficient access to the stored data, most parts of these video documents are described and indexed by their content. The global multimedia information system must therefore be fine-grained enough to support very complex and precise queries. For example, a journalist or a film director might ask for an excerpt of a previously broadcast program showing the first goal a given football player scored for his national team with his head. The query could additionally contain more technical requirements, such as that the goal action should be available from both the front camera view and the reverse-angle camera view. Finally, the client might or might not remember some general information about the football game, such as the date, the place and the final score.
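Such a content-based query presupposes segment-level descriptions. The Turtle fragment below sketches, with an entirely hypothetical ex: vocabulary, the kind of fine-grained segment metadata that could answer it.

```turtle
@prefix ex: <http://example.org/archive#> .   # hypothetical archive vocabulary

<http://example.org/programs/match-1998-07-12#goal-1>
    a                ex:VideoSegment ;
    ex:event         ex:Goal ;
    ex:player        ex:PlayerX ;             # first goal of this player for his national team
    ex:bodyPart      "head" ;
    ex:cameraView    ex:FrontCamera, ex:ReverseAngleCamera ;
    ex:broadcastDate "1998-07-12" .
```

Because each camera view, event and participant is an explicit resource, a query engine can match the journalist's request without any free-text interpretation.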

Use case: Media Production Services

A media production house requires several web services in order to organise and implement its projects. Usually, pre-production and production start from location, people, image and footage search and retrieval, in order to speed up the process and reduce the cost of the production as much as possible. For that reason, several multimedia archives (image and video banks, location management databases, casting houses, etc.) provide this information through the web. Every day, media producers, location managers, casting managers and others search these archives in order to find the appropriate resources for their projects. The quality of this search and retrieval process directly affects the quality of the service that the archives provide to their users. To facilitate this process, the annotation of image content should make use of Semantic Web technologies, while also following the multimedia standards in order to be interoperable with other archives, thus providing a unified framework for media production resource allocation.

3. Vocabularies for image annotation

MPEG-7 translations to RDFS and OWL

The "Multimedia Content Description" standard, widely known as MPEG-7, aims to be the standard for describing any multimedia content. MPEG-7 standardizes tools for defining multimedia Descriptors (Ds), Description Schemes (DSs) and the relationships between them. The descriptors correspond to the data features themselves, generally low-level features such as visual (e.g. texture, camera motion) or audio (e.g. melody) features, while the description schemes refer to more abstract description entities. These tools, as well as their relationships, are represented using the Description Definition Language (DDL), the core part of the standard. The W3C XML Schema recommendation has been adopted as the most appropriate schema language for the MPEG-7 DDL. Note that several extensions (array and matrix datatypes) have been added in order to satisfy specific MPEG-7 requirements.

The set of MPEG-7 XML Schemas defines 1182 elements, 417 attributes and 377 complex types, a scale that is usually seen as a difficulty when managing MPEG-7 descriptions. Moreover, several works have already pointed out the standard's lack of formal semantics, which could extend the traditional text descriptions into machine-understandable ones. The attempts to bridge this gap between the multimedia community and the Semantic Web are detailed below.

MPEG-7 Upper MDS Ontology by Hunter et al.

Link: http://maenad.dstc.edu.au/slittle/mpeg7.owl

Summary: Chronologically the first, this MPEG-7 ontology was initially developed in RDFS [1], then converted into DAML+OIL, and is now available in OWL. This is an OWL Full ontology (note: three small mistakes inside the OWL file must be corrected for it to be valid OWL: each occurrence of &xsd;nil should be replaced by &rdf;nil).

The ontology covers the upper part of the Multimedia Description Scheme (MDS) part of the MPEG-7 standard. It consists of about 60 classes and 40 properties.

References:

MPEG-7 MDS Ontology by Tsinaraki et al.

Link: http://elikonas.ced.tuc.gr/ontologies/av_semantics.zip

Summary: Starting from the previous ontology, this MPEG-7 ontology covers the full Multimedia Description Scheme (MDS) part of the MPEG-7 standard. It contains 420 classes and 175 properties. This is an OWL DL ontology.

References:

MPEG-7 Ontology by DMAG

Link: http://dmag.upf.edu/ontologies/mpeg7ontos/

Summary: This MPEG-7 ontology has been produced fully automatically from the MPEG-7 standard in order to give it a formal semantics. For this purpose, a generic XSD2OWL mapping has been implemented. The definitions of the XML Schema types and elements of the ISO standard have been converted into OWL definitions according to the table given in [3]. This ontology can then serve as an upper ontology, easing the integration of other, more specific ontologies such as MusicBrainz. The authors also propose automatically transforming the XML data (MPEG-7 instances) into RDF triples (instances of this upper ontology).
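The idea behind such an XSD2OWL mapping can be sketched as follows. This is a strongly simplified illustration, not the DMAG implementation: it parses a tiny, hypothetical XML Schema fragment with Python's standard library and applies two representative mapping rules (complex types become OWL classes; elements become object or datatype properties depending on whether their type is complex).

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

# A tiny, hypothetical schema fragment standing in for MPEG-7's DDL schemas.
SCHEMA = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="ImageType"/>
  <xs:element name="Image" type="ImageType"/>
  <xs:element name="Title" type="xs:string"/>
</xs:schema>"""

def xsd2owl(schema_text):
    """Map complex types to owl:Class and elements to OWL properties."""
    root = ET.fromstring(schema_text)
    complex_types = {ct.get("name") for ct in root.iter(XS + "complexType")}
    triples = []
    for name in sorted(complex_types):
        triples.append((name, "rdf:type", "owl:Class"))
    for el in root.iter(XS + "element"):
        # Elements whose type is a complex type become object properties;
        # elements with a simple (built-in) type become datatype properties.
        kind = ("owl:ObjectProperty" if el.get("type") in complex_types
                else "owl:DatatypeProperty")
        triples.append((el.get("name"), "rdf:type", kind))
    return triples

for triple in xsd2owl(SCHEMA):
    print(triple)
```

The OWL Full character of the resulting ontology arises precisely where this clean split fails, i.e. where one property needs both datatype and object ranges.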

This ontology aims to cover the whole standard and is thus the most complete of those mentioned above. It contains 2372 classes and 975 properties. This is an OWL Full ontology, since it employs the rdf:Property construct to cope with the fact that some properties have both datatype and object ranges.

References:

INA Ontology

Link: store this ontology on CWI for ease of reference?

Summary: This ontology is not strictly an MPEG-7 ontology, since it does not cover the whole standard. It is rather a core audio-visual ontology inspired by several terminologies, either standardized (like MPEG-7 and TV-Anytime) or still under development (ProgramGuideML). Furthermore, this ontology benefits from the practices of the French INA institute and the British BBC and Italian RAI channels, which have also developed complete terminologies for describing radio and TV programs.

This core ontology currently contains 1100 classes and 220 properties and is represented in OWL Full.

References:


Visual Ontologies

The MPEG-7 standard is divided into several parts, reflecting the various media one can find in multimedia content. This section focuses on various attempts to design ontologies that correspond to the visual part of the standard.

aceMedia Visual Descriptor Ontology

Link: http://www.acemedia.org/aceMedia/reference/resource/index.html, the current version is 9.0.

Summary: The Visual Descriptor Ontology (VDO), developed within the aceMedia project for semantic multimedia content analysis and reasoning, contains representations of the MPEG-7 visual descriptors and models concepts and properties that describe the visual characteristics of objects. By the term descriptor we mean a specific representation of a visual feature (color, shape, texture, etc.) that defines the syntax and the semantics of a specific aspect of the feature. For example, the dominant color descriptor specifies, among other things, the number and values of the dominant colors present in a region of interest, and the percentage of pixels that each associated color value has. Although the construction of the VDO is tightly coupled with the specification of the MPEG-7 Visual part, several modifications were carried out in order to adapt the XML Schema provided by MPEG-7 to an ontology and to the datatype representations available in RDF Schema.
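The intuition behind a dominant color descriptor can be shown with a short, self-contained sketch. This is not the MPEG-7 extraction algorithm (which involves color-space quantization and clustering); it merely counts exact pixel values in a region of interest to illustrate the kind of information the descriptor carries: a small set of color values together with the percentage of pixels associated with each.

```python
from collections import Counter

def dominant_colors(pixels, max_colors=3):
    """Return up to max_colors (color, percentage) pairs for a pixel region.

    `pixels` is a list of (R, G, B) tuples representing a region of interest.
    Real MPEG-7 extraction quantizes and clusters colors; here we simply
    count exact values to illustrate the descriptor's content.
    """
    counts = Counter(pixels)
    total = len(pixels)
    return [(color, 100.0 * n / total)
            for color, n in counts.most_common(max_colors)]

# A toy 10-pixel "region": mostly sky blue, some white.
region = [(135, 206, 235)] * 7 + [(255, 255, 255)] * 3
print(dominant_colors(region))
# [((135, 206, 235), 70.0), ((255, 255, 255), 30.0)]
```

An ontology such as the VDO models exactly this structure: a descriptor instance linking color values to their pixel percentages within a region.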

References:

mindswap Image Region Ontology

Link: http://www.mindswap.org/2005/owl/digital-media.

Summary:

References:

Hollink Visual Ontology

Link: http://www.cs.vu.nl/~laurah/VO/visualWordnetschema2a.rdfs.

Summary:

References:

4. Examples of image annotations on the Semantic Web

TO BE DONE: Short description and categorisation of the image annotations

5. Tools

TO BE DONE: Short description and categorisation of important tools

6. Other (non-RDF) Relevant Work

TO BE DONE: Short description and categorisation of important relevant work

7. Relevant Projects and Events

TO BE DONE: Short description and categorisation of important projects and events

8. Acknowledgments

  Thanks to ...

Appendix A. Informative References

[Hunter]
J. Hunter. Adding Multimedia to the Semantic Web — Building an MPEG-7 Ontology. In International Semantic Web Working Symposium (SWWS), Stanford University, California, USA, July 30 - August 1, 2001. Available at http://www.semanticweb.org/SWWS/program/full/paper59.pdf

[Stamou05]
G. Stamou and S. Kollias (eds). Multimedia Content and the Semantic Web: Methods, Standards and Tools. John Wiley & Sons Ltd, 2005.

[Troncy2003]
R. Troncy. Integrating Structure and Semantics into Audio-visual Documents. In Second International Semantic Web Conference (ISWC 2003), pages 566-581, Sanibel Island, Florida, USA, October 20-23, 2003. Springer-Verlag Heidelberg. Available at http://springerlink.metapress.com/(2ix4layxvpw4wd555dzsym55)/media/G1GHNMWQMPWHLR6LWVTP/Contributions/U/3/T/X/U3TXQY8BR03TE7RG.pdf

[Ossenbruggen04]
Jacco van Ossenbruggen, Frank Nack, and Lynda Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part I). In: IEEE Multimedia 11(4), pp. 38-48, October-December 2004.

[Ossenbruggen05]
Frank Nack, Jacco van Ossenbruggen, and Lynda Hardman. That Obscure Object of Desire: Multimedia Metadata on the Web (Part II). In: IEEE Multimedia 12(1), pp. 54-63, January-March 2005.