
Semantic Web Image Annotation Use Case: Large Scale Image Repository - NASA

Status: Editor's Draft $Id: nasa-use-case.html,v 1.8 2005/10/25 11:10:34 chalasch Exp $

Description

Many organizations maintain extremely large-scale image collections. The National Aeronautics and Space Administration (NASA) is one such example: it has hundreds of thousands of images, stored in different formats, at varying levels of availability and resolution, and with associated descriptive information at various levels of detail and formality. Such an organization also generates thousands of new images on an ongoing basis that must be collected and cataloged. A mechanism is therefore needed to catalog all the different types of image content across various domains. Information is required both about the image itself (e.g., its creation date, dpi, source) and about the specific content of the image. Additionally, the associated metadata must be maintainable and extensible so that the relationships between images and data can evolve cumulatively. Lastly, management functionality should provide mechanisms flexible enough to enforce restrictions based on content type, ownership, authorization, etc.

Possible Semantic Web-based solution

One possible solution to such image management requirements is an annotation environment that enables users to annotate information about images and/or their regions using concepts from ontologies (OWL and/or RDFS). More specifically, subject matter experts can assert metadata elements about images and their specific content. Multimedia-related ontologies can be used to localize and represent regions within particular images; these regions can then be related to the image via a depiction/annotation property. This functionality can be provided, for example, by the MINDSWAP digital-media ontology (to represent images, image regions, etc.) in conjunction with FOAF (to assert image depictions). Additionally, the aceMedia Visual Descriptor Ontology can be used to represent the low-level image features of regions.
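As a rough illustration, such an image-level annotation might be serialized in RDF/XML along the following lines. The namespace URIs, the class and property names (dm:Image, dm:depicts, space:LaunchEvent), and the instance URIs are placeholders chosen for this sketch, not the actual terms published by the MINDSWAP or domain ontologies:

<?xml version="1.0"?>
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:dm="http://example.org/digital-media#"
    xmlns:space="http://example.org/space-ontology#">

  <!-- The photograph, typed with a (placeholder) digital-media Image class -->
  <dm:Image rdf:about="http://example.org/images/apollo7-launch.jpg">
    <!-- dm:depicts stands in for the depiction property described below -->
    <dm:depicts rdf:resource="http://example.org/missions#Apollo7Launch"/>
  </dm:Image>

  <!-- The depicted event, an instance drawn from a space-domain ontology -->
  <space:LaunchEvent rdf:about="http://example.org/missions#Apollo7Launch">
    <rdfs:label>Apollo 7 launch</rdfs:label>
  </space:LaunchEvent>
</rdf:RDF>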

Domain Specific Ontologies

In order to describe the content of such images, a mechanism is needed to represent the domain-specific content depicted within them. For this use case, domain ontologies that define space-specific concepts and relations can be used. Such ontologies are freely available and include, but are not limited to, the following:
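Whichever specific ontologies are chosen, the RDF/XML fragment below sketches the kind of classes and relations such a domain ontology might declare; all names and the namespace are hypothetical and are not drawn from any particular published ontology:

<?xml version="1.0"?>
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
    xmlns:owl="http://www.w3.org/2002/07/owl#"
    xml:base="http://example.org/space-ontology">

  <!-- Hypothetical space-domain classes -->
  <owl:Class rdf:ID="SpaceMission"/>
  <owl:Class rdf:ID="LaunchEvent"/>
  <owl:Class rdf:ID="LaunchVehicle"/>

  <!-- Hypothetical relation between a launch event and the vehicle it uses -->
  <owl:ObjectProperty rdf:ID="usesVehicle">
    <rdfs:domain rdf:resource="#LaunchEvent"/>
    <rdfs:range rdf:resource="#LaunchVehicle"/>
  </owl:ObjectProperty>
</rdf:RDF>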

Visual Ontologies

As discussed above, this scenario also requires the ability to state that images (and possibly their regions) depict certain things. One possible way to accomplish this is to use a combination of FOAF and the MINDSWAP digital-media ontology. More specifically, image depictions can be asserted via a depiction property (a sub-property of foaf:depiction) defined in the MINDSWAP digital-media ontology. Thus, images can be semantically linked to instances defined on the Web. Image regions can be defined via an ImagePart concept (also defined in the MINDSWAP digital-media ontology). Additionally, regions can be given a bounding box using a property named svgOutline; essentially, SVG outlines (SVG XML literals) of the regions can be specified with this property. nasa-use-case.rdf contains a variety of RDF/XML annotations of a picture of the Apollo 7 Saturn IB launch. More specifically, the assertions include that the image depicts the Apollo 7 launch, that the Apollo 7 Saturn IB space vehicle is depicted in a rectangular region around the rocket, and so on.
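A sketch of this kind of region-level annotation is given below in RDF/XML. The ImagePart class, the svgOutline property, and the use of a depiction property follow this section's description, but the namespace URIs, the dm:hasRegion property used to attach the region to its image, the instance URIs, and the SVG coordinates are illustrative assumptions rather than the actual contents of nasa-use-case.rdf:

<?xml version="1.0"?>
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dm="http://example.org/digital-media#"
    xmlns:space="http://example.org/space-ontology#">

  <!-- The full photograph depicts the Apollo 7 launch -->
  <dm:Image rdf:about="http://example.org/images/apollo7-launch.jpg">
    <dm:depicts rdf:resource="http://example.org/missions#Apollo7Launch"/>
    <!-- hasRegion is a placeholder for whatever property the digital-media
         ontology actually uses to attach a region to its parent image -->
    <dm:hasRegion>
      <!-- A rectangular region around the rocket -->
      <dm:ImagePart rdf:about="http://example.org/images/apollo7-launch.jpg#region1">
        <!-- The region depicts the Saturn IB space vehicle -->
        <dm:depicts rdf:resource="http://example.org/missions#Apollo7SaturnIB"/>
        <!-- Bounding box as an SVG XML literal (coordinates are invented) -->
        <dm:svgOutline rdf:parseType="Literal">
          <svg xmlns="http://www.w3.org/2000/svg">
            <rect x="210" y="40" width="120" height="480"/>
          </svg>
        </dm:svgOutline>
      </dm:ImagePart>
    </dm:hasRegion>
  </dm:Image>
</rdf:RDF>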

In order to represent the low-level features of images, the aceMedia Visual Descriptor Ontology can be used. This ontology contains representations of the MPEG-7 visual descriptors and models the concepts and properties that describe the visual characteristics of objects. For example, the dominant color descriptor can be used to describe the number and values of the dominant colors present in a region of interest, together with the percentage of pixels covered by each color value.
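Again only as a sketch: attaching a dominant color description to the region annotated above might look roughly as follows. The vdo: class and property names here are hypothetical stand-ins; the actual terms are those defined by the aceMedia Visual Descriptor Ontology:

<?xml version="1.0"?>
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:dm="http://example.org/digital-media#"
    xmlns:vdo="http://example.org/visual-descriptor#">

  <dm:ImagePart rdf:about="http://example.org/images/apollo7-launch.jpg#region1">
    <!-- Attach a (hypothetical) dominant color descriptor to the region -->
    <vdo:hasDescriptor>
      <vdo:DominantColorDescriptor>
        <!-- One dominant color value and the share of pixels it covers -->
        <vdo:colorValue>255 255 255</vdo:colorValue>
        <vdo:percentage rdf:datatype="http://www.w3.org/2001/XMLSchema#float">0.62</vdo:percentage>
      </vdo:DominantColorDescriptor>
    </vdo:hasDescriptor>
  </dm:ImagePart>
</rdf:RDF>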

Available Annotation Tools

Existing toolkits, such as [PhotoStuff] and [M-OntoMat-Annotizer], currently provide graphical environments for accomplishing the annotation tasks mentioned above. Using such tools, users can load images, create regions around parts of an image, automatically extract low-level features of selected regions (via M-OntoMat-Annotizer), assert statements about the selected regions, etc. Additionally, the resulting annotations can be exported as RDF/XML (as shown in nasa-use-case.rdf), allowing them to be shared, indexed, and used by advanced annotation-based browsing (and search) environments.

References

[M-OntoMat-Annotizer]
M-OntoMat-Annotizer Project Homepage
[PhotoStuff]
PhotoStuff Project Homepage