
RE: Another question on the Tagging use case

From: George Anadiotis <George.Anadiotis@cwi.nl>
Date: Sun, 3 Dec 2006 16:02:30 +0100 (CET)
Message-ID: <59621.>
To: "Hobson Paola-BPH001" <Paola.Hobson@motorola.com>
Cc: "MMSem-XG Public List" <public-xg-mmsem@w3.org>

Hello Paola,

Thank you for your interest in the UC. I find your comments very useful:
voice tagging could indeed be quite handy, even on mobile phones. This is
all the more true for devices that do not have a keyboard at all, such as
cameras. It would be nice to be able to tag content instantly, as early as
possible in the media production process [1].

Having said that, however, I think this addresses a higher level of the
problem than the one we are currently trying to deal with, namely
interoperability. To be honest, we had not previously thought about the
possibility of audio annotations, so they were not addressed in the UC.
But I imagine, for example, that voice recognition software could
translate voice tags into their textual form, which could then be used to
populate a tagging ontology. I see this as an application that can be
built on top of the interoperability layer we are currently trying to
establish.
And to answer your last question: yes, it is possible to have multilingual
tags using SKOS, through the prefLabel/altLabel properties together with a
language tag on each label.
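For illustration, a tag concept with labels in two languages might look
like this in SKOS (Turtle syntax; the concept URI and the labels are
hypothetical examples, not from the UC):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/tags/> .

# One concept, with one preferred label per language
# and an alternative label where useful.
ex:sunset a skos:Concept ;
    skos:prefLabel "sunset"@en ;
    skos:prefLabel "coucher de soleil"@fr ;
    skos:altLabel  "sundown"@en .
```

Note that SKOS expects at most one prefLabel per language tag on a given
concept, which is exactly what makes it suitable for multilingual tags.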

George Anadiotis

[1] Lynda Hardman, Canonical Processes of Media Production. In:
Proceedings of the ACM Workshop on Multimedia for Human Communication -
From Capture to Convey (MHC '05), November 2005. Available at:
Received on Sunday, 3 December 2006 15:02:43 UTC
