- From: George Anadiotis <George.Anadiotis@cwi.nl>
- Date: Sun, 3 Dec 2006 16:02:30 +0100 (CET)
- To: "Hobson Paola-BPH001" <Paola.Hobson@motorola.com>
- Cc: "MMSem-XG Public List" <public-xg-mmsem@w3.org>
Hello Paola,

Thank you for your interest in the UC. I find your comments very useful: voice tagging could indeed be quite handy, even on mobile phones. This is even more true for devices that lack a keyboard altogether, such as cameras. It would be nice to be able to tag content instantly, as early as possible in the media production process [1].

Having said that, however, I think this addresses a higher level of the problem than the one we are currently trying to deal with, namely interoperability. To be honest, we had not previously considered the possibility of audio annotations, so they were not addressed in the UC. I can imagine, for example, voice recognition software that translates voice tags into their textual form, which is then used to populate a tagging ontology. I see this as an application that can be built on top of the interoperability layer we are currently trying to provide.

And to answer your last question: yes, it is possible to have multilingual tags using SKOS, through the prefLabel/altLabel elements and the language attribute.

Regards,
George Anadiotis

[1] Lynda Hardman, Canonical Processes of Media Production. In: Proceedings of the ACM Workshop on Multimedia for Human Communication - From Capture to Convey (MHC 05), November 2005. Available at: http://www.cwi.nl/~media/publications/MHC05lynda.pdf
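For illustration, a multilingual tag modelled as a SKOS concept might look like the following Turtle sketch (the concept URI and labels are invented examples, not part of the UC):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/tags/> .

# One concept, one preferred label per language, plus an alternative label
ex:sunset a skos:Concept ;
    skos:prefLabel "sunset"@en ;
    skos:prefLabel "coucher de soleil"@fr ;
    skos:altLabel  "sundown"@en .
```

SKOS allows at most one prefLabel per language tag, while altLabel can capture synonyms or variant spellings in the same language.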
Received on Sunday, 3 December 2006 15:02:43 UTC