
Another question on the Tagging use case

From: Hobson Paola-BPH001 <Paola.Hobson@motorola.com>
Date: Thu, 30 Nov 2006 14:42:09 -0000
Message-ID: <3304035C0D1B2240BF04BB70D0C8AC94BC5FA2@zuk35exm64.ds.mot.com>
To: "MMSem-XG Public List" <public-xg-mmsem@w3.org>
Dear George, Susanne, Thomas
 
Regarding the practical situation in the Tagging use case, the use case
seems to assume that users will tag their content after uploading it,
and that they will have access to terminals with keyboards.  However,
users may want to tag their content in real time as they acquire it.
They may be mobile and therefore using devices with limited interaction
capabilities, and typing textual tags on a mobile phone (or a wi-fi
enabled camera) is not easy.  Suggested tags, such as those proposed by
Automatic Linguistic Indexing of Pictures - Real Time (www.alipr.com),
are a partial solution, but they would only work for simple content for
which appropriate tags already exist.
 
Another possibility would be to use voice tags, which can be added as
the user captures the content; this leads to my question on
interoperability.  Does the use case imply text-only tags?  Does it
make any difference if voice tags are applied?  Does SKOS support
multilingual tags?
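[On the last point, SKOS does allow multilingual labelling: a single
skos:Concept may carry several language-tagged skos:prefLabel literals,
at most one per language tag.  A minimal sketch in Python, with a
hypothetical concept URI and made-up labels, illustrating how a
language-tagged label set might be modelled and queried:

```python
# Hypothetical SKOS-style multilingual label set for one concept.
# A real store would use RDF triples; a plain dict suffices to show
# the language-tag -> skos:prefLabel mapping (one label per language).
concept = "http://example.org/tags/sunset"  # hypothetical concept URI

pref_labels = {
    "en": "sunset",
    "it": "tramonto",
    "de": "Sonnenuntergang",
}

def label_for(labels, lang, fallback="en"):
    """Return the preferred label in the requested language,
    falling back to a default language when none exists."""
    return labels.get(lang, labels[fallback])
```

So a user could attach one voice- or text-derived label per language to
the same underlying concept, and clients would select by language tag.]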
 
Paola
 
Dr P M Hobson
Director, Personalization & Knowledge 
Motorola Labs
Jays Close
Basingstoke
RG22 4PD
EMail : Paola.Hobson@motorola.com <mailto:Paola.Hobson@motorola.com> 
Received on Thursday, 30 November 2006 14:43:31 GMT
