Response to EmotionML LCWD comment ISSUE-184: Accessibility use cases

Hello Janina,

This is a reply to one of the W3C WAI PF comments on EmotionML that you 
sent on 20 June [1], specifically:

ISSUE-184 Accessibility use cases for EmotionML

The group has discussed the issue and proposes the following solution:

ACCEPT

We have incorporated the use case examples you sent into section 1.1 
"Reasons for defining an Emotion Markup Language" of the specification; 
the text describing the use cases now reads as quoted below.

Please let us know within 14 days, i.e. by 9 September, whether you 
agree with this resolution. Should we not hear from you by that date, we 
will consider this implicit approval, but explicit feedback is always 
better.

Thanks and best regards,
Marc



Excerpt from the current editor's draft of EmotionML, including the 
proposed change:

"""
Concrete examples of existing technology that could apply EmotionML include:

    - Opinion mining / sentiment analysis in Web 2.0, to automatically 
track customers' attitudes regarding a product across blogs;
    - Affective monitoring, such as ambient assisted living applications 
for the elderly, fear detection for surveillance purposes, or wearable 
sensors for testing customer satisfaction;
    - Character design and control for games and virtual worlds;
    - Social robots, such as guide robots engaging with visitors;
    - Expressive speech synthesis, generating synthetic speech with 
different emotions, such as happy or sad, friendly or apologetic; 
expressive synthetic speech would, for example, make more information 
available to blind and partially sighted people, and enrich their 
experience of the content;
    - Emotion recognition (e.g., for spotting angry customers in speech 
dialog systems);
    - Support for people with disabilities, such as educational programs 
for people with autism. EmotionML can be used to make the emotional 
intent of content explicit. This would enable people with learning 
disabilities (such as Asperger's Syndrome) to realise the emotional 
context of the content;
    - EmotionML can be used for media transcripts and captions. Where 
emotions are marked up to help deaf or hearing-impaired people who 
cannot hear the soundtrack, more information is made available to enrich 
their experience of the content.
"""


[1] http://lists.w3.org/Archives/Public/www-multimodal/2011Jun/0004.html

-- 
Dr. Marc Schröder, Senior Researcher at DFKI GmbH
Project leader for DFKI in SSPNet http://sspnet.eu
Team Leader DFKI TTS Group http://mary.dfki.de
Editor W3C EmotionML Working Draft http://www.w3.org/TR/emotionml/
Portal Editor http://emotion-research.net

Homepage: http://www.dfki.de/~schroed
Email: marc.schroeder@dfki.de
Phone: +49-681-85775-5303
Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
Saarbrücken, Germany
--
Official DFKI coordinates:
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Management Board (Geschaeftsfuehrung):
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman)
Dr. Walter Olthoff
Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313

Received on Friday, 26 August 2011 10:00:27 UTC