[emotionml] Summary of the EmotionML Workshop

On October 5th and 6th, 2010, W3C (the World Wide Web Consortium) held
a workshop on Emotion Markup Language.

The detailed minutes from the workshop are available on the W3C Web
server at:
  http://www.w3.org/2010/10/emotionml/minutes.html

An HTML version of this summary is also available at:
  http://www.w3.org/2010/10/emotionml/summary.html

The goal of the workshop was to collect feedback from the community
on the current EmotionML specification and to clarify concrete use
cases and requirements for each of the following three categories of
possible EmotionML applications:

Category 1: Manual annotation of material involving emotionality
Category 2: Automatic recognition of emotions from sensors
Category 3: Generation of emotion-related system responses

The workshop had 18 attendees from Telecom ParisTech, DFKI, Queen's
University Belfast, Roma Tre University, University of Greenwich,
Dublin Institute of Technology, Loquendo, Deutsche Telekom, Cantoche,
Dwango, nViso, and the W3C Team.

During the workshop we had lively discussions of actual
emotion-related services as well as the latest emotion research
results.  The presentations at the workshop included a number of
practical variants of possible use cases of EmotionML across all
three categories:

Category 1 (manual annotation):
--------------------------------

- human annotation of (1) emotional material in "crowd-sourcing"
  scenarios and (2) live video using emotionally expressive
  annotations

Category 2 (automatic recognition):
------------------------------------

- emotion detection from facial expressions for consumer analysis
  (emotional reactions to commercials)

Category 3 (generation):
-------------------------

- synthesis of expressive speech and of animated avatar characters
  that express emotion information; relationship with SSML/VoiceXML;
  visualization

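One point in the SSML/VoiceXML relationship is how an EmotionML
annotation would sit inside a host language.  The following is a
sketch only, assuming one plausible embedding; how EmotionML plugs
into SSML is not normatively defined in the current draft, and the
category set name is illustrative:

  <speak version="1.0" xml:lang="en-US"
         xmlns="http://www.w3.org/2001/10/synthesis">
    <s>
      <!-- EmotionML annotation embedded in the host markup;
           the embedding rules and the set name are assumptions -->
      <emotion xmlns="http://www.w3.org/2009/10/emotionml">
        <category set="everydayEmotions" name="pleasure"/>
      </emotion>
      What a wonderful surprise!
    </s>
  </speak>
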
A number of requirements for emotion-ready applications were also
discussed, e.g.:

1. Discrete scales
-------------------

- represent discrete scale values in addition to continuous values

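To make this concrete: dimension values in the current draft are
continuous floating-point numbers on a [0, 1] scale.  A minimal
sketch of the requested addition, where the discrete keyword and the
set name are hypothetical:

  <!-- current draft style: continuous value in [0, 1] -->
  <emotion>
    <dimension set="everydayDimensions" name="arousal" value="0.8"/>
  </emotion>

  <!-- hypothetical discrete variant requested at the workshop -->
  <emotion>
    <dimension set="everydayDimensions" name="arousal" value="high"/>
  </emotion>
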
2. Multiple categories per emotion
-----------------------------------

- relationship between a component and emotion categories; how should
  a component with more than one emotion category be represented?

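One reading of this requirement is an <emotion> carrying two
<category> children, e.g. for a blended emotion, each with its own
confidence.  A sketch under that assumption (set and category names
are illustrative):

  <emotion>
    <!-- a blend: two categories within one emotion annotation -->
    <category set="everydayEmotions" name="pleasure" confidence="0.6"/>
    <category set="everydayEmotions" name="surprise" confidence="0.4"/>
  </emotion>
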
3. Default emotion vocabulary
------------------------------

- after concentrated discussion of the pros and cons of default
  emotion vocabularies, we concluded that we should stick to the
  current mechanism, which does not require any default vocabulary

4. Time stamps since program start
-----------------------------------

- time annotations on a time axis with a custom-defined zero point,
  corresponding to the start of a session

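As an illustration, such a custom zero point could be expressed by
letting an annotation reference the session start and give an offset
from it.  The attribute names below are hypothetical, not part of the
current draft:

  <!-- sketch: emotion observed 12 s after session start, lasting 3 s;
       time-ref-uri and offset-to-start are hypothetical attributes -->
  <emotion time-ref-uri="#session-start" offset-to-start="12000"
           duration="3000">
    <category set="everydayEmotions" name="boredom"/>
  </emotion>
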
5. Extended list of modalities
-------------------------------

- need for an extended list of modalities or channels

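For example, an emotion annotation can name the modality or channel
through which the emotion is expressed; the request is for a richer
or extensible value set (posture, gesture, physiological signals,
etc.).  A sketch, where the attribute spelling "modality" and its
value are assumptions for illustration:

  <!-- annotating the channel the emotion was expressed through;
       the attribute name and value are illustrative -->
  <emotion modality="face">
    <category set="everydayEmotions" name="surprise"/>
  </emotion>
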
The use cases and requirements discussed during the workshop will next
be reviewed by the Emotion subgroup of the W3C Multimodal Interaction
Working Group, and the subgroup will consider how to modify the
existing EmotionML specification.

For the workshop organizing committee,
Kazuyuki Ashimura, the Multimodal Interaction Activity Lead

-- 
Kazuyuki Ashimura / W3C Multimodal & Voice Activity Lead
mailto: ashimura@w3.org
voice: +81.466.49.1170 / fax: +81.466.49.1171
