
EmotionML description

From: Deborah Dahl <dahl@conversational-technologies.com>
Date: Mon, 22 Feb 2016 12:04:40 -0500
To: "public-cognitive-a11y-tf" <public-cognitive-a11y-tf@w3.org>
Message-ID: <001301d16d93$1f0a64d0$5d1f2e70$@conversational-technologies.com>
Here’s a short description of EmotionML and its applicability to some
accessibility use cases (goes with my ACTION-151
https://www.w3.org/WAI/PF/cognitive-a11y-tf/track/actions/151)

 

Emotion Markup Language (EmotionML) [1] is an XML-based W3C standard for
representing emotions. It can be used in connection with emotion recognition
or emotion generation software to provide a standard, interoperable way to
represent emotions.

The EmotionML specification doesn’t require the use of a specific emotion
vocabulary, in order to leave open the possibility of exploring different
vocabularies for research purposes. However, an accompanying Note [2] lists
the major vocabularies used in affective applications and describes a
process for registering new vocabularies.

As an example, the EmotionML markup for “happy” would look like this:

 

<emotion category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
    <category name="happy"/>
</emotion>

This example shows that the emotion was “happy”, using the
“everyday-categories” vocabulary defined in the emotion vocabularies Note [2].
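As a sketch of how a consuming application might read such markup, here is the example parsed with Python’s standard-library ElementTree (the EmotionML namespace is omitted, as it is in the example itself):

```python
import xml.etree.ElementTree as ET

# The "happy" example from above, as a plain string.
markup = """
<emotion
    category-set="http://www.w3.org/TR/emotion-voc/xml#everyday-categories">
  <category name="happy"/>
</emotion>
"""

emotion = ET.fromstring(markup)
vocabulary = emotion.get("category-set")
categories = [c.get("name") for c in emotion.findall("category")]
print(vocabulary)  # http://www.w3.org/TR/emotion-voc/xml#everyday-categories
print(categories)  # ['happy']
```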

Multiple emotions at different intensities can be represented as well, for
example:

 

<emotion category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <category name="sadness" value="0.3"/>
    <category name="anger" value="0.8"/>
    <category name="fear" value="0.3"/>
</emotion>

 

This example represents “sadness”, “anger” and “fear” at different
intensities, using the “big6” vocabulary.
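A similar Python sketch for the multi-category case, extracting the name/intensity pairs; treating a missing value attribute as full intensity is an assumption of this sketch, not something stated above:

```python
import xml.etree.ElementTree as ET

markup = """
<emotion category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <category name="sadness" value="0.3"/>
  <category name="anger" value="0.8"/>
  <category name="fear" value="0.3"/>
</emotion>
"""

emotion = ET.fromstring(markup)
# Map each category name to its intensity.  Defaulting a missing
# value attribute to 1.0 is an assumption made for this sketch.
intensities = {c.get("name"): float(c.get("value", "1.0"))
               for c in emotion.findall("category")}
dominant = max(intensities, key=intensities.get)
print(intensities)  # {'sadness': 0.3, 'anger': 0.8, 'fear': 0.3}
print(dominant)     # anger
```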

It is also possible to annotate media with changing emotions over time.
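For instance, the EmotionML specification defines a trace element (with freq and samples attributes) for values that change over time; the snippet below is an invented illustration of reading such an annotation, not an example from the email:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation of a half-second media clip: anger intensity
# sampled at 10 Hz via EmotionML's <trace> element.  The particular
# values here are invented for illustration.
markup = """
<emotion category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <category name="anger">
    <trace freq="10Hz" samples="0.1 0.2 0.4 0.8 0.7"/>
  </category>
</emotion>
"""

trace = ET.fromstring(markup).find("category/trace")
samples = [float(s) for s in trace.get("samples").split()]
print(samples)       # [0.1, 0.2, 0.4, 0.8, 0.7]
print(max(samples))  # 0.8 (the peak intensity)
```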

Accessibility use cases could include:

1. Text or images could be annotated with EmotionML markup to make the
author’s intent clear to users who may have difficulty understanding the
emotion from text or image alone.

2. EmotionML could be used with text to speech to synthesize speech
expressing emotions, making screen readers sound more natural.

3. Images showing emotions could be annotated with EmotionML.[3]

EmotionML could be inserted manually by an author, or it could be used along
with automatic emotion recognition and generation software to automatically
(probably with some inaccuracies) mark up text or images with the emotions
they’re intended to convey.
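A sketch of that generation side: given the output of some hypothetical recognizer (the scores dictionary below is invented), a tool could emit the corresponding EmotionML element:

```python
import xml.etree.ElementTree as ET

BIG6 = "http://www.w3.org/TR/emotion-voc/xml#big6"

def to_emotionml(scores):
    """Serialize {category name: intensity} scores, e.g. from an
    automatic emotion recognizer, as an EmotionML <emotion> element."""
    emotion = ET.Element("emotion", {"category-set": BIG6})
    for name, value in scores.items():
        ET.SubElement(emotion, "category",
                      {"name": name, "value": str(value)})
    return ET.tostring(emotion, encoding="unicode")

# Hypothetical recognizer output for a passage of text:
xml = to_emotionml({"anger": 0.8, "fear": 0.3})
print(xml)
```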

 

[1]          M. Schröder, et al. (2009). Emotion Markup Language (EmotionML)
1.0. Available: http://www.w3.org/TR/emotionml/

[2]          F. Burkhardt, et al. (2014, 5 February). Vocabularies for
EmotionML. Available: http://www.w3.org/TR/emotion-voc/ 

[3]          A. Hilton. (2015, 11 January). EmotionAPI 0.2.0. Available:
https://github.com/Felsig/Emotion-API

 
Received on Monday, 22 February 2016 17:04:29 UTC
