
Re: [EMOXG] Deliverable report published as first draft: Emotion Markup Language: Requirements with Priorities

From: Marc Schroeder <schroed@dfki.de>
Date: Tue, 13 May 2008 09:13:40 +0200
Message-ID: <48293FA4.9050006@dfki.de>
To: EMOXG-public <public-xg-emotion@w3.org>

Kostas, Catherine, all,

Kostas Karpouzis wrote:
> Catherine Pelachaud wrote:
>> My 2 cents on the definition of min, max.
>> In MPEG-4, Facial Animation Parameters have no min-max; any values are 
>> allowed. The difficulty is to make sure that all MPEG-4 players 
>> interpret the values in a similar manner. To ensure this, detailed 
>> examples are provided, as well as animation files that serve as a test bed.
>> I also like the idea of not having min-max specified. It allows for 
>> much more flexibility, and it avoids having to define what the 
>> absolute max is.
> Plus, some applications may _want_ to use excessive values (e.g. for 
> eye-popping, cartoon-like animation). In addition, in MPEG-4 the units 
> are inherent in the measurement, since values are normalized with 
> respect to constant distances; e.g. FAPs related to the eyebrows are 
> normalized using the distance between the eyes and the nose, which 
> (normally) is a constant.

Indeed, we need some sort of reference so that values can be 
interpreted. If I understand correctly, the reference is built into the 
MPEG-4 model as a normalised length defining the facial geometry. We 
could do the same, stating that, e.g., 1 (or 100, I don't care) is the 
maximum intensity that is normally expected. If someone wants to 
exaggerate, we can allow values higher than 1; in such cases it 
should be clear that "unnatural" emotional properties are being modelled.
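
To illustrate what I mean (the notation here is purely hypothetical, not 
anything we have agreed on): with 1 defined as the normally expected 
maximum, an exaggerated, cartoon-style value could simply exceed it, and 
a consumer would know it is outside the natural range:

```xml
<!-- Hypothetical sketch, for illustration only -->
<emotion>
  <dimension name="intensity" value="0.8"/>  <!-- within the natural range -->
</emotion>

<emotion>
  <dimension name="intensity" value="1.3"/>  <!-- deliberately exaggerated, "unnatural" -->
</emotion>
```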

> Regarding fuzziness and labels, I would think it's best to leave it up 
> to applications reading and using EMOXG to define them, or in any case 
> to a higher-level structure, which again would be app-dependent (an 
> ontology relating feature points to FAPs to expressivity, maybe?). In 
> some cases, or for certain users, a specific measurement of eyebrow 
> movement may correspond to 'high' activation, while in other cases or 
> contexts the same measurement may be labeled 'medium'.

This example seems to me to lie outside of the EmotionML itself: you 
describe the question of how to interpret a given expressive behaviour 
in terms of an emotion label, and that mapping would have to be defined 
by the application, not by the markup.

If we go for qualitative scale values, then I would very much suggest 
that we try to agree on a set of labels if possible. Only if there is a 
strong need should we make the set of labels itself flexibly 
specifiable. Anyway, this is for the spec discussion, not the 
requirements doc.
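
Just to make the application-side view concrete, here is a tiny sketch 
(thresholds and names entirely made up, nothing from any spec) of how an 
application could map a normalised measurement onto qualitative labels 
in a context-dependent way, as in Kostas's eyebrow example:

```python
# Illustrative only: an application-side mapping from a normalised
# measurement to a qualitative label. Thresholds are hypothetical and
# deliberately differ per context.

# Per-context thresholds: (upper bound, label) pairs, checked in order.
CONTEXT_THRESHOLDS = {
    "expressive-speaker": [(0.5, "low"), (0.8, "medium"), (1.0, "high")],
    "subdued-speaker":    [(0.2, "low"), (0.5, "medium"), (1.0, "high")],
}

def activation_label(measurement: float, context: str) -> str:
    """Map a measurement (1 = normally expected maximum) to a label."""
    for upper, label in CONTEXT_THRESHOLDS[context]:
        if measurement <= upper:
            return label
    # Values above 1 fall through: exaggerated / "unnatural", still 'high'.
    return "high"

# The same measurement gets different labels in different contexts:
print(activation_label(0.6, "expressive-speaker"))  # medium
print(activation_label(0.6, "subdued-speaker"))     # high
```

The point is only that such thresholds live in the application (or in a 
higher-level ontology), not in the markup itself.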


Dr. Marc Schröder, Senior Researcher at DFKI GmbH
Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
Portal Editor http://emotion-research.net
Team Leader DFKI Speech Group http://mary.dfki.de
Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque

Homepage: http://www.dfki.de/~schroed
Email: schroed@dfki.de
Phone: +49-681-302-5303
Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
Saarbrücken, Germany
Official DFKI coordinates:
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
Received on Tuesday, 13 May 2008 07:14:20 UTC
