
Re: action tendencies and expressions

From: Ian Wilson <ian@emotionai.com>
Date: Mon, 02 Jun 2008 04:37:11 -0500
To: Dylan Evans <evansd66@googlemail.com>, Marc Schroeder <schroed@dfki.de>
Cc: Catherine Pelachaud <pelachaud@iut.univ-paris8.fr>, Bill Jarrold <jarrold@ai.sri.com>, Catherine Pelachaud <catherine.pelachaud@inria.fr>, "Burkhardt, Felix" <Felix.Burkhardt@t-systems.com>, Enrico Zovato <enrico.zovato@loquendo.com>, Kostas Karpouzis <kkarpou@softlab.ece.ntua.gr>, Nestor Garay <nestor.garay@ehu.es>, Idoia Zearreta <icearreta001@ikasle.ehu.es>, Christian Peter <Christian.Peter@igd-r.fraunhofer.de>, public-xg-emotion@w3.org
Message-Id: <17861.1212399431@orgoo.com>

Dylan,

Would you like to handle the paper section about action tendencies? You have made some pertinent points.

On the topic: in my system, action tendencies are essentially internal values, not the actions themselves, although the line between internal and external is fuzzy. As a concrete example:

    Tendency_Approach_Unknown_Object = 0.77

So this is a high-level concept, but the actual implementation or action is not specified. This is how I define internal and external: the concept or tendency is internal, and the action or behavior is external.
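In markup terms, a tendency like this might be carried as a pure value, leaving the realisation unspecified. A purely hypothetical sketch (the element and attribute names here are invented, not agreed EMOXG syntax):

```xml
<!-- Hypothetical sketch only: element and attribute names are invented,
     not agreed EMOXG syntax. The tendency is an internal value; the
     overt action it might trigger is deliberately left unspecified. -->
<emotion>
  <action-tendency name="approach" object="unknown-object" value="0.77"/>
</emotion>
```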

Best,

Ian


 On Thu May 29 1:15 PM, Marc Schroeder <schroed@dfki.de> sent:
Hi Dylan,
 
 thanks for pushing this point -- it is good to repeat this discussion 
 from last year because it is a crucial issue, and it is good that 
 everybody is aware of the reasons for choices.
 
 The first issue is what we ended up calling "observable behaviour". It 
 means what would be called "input" in emotion recognition scenarios, and 
 "output" in emotion generation scenarios, i.e. facial expressions, 
 physiological parameters, colours, flashing lights, you name it. This is 
 generically unlimited, and so we decided to consciously push it out of 
 the emotion markup language and simply refer to it through one of the 
 "links to the rest of the world" -- see the third bullet point in [1].
 
 [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements/#LinkSemantics
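To make the idea concrete, such a link might look roughly like this. This is a hypothetical sketch: the EMOXG syntax was not yet fixed at the time, and the element and attribute names here are invented, not taken from any specification:

```xml
<!-- Hypothetical sketch: the emotion annotation stays modality-neutral
     and merely points at an external description of the observable
     behaviour (a FACS coding, a robot light pattern, ...). -->
<emotion category="sadness">
  <link role="expressedBy" uri="http://example.org/behaviour/red-lights.xml"/>
</emotion>
```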
 
 The second issue is action tendencies. As I understand it, from the 
 literature where Frijda has introduced the concept, the emphasis is on 
 "tendency", i.e. an urge, internal to the organism, to perform a certain 
 behaviour. This can be totally suppressed, and not become apparent to 
 the outside world, so it is conceptually different from the "observable 
 behaviour". One is the *urge* to act, the other is the observable 
 action. As I understand it, Ian was suggesting that it makes sense for a 
 generation system to be able to model such an urge, because it may help 
 the system decide which overt action to generate.
 
 In emotion theory, the action tendency is part of the "multi-faceted 
 syndrome" emotion, just like Appraisals, Physiology, Feelings, and 
 Expressions -- see the Figure in [2], where we have attempted to 
 illustrate how the language takes these various aspects into account.
 
 [2] http://www.w3.org/2005/Incubator/emotion/XGR-emotion/#Assessment
 
 
 Does that make sense?
 
 Best,
 Marc
 
 
 Dylan Evans wrote:
 > Hi Catherine,
 > 
 > OK, I take your point about that.  But in that case, how can we make a 
 > principled argument for including action tendencies while excluding all 
 > other motor output such as facial expression, vocal signals, gesture, etc?
 > 
 > After all, action tendencies are equally complex, probably more complex 
 > than all other motor output combined.  Conversely, the other forms of 
 > motor output are just the categories you have mentioned - facial 
 > expressions, vocal quality, gesture and body quality - and no more.
 > 
 > So, in my view, we either exclude all motor output or make some kind of 
 > provision to include all forms of motor output.
 > 
 > Note that for robots, unlike humans, the expression of emotions need not 
 > involve motor output - it could involve flashing lights, for example.
 > 
 > Let's say we are observing a robot flash red lights.  We think that this 
 > means that it is sad, but we are not sure.  If we have scope in EML for 
 > encoding this signal, alongside the inferred emotion, then if we 
 > discover later that red lights mean anger, we can easily correct the 
 > encoding.  Likewise, mutatis mutandis, for an animal wagging its tail.
 > 
 > Best wishes,
 > 
 > Dylan
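Dylan's robot scenario shows why keeping the observed signal separate from the inferred emotion pays off. A hypothetical annotation (invented names only, not agreed syntax) might look like:

```xml
<!-- Hypothetical sketch, invented names only. Because the observed
     signal is recorded separately from the inferred emotion, the
     category can later be corrected to "anger" without touching the
     record of what was actually observed. -->
<emotion category="sadness" confidence="0.4">
  <link role="inferredFrom" uri="http://example.org/robot/red-lights-clip.xml"/>
</emotion>
```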
 > 
 > On Thu, May 29, 2008 at 12:30 AM, Catherine Pelachaud wrote:
 > 
 >     Hi Dylan,
 > 
 >     The problem with including facial expression in the language is the
 >     explosion of things to include: vocal description, emotional
 >     gesture, body quality... The quantity of information needed to
 >     characterize bodily expressions of emotions is vast. Including
 >     them will explode the language!
 >     Best,
 > 
 >     Catherine
 > 
 >     Dylan Evans wrote:
 > 
 >         Hi Catherine,
 > 
 >         The precise details of how to encode, say, a smile or a frown
 >         could be left to a standard like MPEG-4 or FACS.  But this
 >         would only handle human-like facial expressions.  It wouldn't
 >         handle robot-specific expressions such as moving ears, flashing
 >         lights, etc.  So we could have some high-level feature in which
 >         people could specify the kind of expression associated with a
 >         given emotion (e.g. smile/flash blue lights).  If this was a
 >         humanlike facial expression, the details could then be handled
 >         by MPEG-4 or FACS (which would take "smile" as input and
 >         transform that into specific facial action units etc.).  That's
 >         assuming we are interested in the generation of facial
 >         expressions in artificial agents.  But we might want to include
 >         a facial expression feature in EML so that people or computers
 >         who are tagging video data can say what made them infer a
 >         particular emotion category without having to go into the
 >         details of FACS.
 > 
 >         I'm just thinking out loud, but it only struck me today that it
 >         appears rather inconsistent to include a category for behaviour
 >         tendency but not for facial expression.  Almost all the
 >         proposed core features deal with what we might call internal
 >         aspects of emotion - type of emotion, emotion intensity,
 >         appraisal etc.  If we wanted EML to handle just these internal
 >         aspects, and let other standards like FACS etc. handle external
 >         aspects, then it is strange to include an external aspect like
 >         action tendency in the current requirements list.  On the other
 >         hand, if we include action tendency in the list, it is strange
 >         to exclude other external aspects such as facial expression.
 > 
 >         Does anyone else feel perplexed by this, or am I on the wrong track?
 > 
 >         Dylan
 > 
 >         On Wed, May 28, 2008 at 3:25 PM, Catherine Pelachaud wrote:
 > 
 >             Dear all,
 > 
 >                 Expression does now seem odd, but again it is very
 >                 implementational. What did we decide on this? My
 >                 memory is vague.
 > 
 >             From what I can recall, it has been decided that any
 >             visual and acoustic expression of emotion be specified
 >             outside of EMOXG. There already exist some standards, such
 >             as MPEG-4 and H-Anim, as well as widely used annotation
 >             schemes such as FACS. In the ECA community there is quite
 >             a lot of work on developing a 'standard' representation
 >             language for behaviours (and another one for communicative
 >             functions).
 > 
 >             best,
 >             Catherine
 > 
 >                 Best,
 > 
 >                 Ian
 > 
 > 
 >                 On Wed May 28 2:48 PM, "Dylan Evans" sent:
 >                 Hi,
 > 
 >                 I'd be happy to contribute a short discussion of core
 >                 5: action tendencies, unless Bill or Ian wants to do
 >                 this (it was either Bill or Ian who suggested that
 >                 this be part of the core, I think). There are some
 >                 interesting difficulties with this requirement. One of
 >                 them concerns the level at which behaviour should be
 >                 specified; another is the dependency of action
 >                 tendencies on the effectors available to the system,
 >                 which vary hugely. Another is the distinction between
 >                 action tendencies and expression. For example, is the
 >                 movement of wizkid's "head" an action tendency or an
 >                 expression? See
 > 
 >                 http://www.wizkid.info/en/page12.xml
 > 
 >                 Come to think of it, we don't have a category for
 >                 expressions at all in the core requirements. That
 >                 seems really odd to me now, given that we have a
 >                 category for action tendencies. Some robots express
 >                 emotions by means of different coloured lights, while
 >                 others do so by moving their ears, for example, so it
 >                 would be good to give robotic designers the means to
 >                 register these possibilities in the EML.
 > 
 >                 Dylan
 > 
 >                 On Wed, May 28, 2008 at 8:59 AM, Marc Schroeder wrote:
 > 
 >                     Hi,
 > 
 >                     this email goes to all those who have participated
 >                     in the preparation and discussion of the
 >                     prioritised requirements document [1].
 > 
 >                     I think it would be nice to write a short paper on
 >                     the progress we have made in the EMOXG, for the
 >                     workshop "Emotion and Computing" [2] at the KI2008
 >                     conference. That is a small workshop aimed at
 >                     promoting discussion, so bringing in our "2 cents"
 >                     seems worthwhile.
 > 
 >                     Deadline is 6 June; target length is 4-8 pages in
 >                     Springer LNCS format, i.e. not much space.
 >                     Tentative title:
 > 
 >                     "What is most important for an Emotion Markup Language?"
 > 
 >                     The idea would be to report on the result of our
 >                     priority discussions. A main section could
 >                     describe the mandatory requirements in some detail
 >                     and the optional ones in less detail; a shorter
 >                     discussion section could point out some of the
 >                     issues that were raised on the mailing list
 >                     (scales, intention for state-of-the-art or beyond).
 > 
 >                     Who would be willing to help write the paper?
 >                     Please also suggest which section you could
 >                     contribute to. Active participation would be a
 >                     precondition for being listed as an author, and we
 >                     should try to find an order of authorship that
 >                     fairly represents the amount of participation (in
 >                     the previous discussion and in paper writing).
 > 
 >                     Best wishes,
 >                     Marc
 > 
 >                     [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements
 >                     [2] http://www.emotion-and-computing.de/
 > -- 
 > --------------------------------------------
 > Dr. Dylan Evans
 > Senior Research Scientist
 > Cork Constraint Computation Centre (4C)
 > University College Cork,
 > Cork, Ireland.
 > 
 > Tel: +353-(0)21-4255408
 > Fax: +353-(0)21-4255424
 > Email: d.evans@4c.ucc.ie
 > Web: http://4c.ucc.ie
 > http://www.dylan.org.uk
 > --------------------------------------------
 
 -- 
 Dr. Marc Schröder, Senior Researcher at DFKI GmbH
 Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
 Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
 Portal Editor http://emotion-research.net
 Team Leader DFKI Speech Group http://mary.dfki.de
 Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque
 
 Homepage: http://www.dfki.de/~schroed
 Email: schroed@dfki.de
 Phone: +49-681-302-5303
 Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
 Saarbrücken, Germany
 --
 Official DFKI coordinates:
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
 Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
 Geschaeftsfuehrung:
 Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
 Dr. Walter Olthoff
 Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 

-------
Sent from Orgoo.com - Your communications cockpit!
Received on Monday, 2 June 2008 09:37:59 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Monday, 2 June 2008 09:38:00 GMT