Re: action tendencies and expressions

Hi Dylan,

thanks for pushing this point -- it is good to revisit this discussion 
from last year, because it is a crucial issue and everybody should be 
aware of the reasons for our choices.

The first issue is what we ended up calling "observable behaviour". This 
covers what would be called "input" in emotion recognition scenarios and 
"output" in emotion generation scenarios: facial expressions, 
physiological parameters, colours, flashing lights, you name it. This 
set is open-ended by nature, so we made a conscious decision to keep it 
out of the emotion markup language and simply refer to it through one of 
the "links to the rest of the world" -- see the third bullet point in [1].

[1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements/#LinkSemantics
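
To make this concrete, here is a purely hypothetical sketch -- all 
element and attribute names below are invented for illustration, not 
agreed EMOXG syntax. The point is simply that the annotation itself 
contains no description of the observable behaviour; it only links out 
to resources that encode it in their own formats:

  <!-- Hypothetical sketch; invented names, not agreed EMOXG syntax. -->
  <emotion category="sadness" confidence="0.5">
    <!-- The observable behaviour (video segment, FACS coding, robot
         lights) is not described here, only referenced: -->
    <link role="expressedBy" href="clip0042.avi#t=12,15"/>
    <link role="expressedBy" href="annotations/clip0042.facs"/>
  </emotion>

Incidentally, this would also accommodate Dylan's robot example below: 
if we later learn that the flashing red lights mean anger rather than 
sadness, only the category attribute needs correcting, while the links 
to the observed signal stay untouched.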

The second issue is action tendencies. As I understand it from the 
literature in which Frijda introduced the concept, the emphasis is on 
"tendency", i.e. an urge, internal to the organism, to perform a certain 
behaviour. This urge can be totally suppressed, and not become apparent to 
the outside world, so it is conceptually different from the "observable 
behaviour". One is the *urge* to act, the other is the observable 
action. As I understand it, Ian was suggesting that it makes sense for a 
generation system to be able to model such an urge, because it may help 
the system decide which overt action to generate.
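
Again as a purely hypothetical sketch (invented names, not agreed 
syntax), a generation system could represent the urge separately from 
whatever overt behaviour it finally selects:

  <!-- Hypothetical sketch; invented names, not agreed EMOXG syntax. -->
  <emotion category="anger" intensity="0.8">
    <!-- The internal urge, present even if fully suppressed: -->
    <action-tendency name="attack" strength="0.7"/>
  </emotion>
  <!-- Whether an overt action results, and what it looks like, would
       be decided by the system's behaviour planner and represented
       outside the emotion markup. -->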

In emotion theory, the action tendency is one facet of the 
"multi-faceted syndrome" that constitutes an emotion, alongside 
Appraisals, Physiology, Feelings, and Expressions -- see the Figure in 
[2], where we have attempted to illustrate how the language takes these 
various aspects into account.

[2] http://www.w3.org/2005/Incubator/emotion/XGR-emotion/#Assessment
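
In the same hypothetical notation, a single annotated emotion could 
then carry several of these facets at once, with the observable 
expression again only linked to, never described:

  <!-- Hypothetical sketch; invented names, not agreed EMOXG syntax. -->
  <emotion category="fear">
    <appraisal name="novelty" value="high"/>
    <action-tendency name="avoidance" strength="0.6"/>
    <link role="expressedBy" href="session1-physio.xml#heartRate"/>
  </emotion>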


Does that make sense?

Best,
Marc


Dylan Evans wrote:
> Hi Catherine,
> 
> OK, I take your point about that.  But in that case, how can we make a 
> principled argument for including action tendencies while excluding all 
> other motor output such as facial expression, vocal signals, gesture, etc?
> 
> After all, action tendencies are equally complex, probably more complex 
> than all other motor output combined.  Conversely, the other forms of 
> motor output are just the categories you have mentioned - facial 
> expressions, vocal quality, gesture and body quality - and no more. 
> 
> So, in my view, we either exclude all motor output or make some kind of 
> provision to include all forms of motor output.
> 
> Note that for robots, unlike humans, the expression of emotions need not 
> involve motor output - it could involve flashing lights, for example.
> 
> Let's say we are observing a robot flash red lights.  We think that this 
> means that it is sad, but we are not sure.  If we have scope in EML for 
> encoding this signal, alongside the inferred emotion, then if we 
> discover later that red lights mean anger, we can easily correct the 
> encoding.  Likewise, mutatis mutandis, for an animal wagging its tail.
> 
> Best wishes,
> 
> Dylan
> 
> On Thu, May 29, 2008 at 12:30 AM, Catherine Pelachaud 
> <pelachaud@iut.univ-paris8.fr> wrote:
> 
> 
>     Hi Dylan,
> 
>     The problem with including facial expression in the language is the
>     explosion of things to include: vocal description, emotional
>     gesture, body quality... The quantity of information needed to
>     characterize bodily expressions of emotions is vast. Including them
>     would explode the language!
>     Best,
> 
>     Catherine
> 
>         Dylan Evans wrote:
> 
>         Hi Catherine,
> 
>         The precise details of how to encode, say, a smile or a frown
>         could be left to a standard like MPEG-4 or FACS.  But this
>         would only handle human-like facial expressions.  It wouldn't
>         handle robot-specific expressions such as moving ears, flashing
>         lights, etc.  So we could have some high-level feature in which
>         people could specify the kind of expression associated with a
>         given emotion (e.g. smile/flash blue lights).  If this were a
>         humanlike facial expression, the details could then be handled
>         by MPEG-4 or FACS (which would take "smile" as input and
>         transform that into specific facial action units etc.).  That's
>         assuming we are interested in the generation of facial
>         expressions in artificial agents.  But we might want to include
>         a facial expression feature in EML so that people or computers
>         who are tagging video data can say what made them infer a
>         particular emotion category without having to go into the
>         details of FACS.
> 
>         I'm just thinking out loud, but it only struck me today that it
>         appears rather inconsistent to include a category for behaviour
>         tendency but not for facial expression.  Almost all the
>         proposed core features deal with what we might call internal
>         aspects of emotion - type of emotion, emotion intensity,
>         appraisal etc.  If we wanted EML to handle just these internal
>         aspects, and let other standards like FACS etc. handle external
>         aspects, then it is strange to include an external aspect like
>         action tendency in the current requirements list.  On the other
>         hand, if we include action tendency in the list, it is strange
>         to exclude other external aspects such as facial expression.
> 
>         Does anyone else feel perplexed by this, or am I on the wrong track?
> 
>         Dylan
> 
>         On Wed, May 28, 2008 at 3:25 PM, Catherine Pelachaud
>         <pelachaud@iut.univ-paris8.fr> wrote:
>          
> 
>             Dear all,
> 
>                
> 
>                 Expression does now seem odd, but again it is very
>                 implementational. What did we decide on this? My
>                 memory is vague.
>                      
> 
>             From what I can recall, it has been decided that any
>             visual and acoustic expression of emotion be specified
>             outside of EMOXG. There already exist some standards, such
>             as MPEG-4 and H-anim, as well as a widely used annotation
>             scheme, FACS. In the ECA community there is quite a lot of
>             work on developing a 'standard' representation language
>             for behaviours (and another one for communicative
>             functions).
> 
>             best,
>             Catherine
>                
> 
>                 Best,
> 
>                 Ian
> 
> 
>                 On Wed May 28 2:48 PM , "Dylan Evans"
>                 <evansd66@googlemail.com> sent:
>                 Hi,
> 
>                 I'd be happy to contribute a short discussion of core
>                 5: action tendencies, unless Bill or Ian wants to do
>                 this (it was either Bill or Ian who suggested that this
>                 be part of the core, I think). There are some
>                 interesting difficulties with this requirement. One of
>                 them concerns the level at which behaviour should be
>                 specified; another is the dependency of action
>                 tendencies on the effectors available to the system,
>                 which have huge variation. Another is the distinction
>                 between action tendencies and expression. For example,
>                 is the movement of wizkid's "head" an action tendency
>                 or an expression? See
> 
>                 http://www.wizkid.info/en/page12.xml
> 
>                 Come to think of it, we don't have a category for
>                 expressions at all in the core requirements. That seems
>                 really odd to me now, given that we have a category for
>                 action tendencies. Some robots express emotions by
>                 means of different coloured lights, while others do so
>                 by moving their ears, for example, so it would be good
>                 to give robot designers the means to register these
>                 possibilities in the EML.
> 
>                 Dylan
> 
>                 On Wed, May 28, 2008 at 8:59 AM, Marc Schroeder wrote:
>                      
> 
>                     Hi,
> 
>                     this email goes to all those who have participated
>                     in the preparation and discussion of the
>                     prioritised requirements document [1].
> 
>                            
> 
>                     I think it would be nice to write a short paper on
>                     the progress we have made in the EMOXG, for the
>                     workshop "Emotion and Computing" [2] at the KI2008
>                      
> 
>                     conference. That is a small workshop aimed at
>                     promoting discussion, so
>                     bringing in our "2 cents" seems worthwhile.
> 
>                            
> 
>                     Deadline is 6 June; target length is 4-8 pages in
>                     Springer LNCS format, i.e.
>                      
> 
>                     not much space. Tentative title:
> 
>                     "What is most important for an Emotion Markup Language?"
> 
>                     The idea would be to report on the result of our
>                     priority discussions. A main section could describe
>                     the mandatory requirements in some detail and the
>                     optional ones in less detail; a shorter discussion
>                     section could point out some of the issues that
>                     were raised on the mailing list (scales, intention
>                     for state-of-the-art or beyond).
> 
>                     Who would be willing to help write the paper?
>                     Please also suggest which section you could
>                     contribute to. Active participation would be a
>                     precondition for being listed as an author, and we
>                     should try to find an order of authorship that
>                     fairly represents the amount of participation (in
>                     the previous discussion and in paper writing).
> 
>                     Best wishes,
>                     Marc
> 
> 
> 
>                            
> 
>                     [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements
>                     [2] http://www.emotion-and-computing.de/
> 
> -- 
> --------------------------------------------
> Dr. Dylan Evans
> Senior Research Scientist
> Cork Constraint Computation Centre (4C)
> University College Cork,
> Cork, Ireland.
> 
> Tel: +353-(0)21-4255408
> Fax: +353-(0)21-4255424
> Email: d.evans@4c.ucc.ie
> Web: http://4c.ucc.ie
> http://www.dylan.org.uk
> --------------------------------------------

-- 
Dr. Marc Schröder, Senior Researcher at DFKI GmbH
Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
Portal Editor http://emotion-research.net
Team Leader DFKI Speech Group http://mary.dfki.de
Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque

Homepage: http://www.dfki.de/~schroed
Email: schroed@dfki.de
Phone: +49-681-302-5303
Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
Saarbrücken, Germany
--
Official DFKI coordinates:
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Geschaeftsfuehrung:
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
