Re: action tendencies and expressions

Dylan,

Do you want to handle the paper section about action tendencies, since you have made some pertinent points?

On the topic: in my system, action tendencies are essentially internal values and not the actions themselves, although the line between internal and external is fuzzy. As a concrete example:

    Tendency_Approach_Unknown_Object = 0.77

So this is a high-level concept, but the actual implementation or action is not specified. This is how I define internal and external: the concept or tendency is internal, and the action or behavior is external.
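
To make the separation concrete, here is a rough sketch of how such tendency values might sit behind a separate behaviour layer. This is purely illustrative plain Python, not my actual implementation; the names, threshold and action map are invented for this email:

    # Illustrative only: names, threshold and action map are invented.
    # Internal state: graded urges; no concrete actions are implied at this level.
    tendencies = {
        "approach_unknown_object": 0.77,
        "avoid_unknown_object": 0.10,
    }

    # External layer: maps a tendency onto an effector-specific behaviour.
    # A different body (wheels, ears, coloured lights) would just use a different map.
    ACTION_MAP = {
        "approach_unknown_object": "drive_forward_slowly",
        "avoid_unknown_object": "back_away",
    }

    def select_action(tendencies, threshold=0.5):
        """Pick the strongest tendency; below the threshold the urge stays internal."""
        name, strength = max(tendencies.items(), key=lambda kv: kv[1])
        if strength < threshold:
            return None  # the tendency exists but is never expressed
        return ACTION_MAP[name]

    print(select_action(tendencies))  # -> drive_forward_slowly

The point is simply that the numbers live on the internal side; only the mapping at the end commits to an external action, and that step can be suppressed entirely.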

Best,

Ian


 On Thu May 29 1:15 PM, Marc Schroeder <schroed@dfki.de> sent:
Hi Dylan,
 
 thanks for pushing this point -- it is good to repeat this discussion 
 from last year because it is a crucial issue, and it is good that 
 everybody is aware of the reasons for choices.
 
 The first issue is what we ended up calling "observable behaviour". It 
 means what would be called "input" in emotion recognition scenarios, and 
 "output" in emotion generation scenarios, i.e. facial expressions, 
 physiological parameters, colours, flashing lights, you name it. This is 
 generically unlimited, and so we decided to consciously push it out of 
 the emotion markup language and simply refer to it through one of the 
 "links to the rest of the world" -- see the third bullet point in [1].
 
 [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements/#LinkSemantics
 
 The second issue is action tendencies. As I understand it, from the 
 literature where Frijda has introduced the concept, the emphasis is on 
 "tendency", i.e. an urge, internal to the organism, to perform a certain 
 behaviour. This can be totally suppressed, and not become apparent to 
 the outside world, so it is conceptually different from the "observable 
 behaviour". One is the *urge* to act, the other is the observable 
 action. As I understand it, Ian was suggesting that it makes sense for a 
 generation system to be able to model such an urge, because it may help 
 the system decide which overt action to generate.
 
 In emotion theory, the action tendency is part of the "multi-faceted 
 syndrome" emotion, just like Appraisals, Physiology, Feelings, and 
 Expressions -- see the Figure in [2], where we have attempted to 
 illustrate how the language takes these various aspects into account.
 
 [2] http://www.w3.org/2005/Incubator/emotion/XGR-emotion/#Assessment
 
 
 Does that make sense?
 
 Best,
 Marc
 
 
 Dylan Evans wrote:
 > Hi Catherine,
 > 
 > OK, I take your point about that.  But in that case, how can we make a 
 > principled argument for including action tendencies while excluding all 
 > other motor output such as facial expression, vocal signals, gesture, etc?
 > 
 > After all, action tendencies are equally complex, probably more complex 
 > than all other motor output combined.  Conversely, the other forms of 
 > motor output are just the categories you have mentioned - facial 
 > expressions, vocal quality, gesture and body quality - and no more. 
 > 
 > So, in my view, we either exclude all motor output or make some kind of 
 > provision to include all forms of motor output.
 > 
 > Note that for robots, unlike humans, the expression of emotions need not 
 > involve motor output - it could involve flashing lights, for example.
 > 
 > Let's say we are observing a robot flash red lights.  We think that this 
 > means that it is sad, but we are not sure.  If we have scope in EML for 
 > encoding this signal, alongside the inferred emotion, then if we 
 > discover later that red lights mean anger, we can easily correct the 
 > encoding.  Likewise, mutatis mutandis, for an animal wagging its tail.
 > 
 > Best wishes,
 > 
 > Dylan
 > 
 > On Thu, May 29, 2008 at 12:30 AM, Catherine Pelachaud wrote:
 > 
 > 
 >     Hi Dylan,
 > 
 >     The problem with including facial expression in the language is the
 >     sheer number of things to include: vocal description, emotional
 >     gesture, body quality... The quantity of information needed to characterize
 >     bodily expressions of emotions can be very vast. Including them all would
 >     explode the language!
 >     Best,
 > 
 >     Catherine
 > 
 >     Dylan Evans wrote:
 > 
 >         Hi Catherine,
 > 
 >         The precise details of how to encode, say, a smile or a frown
 >         could be
 >         left to a standard like MPEG-4 or FACS.  But this would only handle
 >         human-like facial expressions.  It wouldn't handle robot-specific
 >         expressions such as moving ears, flashing lights, etc.  So we could
 >         have some high-level feature in which people could specify the
 >         kind of
 >         expression associated with a given emotion (e.g. smile/flash blue
 >         lights).  If this were a humanlike facial expression, the details
 >         could
 >         then be handled by MPEG-4 or FACS (which would take "smile" as input
 >         and transform that into specific facial action units etc.).  That's
 >         assuming we are interested in the generation of facial
 >         expressions in
 >         artificial agents.  But we might want to include a facial expression
 >         feature in EML so that people or computers who are tagging video
 >         data
 >         can say what made them infer a particular emotion category without
 >         having to go into the details of FACS.
 > 
 >         I'm just thinking out loud, but it only struck me today that it
 >         appears rather inconsistent to include a category for behaviour
 >         tendency but not for facial expression.  Almost all the proposed
 >         core
 >         features deal with what we might call internal aspects of emotion -
 >         type of emotion, emotion intensity, appraisal etc.  If we wanted EML
 >         to handle just these internal aspects, and let other standards like
 >         FACS etc. handle external aspects, then it is strange to include an
 >         external aspect like action tendency in the current requirements
 >         list.
 >          On the other hand, if we include action tendency in the list, it is
 >         strange to exclude other external aspects such as facial expression.
 > 
 >         Does anyone else feel perplexed by this, or am I on the wrong track?
 > 
 >         Dylan
 > 
 >         On Wed, May 28, 2008 at 3:25 PM, Catherine Pelachaud wrote:
 >          
 > 
 >             Dear all,
 > 
 >                
 > 
 >                 Expression does now seem odd, but again it is very
 >                 implementational. What
 >                 did we decide on this? My memory is vague.
 >                      
 > 
 >             From what I can recall, it has been decided that any
 >             visual and acoustic
 >             expression of emotion be specified outside of EMOXG. There
 >             already exist
 >             some standards, such as MPEG-4 and H-anim, as well as a widely used
 >             annotation scheme,
 >             FACS. In the ECA community there is quite a lot of work on
 >             developing a
 >             'standard' representation language for behaviors (and
 >             another one for
 >             communicative functions).
 > 
 >             best,
 >             Catherine
 >                
 > 
 >                 Best,
 > 
 >                 Ian
 > 
 > 
 >                 On Wed May 28 2:48 PM, "Dylan Evans" sent:
 >                 Hi,
 > 
 >                 I'd be happy to contribute a short discussion of core 5:
 >                 action
 >                 tendencies, unless Bill or Ian wants to do this (it was
 >                 either Bill or
 >                 Ian who suggested that this be part of the core, I
 >                 think). There are
 >                 some interesting difficulties with this requirement. One
 >                 of them
 >                 concerns the level at which behaviour should be
 >                 specified; another is
 >                 the dependency of action tendencies on the effectors
 >                 available to the
 >                 system, which have huge variation. Another is the
 >                 distinction between
 >                 action tendencies and expression. For example, is the
 >                 movement of
 >                 wizkid's "head" an action tendency or an
 >                 expression? See
 > 
 >                 http://www.wizkid.info/en/page12.xml
 > 
 >                 Come to think of it, we don't have a category for
 >                 expressions at all
 >                 in the core requirements. That seems really odd to me
 >                 now, given that
 >                 we have a category for action tendencies. Some robots
 >                 express
 >                 emotions by means of different coloured lights, while
 >                 others do so by
 >                 means of moving their ears, for example, so it would be
 >                 good to give
 >                 robot designers the means to register these
 >                 possibilities in the
 >                 EML.
 > 
 >                 Dylan
 > 
 >                 On Wed, May 28, 2008 at 8:59 AM, Marc Schroeder wrote:
 >                      
 > 
 >                     Hi,
 > 
 >                     this email goes to all those who have participated
 >                     in the preparation
 >                     and
 >                     discussion of the prioritised requirements document [1].
 > 
 >                            
 > 
 >                     I think it would be nice to write a short paper on the progress
 >                     we have made in the EMOXG, for the workshop "Emotion and
 >                     Computing" [2] at the KI2008
 >                     conference. That is a small workshop aimed at
 >                     promoting discussion, so
 >                     bringing in our "2 cents" seems worthwhile.
 > 
 >                            
 > 
 >                     Deadline is 6 June; target length is 4-8 pages in Springer LNCS
 >                     format, i.e.
 >                     not much space. Tentative title:
 > 
 >                     "What is most important for an Emotion Markup Language?"
 > 
 >                     The idea would be to report on the result of our
 >                     priority discussions. A
 >                     main section could describe the mandatory
 >                     requirements in some detail
 >                     and
 >                     the optional ones in less detail; a shorter
 >                     discussion section could
 >                     point
 >                     out some of the issues that were raised on the
 >                     mailing list (scales,
 >                     intention for state-of-the-art or beyond).
 > 
 >                     Who would be willing to help write the paper? Please
 >                     also suggest which
 >                     section you could contribute to. Active
 >                     participation would be a
 >                     precondition for being listed as an author, and we
 >                     should try to find an
 >                     order of authorship that fairly represents the
 >                     amount of participation
 >                     (in
 >                     the previous discussion and in paper writing).
 > 
 >                     Best wishes,
 >                     Marc
 > 
 > 
 > 
 >                            
 > 
 >                     [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements
 >                     [2] http://www.emotion-and-computing.de/
 >                      
 > 
 > 
 > -- 
 > --------------------------------------------
 > Dr. Dylan Evans
 > Senior Research Scientist
 > Cork Constraint Computation Centre (4C)
 > University College Cork,
 > Cork, Ireland.
 > 
 > Tel: +353-(0)21-4255408
 > Fax: +353-(0)21-4255424
 > Email: d.evans@4c.ucc.ie
 > Web: http://4c.ucc.ie
 > http://www.dylan.org.uk
 > --------------------------------------------
 
 -- 
 Dr. Marc Schröder, Senior Researcher at DFKI GmbH
 Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
 Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
 Portal Editor http://emotion-research.net
 Team Leader DFKI Speech Group http://mary.dfki.de
 Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque
 
 Homepage: http://www.dfki.de/~schroed
 Email: schroed@dfki.de
 Phone: +49-681-302-5303
 Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
 Saarbrücken, Germany
 --
 Official DFKI coordinates:
 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
 Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
 Geschaeftsfuehrung:
 Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
 Dr. Walter Olthoff
 Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
 Amtsgericht Kaiserslautern, HRB 2313
 


Received on Monday, 2 June 2008 09:37:59 UTC