
Re: A paper on progress from EMOXG at Emotion&Computing workshop

From: Bill Jarrold <jarrold@AI.SRI.COM>
Date: Wed, 28 May 2008 18:24:10 -0700
Message-Id: <3EC11AAE-34EA-4E43-B5F7-06B44F232D8F@AI.SRI.COM>
Cc: Dylan Evans <evansd66@googlemail.com>, ian@emotionai.com, Marc Schroeder <schroed@dfki.de>, Catherine Pelachaud <catherine.pelachaud@inria.fr>, "Burkhardt, Felix" <Felix.Burkhardt@t-systems.com>, Enrico Zovato <enrico.zovato@loquendo.com>, Kostas Karpouzis <kkarpou@softlab.ece.ntua.gr>, Nestor Garay <nestor.garay@ehu.es>, Idoia Zearreta <icearreta001@ikasle.ehu.es>, Christian Peter <Christian.Peter@igd-r.fraunhofer.de>, public-xg-emotion@w3.org
To: Catherine Pelachaud <pelachaud@iut.univ-paris8.fr>

Yes, indeed.  A characterization of all emotional bodily expressions
would be a vast, possibly never-ending enterprise!

This might be an argument in favor of using OWL.

Here is why I say this: someone out in the vast W3C community has
probably already created an OWL ontology of facial expressions.
The job of our markup language would simply be to provide a way to
plug into one of these pre-existing ontologies.  (By the way, Swoogle
is a search engine that should help you find such ontologies.  See
http://swoogle.umbc.edu/.)

Alternatively, maybe there is a way that XML can plug into such a
pre-existing OWL ontology?

Alternatively, we might carve out a small portion of bodily  
expression terms and make that a part of our standard?

Are there other classes of alternatives?
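
To make the first alternative concrete, here is a purely hypothetical
sketch of what plugging into an external OWL ontology might look like.
The element names, attributes, and ontology URIs below are invented for
illustration only and are not part of any agreed EMOXG syntax:

```xml
<!-- Hypothetical sketch only: an emotion annotation that refers to
     terms in external OWL ontologies by URI, rather than defining
     the facial/bodily expression vocabulary inside the markup
     language itself. -->
<emotion>
  <!-- emotion category taken from some existing emotion ontology -->
  <category ref="http://example.org/emotion.owl#Anger"/>
  <!-- bodily expression taken from some existing facial-expression
       ontology (e.g. one found via Swoogle) -->
  <expression ref="http://example.org/face.owl#BrowLowerer"/>
</emotion>
```

The point is only that the markup itself stays small: the heavy lifting
of characterizing expressions would live in the referenced ontologies.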

Bill

p.s. As a long-overdue to-do item, I owe the group a response to this
question: "How does OWL address the challenge problem posed by Marc a
few months back?"  I am hoping to get to this in the next few days.

p.p.s. I definitely do not want to be considered an evangelist for
OWL.  I do not know enough to compare it to the other options, i.e.
RDF versus XML.


On May 28, 2008, at 4:30 PM, Catherine Pelachaud wrote:

>
> Hi Dylan,
>
> The problem with including facial expressions in the language is the
> sheer number of things that would have to be included: vocal
> descriptions, emotional gestures, body quality...  The quantity of
> information needed to characterize bodily expressions of emotions can
> be vast.  Including them would make the language explode!
> Best,
>
> Catherine
>
> Dylan Evans a écrit :
>> Hi Catherine,
>>
>> The precise details of how to encode, say, a smile or a frown  
>> could be
>> left to a standard like MPEG-4 or FACS.  But this would only handle
>> human-like facial expressions.  It wouldn't handle robot-specific
>> expressions such as moving ears, flashing lights, etc.  So we could
>> have some high-level feature in which people could specify the  
>> kind of
>> expression associated with a given emotion (e.g. smile/flash blue
>> lights).  If this was a humanlike facial expression, the details  
>> could
>> then be handled by MPEG-4 or FACS (which would take "smile" as input
>> and transform that into specific facial action units etc.).  That's
>> assuming we are interested in the generation of facial expressions in
>> artificial agents.  But we might want to include a facial expression
>> feature in EML so that people or computers who are tagging video data
>> can say what made them infer a particular emotion category without
>> having to go into the details of FACS.
>>
>> I'm just thinking out loud, but it only struck me today that it
>> appears rather inconsistent to include a category for behaviour
>> tendency but not for facial expression.  Almost all the proposed core
>> features deal with what we might call internal aspects of emotion -
>> type of emotion, emotion intensity, appraisal etc.  If we wanted EML
>> to handle just these internal aspects, and let other standards like
>> FACS etc handle external aspects, then it is strange to include an
>> external aspect like action tendency in the current requirements  
>> list.
>>  On the other hand, if we include action tendency in the list, it is
>> strange to exclude other external aspects such as facial expression.
>>
>> Does anyone else feel perplexed by this, or am I on the wrong track?
>>
>> Dylan
>>
>> On Wed, May 28, 2008 at 3:25 PM, Catherine Pelachaud
>> <pelachaud@iut.univ-paris8.fr> wrote:
>>
>>> Dear all,
>>>
>>>
>>>> Expression does now seem odd, but again it is very
>>>> implementational.  What did we decide on this?  My memory is vague.
>>>>
>>> From what I can recall, it has been decided that any visual or
>>> acoustic expression of emotion should be specified outside of
>>> EMOXG.  There already exist some standards, such as MPEG-4 and
>>> H-Anim, as well as the widely used annotation scheme FACS.  In the
>>> ECA community there is quite a lot of work on developing a
>>> 'standard' representation language for behaviors (and another one
>>> for communicative functions).
>>>
>>> best,
>>> Catherine
>>>
>>>> Best,
>>>>
>>>> Ian
>>>>
>>>>
>>>> On Wed May 28 2:48 PM , "Dylan Evans" <evansd66@googlemail.com>  
>>>> sent:
>>>> Hi,
>>>>
>>>> I'd be happy to contribute a short discussion of core 5: action
>>>> tendencies, unless Bill or Ian wants to do this (it was either  
>>>> Bill or
>>>> Ian who suggested that this be part of the core, I think). There  
>>>> are
>>>> some interesting difficulties with this requirement. One of them
>>>> concerns the level at which behaviour should be specified;  
>>>> another is
>>>> the dependency of action tendencies on the effectors available  
>>>> to the
>>>> system, which have huge variation. Another is the distinction  
>>>> between
>>>> action tendencies and expression. For example, is the movement of
>>>> wizkid's "head" an action tendency or an
>>>> expression? See
>>>>
>>>> http://www.wizkid.info/en/page12.xml
>>>>
>>>> Come to think of it, we don't have a category for expressions at  
>>>> all
>>>> in the core requirements. That seems really odd to me now, given  
>>>> that
>>>> we have a category for action tendencies. Some robots express
>>>> emotions by means of different coloured lights, while others do  
>>>> so by
>>>> means of moving their ears, for example, so it would be good to
>>>> give robot designers the means to register these possibilities in
>>>> the EML.
>>>>
>>>> Dylan
>>>>
>>>> On Wed, May 28, 2008 at 8:59 AM, Marc Schroeder wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> this email goes to all those who have participated in the  
>>>>> preparation
>>>>> and
>>>>> discussion of the prioritised requirements document [1].
>>>>>
>>>>>
>>>>> I think it would be nice to write a short paper on the progress
>>>>> we have made in the EMOXG, for the workshop "Emotion and
>>>>> Computing" [2] at the KI2008
>>>>
>>>>> conference. That is a small workshop aimed at promoting  
>>>>> discussion, so
>>>>> bringing in our "2 cents" seems worthwhile.
>>>>>
>>>>>
>>>>> Deadline is 6 June; target length is 4-8 pages in Springer LNCS
>>>>> format, i.e.
>>>>
>>>>> not much space. Tentative title:
>>>>>
>>>>> "What is most important for an Emotion Markup Language?"
>>>>>
>>>>> The idea would be to report on the result of our priority  
>>>>> discussions. A
>>>>> main section could describe the mandatory requirements in some  
>>>>> detail
>>>>> and
>>>>> the optional ones in less detail; a shorter discussion section  
>>>>> could
>>>>> point
>>>>> out some of the issues that were raised on the mailing list  
>>>>> (scales,
>>>>> intention for state-of-the-art or beyond).
>>>>>
>>>>> Who would be willing to help write the paper? Please also  
>>>>> suggest which
>>>>> section you could contribute to. Active participation would be a
>>>>> precondition for being listed as an author, and we should try  
>>>>> to find an
>>>>> order of authorship that fairly represents the amount of  
>>>>> participation
>>>>> (in
>>>>> the previous discussion and in paper writing).
>>>>>
>>>>> Best wishes,
>>>>> Marc
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> [1] http://www.w3.org/2005/Incubator/emotion/XGR-requirements
>>>>> [2] http://www.emotion-and-computing.de/
>>>>
>>>>> --
>>>>>
>>>>> Dr. Marc Schröder, Senior Researcher at DFKI GmbH
>>>>> Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
>>>>> Chair W3C Emotion ML Incubator
>>>>> http://www.w3.org/2005/Incubator/emotion
>>>>> Portal Editor http://emotion-research.net
>>>>> Team Leader DFKI Speech Group http://mary.dfki.de
>>>>> Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque
>>>>> Homepage: http://www.dfki.de/~schroed
>>>>> Email: schroed@dfki.de
>>>>
>>>>> Phone: +49-681-302-5303
>>>>>
>>>>> Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3,
>>>>> D-66123 Saarbrücken, Germany
>>>>
>>>>> --
>>>>>
>>>>> Official DFKI coordinates:
>>>>> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
>>>>> Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
>>>>> Geschaeftsfuehrung:
>>>>> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
>>>>> Dr. Walter Olthoff
>>>>> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
>>>>> Amtsgericht Kaiserslautern, HRB 2313
>>>>
>>>> --
>>>> --------------------------------------------
>>>> Dr. Dylan Evans
>>>> Senior Research Scientist
>>>> Cork Constraint Computation Centre (4C)
>>>> University College Cork,
>>>> Cork, Ireland.
>>>>
>>>> Tel: +353-(0)21-4255408
>>>> Fax: +353-(0)21-4255424
>>>> Email: d.evans@4c.ucc.ie
>>>> Web: http://4c.ucc.ie
>>>> http://www.dylan.org.uk
>>>> --------------------------------------------
>>>>
>>>>
>>
>>
>>
>>
Received on Thursday, 29 May 2008 01:33:10 GMT
