[Fwd: Re: [EMOXG] Updated discussion document, feedback requested]

(forwarding this reply from Catherine to the list)

-------- Original Message --------
Subject: Re: [EMOXG] Updated discussion document, feedback requested
Date: Tue, 21 Oct 2008 14:23:18 +0200
From: Catherine Pelachaud <catherine.pelachaud@telecom-paristech.fr>
To: Marc Schroeder <schroed@dfki.de>
References: <48FC709B.9030105@dfki.de>

Hi,

I don't think my mail is reaching the EmoXG list, as I have a new email
address. I had sent some comments before the previous EmoXG meeting and
have just noticed that my email did not arrive.
I am resending my previous comments. Below I have tried to give my
thoughts on your questions.

Core 1: I would prefer to use a generic word 'affect' with different
types (emotion, mood, feeling, ...).
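To make that concrete, here is a sketch of what such a generic element
could look like (the element and attribute names are purely illustrative,
not proposed syntax):

    <affect type="emotion">
      <category name="anger"/>
    </affect>
    <affect type="mood">
      <category name="irritable"/>
    </affect>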

Core 3: Isn't it possible to retrieve from the dimension set
specification whether it is unipolar or bipolar?
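For instance, the polarity could be declared once in the dimension set
specification itself, so it never needs to be repeated in the annotation
(names invented for illustration):

    <dimension-set name="arousal-valence-example">
      <dimension name="arousal" polarity="unipolar"/>
      <dimension name="valence" polarity="bipolar"/>
    </dimension-set>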

Core 5: Couldn't we use one default set of action tendencies and use a
mapping table to adapt the action tendencies to other entities (e.g.
robot, virtual agent, ...)?
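Roughly what I have in mind, sketched with invented element names (not
proposed syntax): one default set, and per-entity mapping tables that
translate its entries.

    <action-tendency-set name="default">
      <tendency name="approach"/>
      <tendency name="avoid"/>
    </action-tendency-set>
    <!-- mapping table for one particular entity -->
    <tendency-mapping entity="robot">
      <map from="approach" to="move-towards-user"/>
      <map from="avoid" to="back-away"/>
    </tendency-mapping>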

Core 6:
Somehow I find it difficult to define the added value of having a
'complex emotion' tag without having a 'regulation' tag. That is, we
would consider only one type of complex emotion, namely the superposition
of several emotions. If so, we can express the superposition of several
emotions through the time information of each emotion.
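For example, a superposition could simply be two emotions whose time
spans overlap (attribute names invented for illustration):

    <emotion category="sadness" start="0.0" end="4.0" intensity="0.6"/>
    <emotion category="relief"  start="1.5" end="4.0" intensity="0.4"/>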

>
> * Global metadata -- individual key-value pairs or a parent element 
> with arbitrary sub-elements?
It seems quite difficult to define values for variables that could take
any type of value. The scope of these variables is really large.
Option 2 (a parent element with arbitrary sub-elements) seems better
suited to describe such a broad field.
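Sketched with invented names, option 2 would be something like a single
parent element whose children are left open:

    <metadata>
      <recording-condition>studio</recording-condition>
      <annotator id="A3" expertise="expert"/>
      <!-- any further project-specific sub-elements -->
    </metadata>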
> * Timing issues in Links to the rest of the world:
>   - do we need absolute time, or only relative time?
I would say both.
If we take an ECA as an example: we may need to generate an expression of
emotion 2 sec after the beginning of the interaction. This time is
absolute.
If we follow the SAIBA platform, the timings of communicative acts are
computed at the last step of the generation process. In the first step,
communicative intent and emotions are planned; then multimodal behaviors
are planned; and it is only at the third step that the multimodal
behaviors are realized. It is only at this stage that timing information
is computed. Before this last stage, if we go back to our example, we do
not know what 2 sec corresponds to.

But we can also imagine that we need to specify an emotion relative to a
certain event; then we need to be able to use relative timing.
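The two cases could look roughly like this (attributes invented for
illustration): an absolute time counted from the start of the
interaction, and a time relative to a named event:

    <emotion category="joy" start="2.0"/>
    <emotion category="surprise" start-ref="event:door-slam" offset="0.3"/>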
> 30.123") or number-of-milliseconds?
>   - is start+end or start+duration sufficient, or would you need more 
> fine-grained temporal landmarks such as onset+hold+decay?
How fine-grained do we want to go?
If you look at the work Tanja Bänziger has done annotating facial
expressions from the GEMEP corpus, and at the work of Christine Lisetti
that aimed to implement Scherer's model, you see 'onset-hold-decay' as
not fine-grained enough. Scherer believes that the SECs (stimulus
evaluation checks) are sequential, and that to each SEC corresponds a
facial action. Thus the expression of emotion is built (and thus shown on
the face) sequentially as the SECs are appraised. I am simplifying, of
course, but when Tanja annotated the GEMEP corpus, the facial actions did
not at all follow the nice trapezoid shape of onset-apex-offset.
Likewise, Lisetti raised such an issue in the work she presented at the
HUMAINE summer workshop in Genova in 2006.
However, most current ECA systems have implemented solely the trapezoid
shape...
So I do not know how far ahead we want to be, and whether to allow one to
specify time at such a fine grain that one could annotate and generate
facial actions linked to SECs...
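Just to illustrate the difference (all names invented, not proposed
syntax): the usual trapezoid parametrisation versus a sequence of timed
segments, one per SEC:

    <!-- trapezoid: a single onset-hold-decay envelope -->
    <emotion category="fear" onset="0.2" hold="1.0" decay="0.4"/>
    <!-- finer-grained: one timed segment per appraisal check -->
    <emotion category="fear">
      <segment sec="novelty" start="0.0" end="0.2"/>
      <segment sec="goal-conduciveness" start="0.2" end="0.6"/>
      <segment sec="coping-potential" start="0.6" end="1.0"/>
    </emotion>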
>
> * Semantic issues in Links:
>   - do we need any additional semantic roles, apart from 
> "experiencer", "trigger", "target" and "behaviour"?
>
Sorry, I do not have much of an idea about this issue...
>
> Also remember Enrico's question regarding Meta 2:
>
> * Modality:
>   - do you see a use for a "medium" attribute in addition to a "mode"?
From a 'generation' point of view, I do not see the need to
differentiate 'medium' and 'mode', but I do not know whether that also
holds from the 'annotation' or 'analysis' points of view.
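For reference, the distinction Enrico asked about would presumably look
something like this (all values invented for illustration), with 'mode'
as the expressive channel and 'medium' as the physical carrier:

    <modality mode="facial-expression" medium="visual"/>
    <modality mode="speech-prosody" medium="acoustic"/>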
>   - do you have a preference for how to specify multiple modalities?
Sorry, I do not see what the explicit annotation could add.
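The two obvious encodings, sketched here with invented syntax, would be a
space-separated list versus repeated elements:

    <!-- option A: several modes in one attribute value -->
    <modality mode="face voice"/>
    <!-- option B: one element per mode -->
    <modality mode="face"/>
    <modality mode="voice"/>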

Best,

Catherine
>
>
> Even if you cannot participate in the meeting, your input *before* the 
> meeting can be very helpful. Of course for the meeting participants, 
> it also makes a lot of sense to come prepared...! :-)
>
> Best wishes, looking forward to a fruitful meeting,
> Marc
>


-- 
Dr. Marc Schröder, Senior Researcher at DFKI GmbH
Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
Portal Editor http://emotion-research.net
Team Leader DFKI Speech Group http://mary.dfki.de
Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque

Homepage: http://www.dfki.de/~schroed
Email: schroed@dfki.de
Phone: +49-681-302-5303
Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123 
Saarbrücken, Germany
--
Official DFKI coordinates:
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
Management (Geschaeftsfuehrung):
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Chairman)
Dr. Walter Olthoff
Chairman of the Supervisory Board: Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313
