Re: [EMOXG] Requirements UseCase 1+2+3

Dear all,

Because of the frustrating fact that days only have 24 hours, I am  
not formally part of this working group. I do, however, read (most  
of) the emails that are sent on the mailing list.

This is why I thought you might be interested in the
following document: the slides of a presentation I gave in the
framework of HUMAINE about our work on the dimensionality of
emotions. We have several papers in preparation; I'll make sure to
keep you posted if you are interested.

<http://emotion-research.net/ws/summerschool3/RoeschFontaineScherer-06-Genova.pdf>

I hope this helps.
Best regards and happy end of the year!

-----
Etienne Roesch, PhD candidate / Teaching-Research Assistant
Geneva Emotion Research Group
University of Geneva (Office 5137) - Bd du Pont d'Arve 40
1205 Geneva - SWITZERLAND
Tel/Fax: +41 (0)22 379 92 27/19 - Cell: +41 (0)79 773 62 93
http://www.unige.ch/fapse/emotion/members/etienne/
http://www.affective-sciences.ch


On 23 Nov 06, at 23:05, Hannes Pirker wrote:

> Dear Jean-Claude, Christian, Ian
>
> Apologies for de-compartmentalizing the discussion of the different
> UCs.  I had a look at each of the tables you sent out, and I will try
> to comment on some of the specific labels below.
>
> But on the other hand there are also some 'global' observations.
>
> I tried to evaluate the amount of overlap between the different
> collections - simply by gluing the three UCs together on a single
> sheet of paper ;-)
>
> The interesting part of the whole discussion seems to be the question
> of how "broad" or "narrow" the annotation scheme should be.
>
> In EARL we went for the "narrow" approach, i.e. we tried to
> concentrate on describing Emotion-entities and not much else.
>
> In the current UCs we see 'broader' approaches, to differing
> degrees:
>
> In UC1 there are e.g. attempts to provide labels for
> describing the communicative setting: the individual, the
> interactional situation, the target of the emotion etc.
>
> In UC2: labels for describing the technical environment: sensors, the
> application, etc.
>
> In UC3: there are labels for Input Events and Output Events.
>
> -- 
>
> There are differences in the way this (in my opinion) more peripheral
> information is to be encoded: Jean-Claude & Christian heavily rely on
> 'simply' using pointers to external entities. That's also the approach
> we took in EmoLang, because it helps to keep the
> 'core'-representations tidy... and we do not end up having to come up
> with a representation format that has to be able to specify 'the whole
> world', as virtually 'everything' can have an influence on our
> emotions and the way we express them!
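>
> Just to make the pointer idea concrete, a fragment could look roughly
> like this (the element and attribute names, and the URIs, are invented
> purely for illustration; this is not actual EARL or EmoLang syntax):
>
>   <emotion category="anger" intensity="0.7">
>     <!-- peripheral information is only referenced, not described inline -->
>     <context person="http://example.org/subjects/subj-042.xml"
>              situation="http://example.org/recordings/session-17.xml"/>
>   </emotion>
>
> The core representation stays tidy, and everything 'peripheral' lives
> behind the pointers.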
>
> On the other hand, Kostas already put his finger on it: while pointers
> help us to 'export' the problem of finding common representations,
> they also diminish the value of the language, because there is no
> "standard" left on which an application can rely.
>
>
>
> The good news, on the other hand, is that there is already a great
> deal of agreement on the necessary labels for describing emotions
> themselves; the differences are mainly terminological, I think.
> Emotion Categories, Emotion Dimensions, Intensity, Regulation, Mixture
> of Emotions... we all seem to agree on these.
>
> (On Dimensions I would like to point to the current work of Etienne
> Roesch, Geneva, who is conducting the GRID study, where emotional
> terms in English, French & Flemish are carefully related / located in
> the dimensional space. And he is using an additional dimension,
> "Expectedness/Unpredictability", which seems to play an important
> role. So we should keep in mind that the 3-dimensional approach is
> not 'God-given'.)
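>
> For illustration, an annotation using such a fourth dimension could
> look roughly like this (the attribute names are invented for the sake
> of the example, not actual EARL syntax):
>
>   <!-- dimensional description with an additional "unpredictability" axis -->
>   <emotion valence="-0.3" arousal="0.8" power="0.2"
>            unpredictability="0.9"/>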
>
> <side-comment>
> If I understand Kostas' point on 'pointers' correctly,
> he is warning us: "what is a standard good for, if it (almost only)
> consists of bits that are NOT standardized?" We should keep the same
> point in mind when it comes to the discussion of the
> core-of-the-core elements such as emotion categories. Up to now we are
> carefully avoiding any discussion of what categories are to be
> used. In fact, in EMOLANG we definitely left it open to each
> individual user of EMOLANG to define his/her own 'dialect' (see the
> sketch right after this side-comment). Probably this is the only way
> to cope with the split-up of emotional theories & different
> application scenarios. But we should keep this topic in mind for
> future discussion: are there ways to ensure the *usability* of the
> language if we mostly impose restrictions on its syntax and not so
> much on its content?
> </side-comment>
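>
> To make the 'dialect' idea from the side-comment a bit more concrete,
> a user-defined category set could be declared and then referenced
> roughly like this (all names are invented for illustration, not actual
> EMOLANG syntax):
>
>   <!-- the user declares which category vocabulary ('dialect') is used -->
>   <category-set id="my-dialect">
>     <category name="irritation"/>
>     <category name="pride"/>
>     <category name="boredom"/>
>   </category-set>
>
>   <!-- annotations then refer to the declared set -->
>   <emotion category-set="my-dialect" category="irritation"/>
>
> The syntax would be standardized; the content (the actual category
> names) would not.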
>
> O.k., here are some remarks on the specific UCs:
>
> UC1 Jean-Claude
> ---
>
> I do not fully understand the distinction between "Requirements" and
> the items listed under "And for each emotion segment". Is
> "Requirements" to be specified only once per episode or
> database?
>
> If this is so, then I wonder:
>
> i) 'acting' and 'single or complex emotion': why are these not
> mentioned under "And for each emotion"?
>
> ii) media-type: why not move this up to "requirements"?
>  (btw. some lines seem to have wandered off in the table: media types
>  (text, audio, ...) are mentioned under "confidence"...)
>
>
> UC2 Christian
> ----
>
> I do not fully understand the following items:
>
> i) "super categories"?? Is this a matching of an emotion category to
> another, broader one? Maybe this should be done externally?
>
> ii) "time-span the emotion last" vs. "time stamp" ?? I think you
> yourself were not comletely sure on whether there are some
> redundancies in this?
>
> iii) "purpose of classification" : I also would like to see more
> concrete examples for this.
>
>
> UC3 Ian
> ----
>
> You already got some remarks concerning the representation of "Output
> Events"; Catherine pointed out BML for this purpose. I have to admit
> that while I tend to be slightly more 'tolerant' when it comes to the
> demands of UC1 and UC2 to include ways to encode additional
> information (e.g. pointers to recording situations or to
> person-descriptions), I am rather unhappy with the inclusion of
> "output events" in the EMOTION representation language itself, if this
> turns out to become another attempt to specify a 'Behaviour
> Representation Language' for avatars/ECAs. I am at least superficially
> familiar with the numerous attempts to do so (e.g. BML), and I would
> definitely not advise the EMOXG group to go for the 're-invention of
> the flat tire'!
>
> As for "Input Events", I have not yet figured out whether they are
> just another way of talking about appraisals. I do not really know
> enough about appraisals & how they should be made part of the
> EMO-lang to be really concerned about this topic.
>
> Hannes
>
> -- 
> Hannes Pirker -- Austrian Research Inst. for Artificial Intelligence --
> hannes(DOT)pirker~AT~ofai.at  +43/1/532 4621-3  www.ofai.at/~hannes.pirker --
>

Received on Friday, 22 December 2006 14:50:26 UTC