
Re: [EMOXG] Updated discussion document, feedback requested

From: Christian Peter <Christian.Peter@igd-r.fraunhofer.de>
Date: Wed, 22 Oct 2008 17:31:00 +0200
Message-ID: <48FF4734.1040103@igd-r.fraunhofer.de>
To: Marc Schroeder <schroed@dfki.de>
Cc: EMOXG-public <public-xg-emotion@w3.org>

Hi all,

find below my comments. Have fun in Cannes (and be productive)!

Christian

========

Core 1
-------
I am in favour of option 2. I agree that calling it "emotion" is 
misleading if it actually is a mood or an affect, but then: novices 
don't know about these distinctions, and experts should know how to 
deal with it. After all, it is just a tag name.

_If_ we want to look for another name for that tag, what about:

<state>, or
<ars> for Affect Related State?

Personally I would be in favour of <state> because it also fits 
states that some would not consider affect-related but that are 
interesting for a lot of applications, such as being stressed, 
interested, concentrated, eager, ...
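
As a sketch (just assuming the child elements stay as in the current 
<emotion> proposal; the set and name values are only placeholders):

<state>
     <category set="everyday" name="stressed"/>
</state>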

Core 2
------
I quite like the qnames version, it's shorter yet readable.

Core 3
------
I go with Catherine. This could be specified in the set description.


Core 5
------
Not really my field.


Core 6
------
Well, that's a complex topic :-)
First of all, it is true that parallel ongoing emotions can be 
derived from time information alone.
The added value of marking them anyway (i.e. having an optional 
attribute 'complex=true') is faster machine processing and better 
human readability.
Let me explain:
The scenario is the analysis of sensor data. Multiple sensors have 
observed a person, and their data have been analysed by 
sensor-specific emotion detection algorithms. Not surprisingly, they 
come to different results (e.g. face: neutral, body: sleepy, voice: 
bored). These results are written to a database. An algorithm 
searching the data, e.g. for all occurrences of the person being 
bored only, would have to check the whole lot of data to see whether 
other emotions were detected at the same time. A 'complex=true/false' 
attribute would reduce this effort to only those cases where true is 
set.
The advantage for humans should be obvious.
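
To illustrate (the attribute name 'complex' is taken from the 
discussion above; the set names and modality attributes are only 
placeholders borrowed from the earlier examples):

<emotion complex="true">
     <category set="everyday" name="neutral"/>
     <modality medium="visual" mode="face"/>
     <category set="everyday" name="sleepy"/>
     <modality medium="visual" mode="body"/>
     <category set="everyday" name="bored"/>
     <modality medium="acoustic" mode="voice"/>
</emotion>

A search tool could then skip the internals of every record with 
complex="false" and only inspect the flagged ones.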

So who would benefit:
- the sensing and analysis faction
- visual analysis folks (e.g. analysis of automatically
   annotated videos) - checking for certain emotions
- the annotation faction? Just a feeling, I am not an annotator.


Meta 1
------
I don't see a need for a confidence value for a complex emotion. Each 
single emotion the complex emotion is made of should have its own 
confidence value.
One should always be sure whether there is more than one emotion 
going on, so the confidence for the complexity itself is always 1. If 
I understood this correctly?
On the Issue Note (by what means the confidence has been determined): 
this could go in the metadata on that study.


Meta 2
------
Difficult.
Another and more delicate example would be:
<emotion>
     <category set="everyday" name="excited"/>
     <modality medium="visual" mode="body"/>

     <category set="everyday" name="angry"/>
     <modality medium="infrared" mode="body"/>

     <category set="everyday" name="excited"/>
     <modality medium="wearable" mode="body"/>
</emotion>

Here we always have the same mode (body) but different media, which 
might even yield different insights. What now? That's quite related 
to complex emotions.
   So do we need the modality attribute to differentiate the states 
detected at the same time from different modalities, even on the 
same mode? I don't like this, but I don't see an alternative yet.
   Obviously this should be optional, since in many if not most 
cases only one modality will be used for one mode.

A shorter version could use qnames:
<emotion>
     <category name="everyday:excited"/>
     <source name="body:visual"/>
     <category name="everyday:angry"/>
     <source name="body:infrared"/>
     <category name="everyday:excited"/>
     <source name="body:wearable"/>
</emotion>

Don't hit me for that 'source' name, just used it here to make my 
point. I think you know what I mean.


Links 2
-------
Like Catherine, I think both are necessary.
As for the structure, one could also think of having time sets which 
specify the format and unit of the timing information.
Usage could be e.g.:
<time set="absMillies" start="45" end="110"/>
<time set="absOnHoldDecay" on="45" hold="54" decay="66" end="120"/>
<time set="relOnHoldDecay" on="02" hold="44" decay="84" end="123"/>
<time set="absHumanReadable" start="00:00:45" end="00:01:50"/>
and so on.

I don't know about the granularity. Personally, I'm fine with 
onset-hold-decay.


Links 3
-------
I also think it is important to be able to specify the context as 
well as the semantic role, but I don't think this has to be mandatory.
There are a lot of rather simple and very clear scenarios with, e.g., 
just the experiencer being of interest. Forcing those people to key 
in information they don't need for their purposes would discourage 
them from using our language.
People who need it will use it anyway.

Semantic roles: Wasn't it Jean-Claude who mentioned that the emotion 
of the observer is of interest, too? I'm not that much involved in 
annotation, but I think sophisticated annotation tools also check for 
the annotator's state? Or was that just a "should"?
Having the option to also provide a set here would be good, I think.

Global Metadata
---------------
I go with Catherine (option 2).


Cheers!

-- 
------------------------------------------------------------------------
Christian Peter
Fraunhofer Institute for Computer Graphics Rostock
Usability and Assistive Technologies
Joachim-Jungius-Str. 11, 18059 Rostock, Germany
Phone: +49 381 4024-122, Fax: +49 381 4024-199
email: christian.peter@igd-r.fraunhofer.de
------------------------------------------------------------------------
Problems with the electronic signature? Please load the current root
certificate of the Fraunhofer-Gesellschaft into your browser!
(http://pki.fraunhofer.de/EN/)
------------------------------------------------------------------------ 