Re: [EMOXG] Updated discussion document, feedback requested

Dear all,

Please find below some comments.

I am currently attending the AFFINE workshop
and will not attend the Cannes meeting.

Best wishes for the meeting,

Jean-Claude MARTIN
CNRS-LIMSI

------------------------------------------------
Core 1. Type of emotion-related phenomenon
DISCUSSION NEEDED: There was some concern about using the tag name
“emotion” in option 2,
given the fact that it is one of the possible types of emotion-related
states. Some group members
would prefer to use a generic word such as “affect”. Others prefer
“emotion” because it is easier to
understand for the non-expert.

=> JCM:
I vote for option 2 (cf. below) with the keyword "affect".
I think that it is important that we propose terms that are consistent
with theories.
	<affect type="mood" emotion-related-state-types="Scherer">
		<category set="everyday" name="relaxed" confidence="0.8" />
	</affect>

------------------------------------------------
Core 2. Emotion categories
DISCUSSION NEEDED: It is unclear at the moment how, and where, the set of
allowed categories should be defined. Maybe there should be a repository
of defined sets somewhere.

=> JCM:
I think that a repository of defined sets would be great.
Would it be possible to host them on the HUMAINE portal?
It might also be useful to be able to specify a URL, so that researchers
can host their sets on their own websites.
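
As a purely illustrative sketch (the "definition" attribute and the URL
below are my own inventions, not part of the current draft), an
externally hosted set could then be referenced like this:

	<category set="everyday" name="relaxed"
		definition="http://example.org/my-sets/everyday.xml" />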

------------------------------------------------
DISCUSSION NEEDED: An alternative to the combination of “set” and “name”
attributes may be the use of QNAMES (names with a namespace prefix) as
attribute values in the “name” attribute.

=> JCM:
I prefer separate specification of set and name in different attributes,
as this makes both terms explicit.
With the QNAMES option, it is not explicit that "everyday" is the name of
the set.
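
To illustrate the difference (the namespace binding below is only my
assumption of how QNAMES would be used), compare:

	<!-- QNAME style: "everyday" is a prefix that must be resolved via xmlns -->
	<category xmlns:everyday="http://example.org/sets/everyday"
		name="everyday:relaxed" />
	<!-- separate attributes: the set is named explicitly -->
	<category set="everyday" name="relaxed" />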

------------------------------------------------
Core 3. Emotion dimensions
DISCUSSION NEEDED: It is unspecified at the moment how to indicate whether
a given
dimension is unipolar or bipolar. Maybe this should be done wherever the
set is defined.

=> JCM:
This might be encoded in the name of the set:
set="Arousal(bipolar)-and-Valence(bipolar)"
or via an explicit specification of the min and max values:
set="Arousal(-1:1)-and-Valence(-1:1)"

------------------------------------------------
Core 6. Multiple and/or complex emotions
DISCUSSION NEEDED: There is currently no agreement on whether it is
necessary to make it explicit that emotions are complex.

=> JCM:
Would it be possible to insert an ID attribute in the "emotion" (or
"affect") tag, so that it can be referred to in further specifications
such as complex emotions?
Example:

<affect type="mood" emotion-related-state-types="Scherer" ID = 1>
	<category set="everyday" name="relaxed"/>
</affect>

<affect type="emotion" emotion-related-state-types="Scherer" ID = 2>
	<category set="everyday" name="surprised" />
</affect>

<affect-combination>
	<affect-reference ID = 1/>
	<affect-reference ID = 2/>
</affect-combination>

This would make things simpler, since we can separate the specification
of "simple affect" and "complex affect".
It could also be useful for specifying additional information about a
given affect
(e.g. an additional interpretation, or noting that synthesis of the
expression of this affect has finished, ...).
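
For instance (the "affect-info" element below is purely hypothetical),
such additional information could point to an affect by its ID:

	<affect-info ref="2">
		<!-- e.g. note that synthesis of the expression of affect 2 has finished -->
		<synthesis-status value="finished" />
	</affect-info>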

This is similar to the suggestion in Meta 2. Modality.

------------------------------------------------
Meta 2. Modality
DISCUSSION NEEDED: Do we want to add the distinction between medium and
mode, as in EMMA (http://www.w3.org/TR/emma/#s4.2.11)?

=> JCM:
I would include the medium attribute as an optional attribute of the
modality tag.
It would show compatibility efforts with EMMA.
We might also want to add "physiological" to the basic modality set.
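
A sketch of what this could look like, reusing EMMA's
acoustic/visual/tactile values for "medium" (the exact attribute
placement is my assumption):

	<modality set="basic_modalities" mode="voice" medium="acoustic" />
	<modality set="basic_modalities" mode="face" medium="visual" />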


------------------------------------------------
DISCUSSION NEEDED: The use of composite attributes for multimodality. We
could use an
explicit annotation like this:
<emotion>
<category set="everyday" name="excited"/>
<modality id="m1" set="basic_modalities" mode="face"/>
<modality id="m2" set="basic_modalities" mode="voice"/>
...
</emotion>

Or, as an alternative, a more compact annotation that uses composite
values, i.e. lists of values
separated by special characters (a blank in the following example):
<emotion>
<category set="everyday" name="excited"/>
<modality set="basic_modalities" mode="face voice"/>
...
</emotion>

=> JCM:
I prefer the first version, which is more explicit and makes later
references to the specified modalities easier.

------------------------------------------------
Links 2. Position on a time line in externally linked objects

=> JCM:
I think that:
- timing specification should be optional
- absolute time should be possible
- relative time should be possible, in both EMMA and SMIL formats (see
the sketch below)
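
As an illustrative sketch only (the element and attribute names are my
assumption, not the draft's), optional timing on a link could look like:

	<!-- relative time in a human-readable format -->
	<link uri="session1.avi" start="00:30.123" end="00:32.500" />
	<!-- the same span expressed as start + duration in milliseconds -->
	<link uri="session1.avi" start="30123" duration="2377" />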

------------------------------------------------
Global 0. A generic mechanism to represent global metadata

DISCUSSION NEEDED: Two options were proposed for representing metadata at
the document
level, see
http://www.w3.org/2005/Incubator/emotion/minutes/2008-09-18.html#item1.
Option 1: Individual key-value pairs.
Option 2: A parent element with arbitrary subelements, as in EMMA
(http://www.w3.org/TR/emma/#s4.1.4):

=> JCM:
I prefer option 1 as it is simpler.
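
For illustration only (the "info" element name and the key-value pairs
below are hypothetical), option 1 might look like:

	<info name="annotator" value="jcm" />
	<info name="corpus" value="my-corpus" />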

------------------------------------------------



> Hi all,
>
> I have updated the discussion document (attached), summarising the state
> of our spec drafting. Updates are in Meta 1 and Meta 2, as well as an
> added discussion point regarding the possible use of QNAMES as attribute
> values, in Core 2.
>
>
> IMPORTANT: Those of you who cannot participate in the face-to-face
> meeting this Friday, please *read the document*, and send your views on
> issues where DISCUSSION is NEEDED. In particular, we aim to make
> progress on the following points, so here your input is highly welcome:
>
> * Global metadata -- individual key-value pairs or a parent element with
> arbitrary sub-elements?
>
> * Timing issues in Links to the rest of the world:
>    - do we need absolute time, or only relative time?
>    - do you prefer a human-readable time format ("00:30.123") or
> number-of-milliseconds?
>    - is start+end or start+duration sufficient, or would you need more
> fine-grained temporal landmarks such as onset+hold+decay?
>
> * Semantic issues in Links:
>    - do we need any additional semantic roles, apart from "experiencer",
> "trigger", "target" and "behaviour"?
>
>
> Also remember Enrico's question regarding Meta 2:
>
> * Modality:
>    - do you see a use for a "medium" attribute in addition to a "mode"?
>    - do you have a preference for how to specify multiple modalities?
>
>
> Even if you cannot participate in the meeting, your input *before* the
> meeting can be very helpful. Of course for the meeting participants, it
> also makes a lot of sense to come prepared...! :-)
>
> Best wishes, looking forward to a fruitful meeting,
> Marc
>
> --
> Dr. Marc Schröder, Senior Researcher at DFKI GmbH
> Coordinator EU FP7 Project SEMAINE http://www.semaine-project.eu
> Chair W3C Emotion ML Incubator http://www.w3.org/2005/Incubator/emotion
> Portal Editor http://emotion-research.net
> Team Leader DFKI Speech Group http://mary.dfki.de
> Project Leader DFG project PAVOQUE http://mary.dfki.de/pavoque
>
> Homepage: http://www.dfki.de/~schroed
> Email: schroed@dfki.de
> Phone: +49-681-302-5303
> Postal address: DFKI GmbH, Campus D3_2, Stuhlsatzenhausweg 3, D-66123
> Saarbrücken, Germany
> --
> Official DFKI coordinates:
> Deutsches Forschungszentrum fuer Kuenstliche Intelligenz GmbH
> Trippstadter Strasse 122, D-67663 Kaiserslautern, Germany
> Geschaeftsfuehrung:
> Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender)
> Dr. Walter Olthoff
> Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes
> Amtsgericht Kaiserslautern, HRB 2313
>

Received on Thursday, 23 October 2008 07:18:53 UTC