Re: [EMOXG] Updated discussion document, feedback requested

Hi all,

We are sorry for the delay; please find our comments below.


Core 1 - We think it would be more appropriate to offer the 
possibility of using different types of emotion-related states (mood, 
emotion, attitudes, ...).

Core 2 - We agree with having a repository somewhere.

Core 3 - We agree with Ian; we consider it appropriate to indicate the 
scale attribute.

Core 7 - We agree with Ian again; we are not sure about a separate 
intensity tag. With respect to dimensional classification, isn't it 
enough to use values on the different scales? We think the arousal 
scale is closely related to intensity. A separate intensity tag might 
be useful for emotion categories, although we are not sure about this 
point.
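
Just to make the two alternatives concrete, a purely hypothetical 
sketch (the element and attribute names below are our own 
illustration, not taken from the draft):

    <!-- alternative 1: a separate intensity tag -->
    <emotion category="anger">
      <intensity value="0.8"/>
    </emotion>

    <!-- alternative 2: intensity expressed through the arousal dimension -->
    <emotion>
      <dimension name="arousal" value="0.8"/>
    </emotion>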

Meta 2 - We would include "physiological" among the basic modalities. 
According to Lang, the physiological modality is one of the main 
modalities considered in the detection and expression of emotions. We 
also believe it would be worthwhile to include a "medium" attribute as 
an optional attribute of the modality tag.
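
As a purely hypothetical sketch of what we have in mind (again, the 
names are our own illustration and not the current draft), the medium 
attribute would complement the mode, and physiological channels would 
fit in naturally:

    <modality mode="facial-expression" medium="visual"/>
    <modality mode="skin-conductance" medium="physiological"/>

The medium attribute would remain optional, so annotations that only 
care about the mode would not need it.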



Kostas Karpouzis wrote:
>
> Hi all,
>
> some notes from the ICCS/NTUA side:
>
> - Core 3: scale definitions should be available from the definition of 
> the dimension set. However, in some cases there are multiple (and not 
> necessarily equivalent) definitions, e.g. in the PAD model you can 
> have continuous *and* sampled (condensed) values. Declaring the 
> dimension set explicitly should suffice to resolve such ambiguities.
>
> - Meta 2: I think that the distinction between medium and mode is 
> useful for a wide range of contexts. For instance, consider an example 
> where one expert annotates a video based on facial expressions (mode) 
> only, and at a later stage annotates based on gestures (same 
> medium/visual, different mode). From the synthesis side, you may want 
> to illustrate expressivity based on the <emotion> tag on a talking 
> head, in which you can render facial expressions but not gestures; 
> thus it would be useful to differentiate.
>
> - Links 2 (samples): samples in a video file are usually associated 
> with the original or digitized format (PAL and NTSC use different 
> frames-per-second counts). If you also consider web camera videos, 
> which can be captured at arbitrary rates, the situation becomes even 
> more complex. IMHO, this is too system-centric.
>
> - Scale values: I would go for abstract scales, based on the comment 
> provided in the spec.
>
> Best regards,
> Kostas
>
> Marc Schroeder wrote:
>> Hi all,
>>
>> I have updated the discussion document (attached), summarising the 
>> state of our spec drafting. Updates are in Meta 1 and Meta 2, as well 
>> as an added discussion point regarding the possible use of QNAMES as 
>> attribute values, in Core 2.
>>
>>
>> IMPORTANT: Those of you who cannot participate in the face-to-face 
>> meeting this Friday, please *read the document*, and send your views 
>> on issues where DISCUSSION is NEEDED. In particular, we aim to make 
>> progress on the following points, so your input here is highly welcome:
>>
>> * Global metadata -- individual key-value pairs or a parent element 
>> with arbitrary sub-elements?
>>
>> * Timing issues in Links to the rest of the world:
>>   - do we need absolute time, or only relative time?
>>   - do you prefer a human-readable time format ("00:30.123") or 
>> number-of-milliseconds?
>>   - is start+end or start+duration sufficient, or would you need more 
>> fine-grained temporal landmarks such as onset+hold+decay?
>>
>> * Semantic issues in Links:
>>   - do we need any additional semantic roles, apart from 
>> "experiencer", "trigger", "target" and "behaviour"?
>>
>>
>> Also remember Enrico's question regarding Meta 2:
>>
>> * Modality:
>>   - do you see a use for a "medium" attribute in addition to a "mode"?
>>   - do you have a preference for how to specify multiple modalities?
>>
>>
>> Even if you cannot participate in the meeting, your input *before* 
>> the meeting can be very helpful. Of course for the meeting 
>> participants, it also makes a lot of sense to come prepared...! :-)
>>
>> Best wishes, looking forward to a fruitful meeting,
>> Marc
>>
>
>
>
>

Received on Friday, 24 October 2008 09:48:45 UTC