
Re: [EMOXG] Updated discussion document, feedback requested

From: Kostas Karpouzis <kkarpou@cs.ntua.gr>
Date: Thu, 23 Oct 2008 15:40:36 +0300
Message-ID: <490070C4.8020602@cs.ntua.gr>
To: Marc Schroeder <schroed@dfki.de>
CC: EMOXG-public <public-xg-emotion@w3.org>

Hi all,

some notes from the ICCS/NTUA side:

- Core 3: scale definitions should be available from the definition of 
the dimension set. However, in some cases there are multiple (and not 
necessarily equivalent) definitions; e.g., in the PAD model you can 
have continuous *and* sampled (condensed) values. Declaring the 
dimension set explicitly should suffice to resolve such ambiguities.
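To illustrate, here is a minimal sketch of what an explicit 
declaration could look like; the "dimension-set" attribute and the URI 
are purely hypothetical, not taken from the current draft:

  <!-- hypothetical sketch: pointing to one concrete definition of the 
       PAD set makes it unambiguous whether values are continuous or 
       sampled -->
  <emotion dimension-set="http://example.org/dimensions/PAD-continuous">
    <dimension name="pleasure" value="0.8"/>
    <dimension name="arousal" value="0.3"/>
    <dimension name="dominance" value="0.6"/>
  </emotion>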

- Meta 2: I think that the distinction between medium and mode is useful 
for a wide range of contexts. For instance, consider a case where one 
expert annotates a video based on facial expressions (mode) only, and 
at a later stage annotates based on gestures (same medium/visual, 
different mode). From the synthesis side, you may want to illustrate 
expressivity based on the <emotion> tag on a talking head, which can 
render facial expressions but not gestures; thus it would be useful 
to differentiate.
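
A hedged sketch of how the two annotation passes could be told apart 
(both attribute names are only illustrative, not from the draft):

  <!-- same medium, different modes; a talking-head renderer could act 
       on the first annotation and skip the second -->
  <emotion medium="visual" mode="facial-expression">
    <category name="joy"/>
  </emotion>
  <emotion medium="visual" mode="gesture">
    <category name="joy"/>
  </emotion>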

- Links 2 (samples): samples in a video file are usually tied to the 
original or digitized format (PAL and NTSC use different 
frames-per-second counts). If you also consider web camera videos, 
which can be captured at arbitrary rates, the situation becomes even 
more complex. IMHO, this is too system-centric.
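
To make the problem concrete, compare a sample-based link with a 
time-based one (all element and attribute names below are made up for 
illustration):

  <!-- frame 750 means 30 s under PAL (25 fps) but only ~25 s under 
       NTSC (29.97 fps), so a sample rate must always travel with the 
       sample numbers -->
  <link uri="clip.mpg" startSample="750" endSample="900" 
        sampleRate="25"/>
  <!-- a time-based link stays valid regardless of capture rate -->
  <link uri="clip.mpg" start="00:30.000" end="00:36.000"/>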

- Scale values: I would go for abstract scales, based on the comment 
provided in the spec.
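
For example (a hypothetical rendering, assuming a normalised 0..1 
scale):

  <!-- abstract scale: 0.0 = minimum, 1.0 = maximum; mapping to a 
       concrete scale (5-point Likert, percentages, etc.) is left to 
       the application -->
  <dimension name="arousal" value="0.7"/>

The advantage is that annotation tools and renderers can each map the 
abstract value onto whatever concrete scale they support.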

Best regards,

Kostas

Marc Schroeder wrote:
> Hi all,
> I have updated the discussion document (attached), summarising the state 
> of our spec drafting. Updates are in Meta 1 and Meta 2, as well as an 
> added discussion point regarding the possible use of QNames as attribute 
> values, in Core 2.
> IMPORTANT: Those of you who cannot participate in the face-to-face 
> meeting this Friday, please *read the document*, and send your views on 
> issues where DISCUSSION is NEEDED. In particular, we aim to make 
> progress on the following points, so here your input is highly welcome:
> * Global metadata -- individual key-value pairs or a parent element with 
> arbitrary sub-elements?
> * Timing issues in Links to the rest of the world:
>   - do we need absolute time, or only relative time?
>   - do you prefer a human-readable time format ("00:30.123") or 
> number-of-milliseconds?
>   - is start+end or start+duration sufficient, or would you need more 
> fine-grained temporal landmarks such as onset+hold+decay?
> * Semantic issues in Links:
>   - do we need any additional semantic roles, apart from "experiencer", 
> "trigger", "target" and "behaviour"?
> Also remember Enrico's question regarding Meta 2:
> * Modality:
>   - do you see a use for a "medium" attribute in addition to a "mode"?
>   - do you have a preference for how to specify multiple modalities?
> Even if you cannot participate in the meeting, your input *before* the 
> meeting can be very helpful. Of course for the meeting participants, it 
> also makes a lot of sense to come prepared...! :-)
> Best wishes, looking forward to a fruitful meeting,
> Marc
