
Re: [EMOXG] Use Case 3 Requirements Analysis

From: Catherine Pelachaud <pelachaud@iut.univ-paris8.fr>
Date: Wed, 22 Nov 2006 13:35:00 +0100
Message-ID: <456443F4.5080904@iut.univ-paris8.fr>
To: Ian Wilson <ian@emotion.ai>
CC: public-xg-emotion@w3.org

Dear all,

This email is related to the Output Events description you have proposed.

For a few years now, there has been work toward creating a unified 
multimodal behavior representation language to control ECAs (Embodied 
Conversational Agents) and to describe their behaviors. A first 
specification has been designed and was presented at the last IVA 
conference (S. Kopp et al., "Towards a Common Framework for Multimodal 
Generation: The Behavior Markup Language"). The paper can be downloaded at
http://mindmakers.org/projects/BML

There is also a wiki page that describes the language at its current stage:
http://twiki.isi.edu/Public/BMLSpecification

This work is part of a larger framework, SAIBA, described at:
http://mindmakers.org/projects/SAIBA
SAIBA stands for Situation, Agent, Intention, Behavior, Animation.

Within the Humaine project, work has been done to define a Gesture 
Repository, that is, a placeholder for behavior descriptions. As with 
BML, the behaviors ought to be described in a player- and 
model-geometry-independent manner; that is, the description of a 
behavior should rely neither on a particular geometry nor on a 
particular animation parametrization. A first specification of this 
language is described in a Humaine deliverable.
Right now, we are in the phase of combining the Gesture Repository 
specification and BML, though they are quite similar.

The idea underlying these efforts is to allow work to be pooled. 
Creating new behaviors can be extremely time-consuming, so sharing 
behavior definitions could be of great help. Moreover, pooling could 
also extend to sharing modules (animation player, behavioral engine, 
etc.): having a common representation language is a first step toward 
allowing this exchange.

I would like to propose using BML to describe multimodal communicative 
and emotional behaviors.
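To make the proposal concrete, here is a hypothetical sketch of what a 
BML block might look like. The element and attribute names follow the 
draft specification linked above, but the exact syntax (in particular 
the synchronization attributes) is still evolving and may differ in the 
final version:

```xml
<bml id="bml1">
  <!-- The agent speaks while nodding and producing a beat gesture.
       Sync points (here "tm1") align behaviors across modalities. -->
  <speech id="s1">
    <text>Nice to <sync id="tm1"/> meet you.</text>
  </speech>
  <head id="h1" type="nod" stroke="s1:tm1"/>
  <gesture id="g1" type="beat" stroke="s1:tm1"/>
</bml>
```

Note how nothing in this description refers to a particular player or 
model geometry: the nod and the beat gesture are named abstractly, and 
it is up to the animation engine to realize them.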

Sincerely,

Catherine


Ian Wilson wrote:
> I have compiled an analysis of requirements from the suggestions for use case 3.
> This use case has requirements that are very similar to those described in the
> EARL specification.
>
> For all those members who have registered interest in use case 3 discussions
> (Jianhua and myself, Marc, Enrico, Jean-Claude, Paolo, Alejandra, Hannes,
> Catherine and Kostas) please look over the list for the following points:
>
> 1. Which items do you think should be cut from the set (if any)?
> 2. Which items do you think should be added to the set (if any)?
> 3. For requirements that you specified in the original set:
>    a. Have I interpreted them correctly?
>    b. Should they be listed differently?
>
> These questions should be enough for us to start I think. If you have any other
> ideas let me know, thanks.
>
> Best,
>
> Ian
> Emotion AI
> www.emotion.ai
> blog.emotion.ai
>
Received on Wednesday, 22 November 2006 12:35:03 GMT
