
Use case from CSTR

From: Rob Clark <robert@cstr.ed.ac.uk>
Date: Thu, 5 Oct 2006 15:14:11 +0100
Message-ID: <347cf4b50610050714q1b7f4428l96ac857480dd41db@mail.gmail.com>
To: public-xg-emotion@w3.org

We envisage building a "Digital Radio Presenter" application, using
natural language and dialogue generation technology. The system would
present radio shows, which would include introducing music,
interviewing guests, and interacting with listeners calling in to the show.

A speech recognition component would need to pass information
concerning the emotional state of interviewees or callers to the
dialogue manager. Both quantitative and qualitative information would
be needed, along with timing information (or some other means of
reference) to align the emotional characteristics with orthographic or
semantic information.

The language generation component would also need to pass information
regarding emotion to a speech synthesis component. The digital
presenter would use emotion (possibly exaggerated) to empathise with a
caller. The annotation requirements would be similar to those of the
recognition component above.
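On the synthesis side, a similar (and equally hypothetical, with all names invented here) markup could let the generator request a deliberately exaggerated empathic rendering of a span of text:

```xml
<!-- Hypothetical input from the language generator to the synthesiser. -->
<speak>
  <emotion category="sympathy" intensity="0.9">  <!-- exaggerated on purpose -->
    That sounds really tough. I'm so sorry to hear that.
  </emotion>
</speak>
```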


Rob Clark
Received on Thursday, 5 October 2006 14:14:16 UTC
