Use case from CSTR

We envisage building a "Digital Radio Presenter" application using
natural language and dialogue generation technology. The system would
present radio shows, which would include introducing music,
interviewing guests and interacting with listeners calling in to the show.

A speech recognition component would need to pass information about
the emotional state of interviewees or callers to the dialogue
manager. Both quantitative and qualitative emotion information would
be needed, together with timing information (or some other means of
reference) to align the emotional characteristics with the
orthographic or semantic content.

The language generation component would also need to pass information
about emotion to a speech synthesis component. The digital presenter
would use emotion (possibly exaggerated) to empathise with a caller.
The annotation requirements would be similar to those of the
recognition component above.
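Continuing the sketch, and building on the EmotionAnnotation example
above, the generator might derive an exaggerated emotion specification
for the synthesiser from the caller's annotation (again, all names and
the scaling factor are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class SynthesisRequest:
        """Hypothetical request from the language generator to the synthesiser."""
        text: str
        category: str   # qualitative emotion label
        arousal: float  # quantitative dimensions, as in the annotation above
        valence: float

    def clip(v: float) -> float:
        """Keep a scaled dimension within the assumed [-1.0, 1.0] range."""
        return max(-1.0, min(1.0, v))

    def empathic_request(text: str, caller: "EmotionAnnotation",
                         exaggeration: float = 1.5) -> SynthesisRequest:
        """Mirror the caller's emotion, scaled up so the presenter's
        empathy is clearly audible."""
        return SynthesisRequest(text=text,
                                category=caller.category,
                                arousal=clip(caller.arousal * exaggeration),
                                valence=clip(caller.valence * exaggeration))

    request = empathic_request("Oh no, that does sound annoying.", caller_state)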

Regards,
Rob.

-- 
Rob Clark
