RE: Facial Animation in the Speech Synthesis Markup Language

James,

Currently there are few members of the W3C Voice Browser Working Group who
have the expertise necessary to develop a speech-synchronized animation
standard.

There are several implementations of speech-synchronized animation,
including:

- Fluent Speech Technologies, http://www.fluent-speech.com/agents.htm
- Haptek, http://www.haptek.com/
- Microsoft Agent
- Dom Massaro's "Baldi" at UC Santa Cruz, which uses a collection of markup
  tags that control not only mouth movements and facial gestures but also
  emotions and body gestures
- The University of California, Santa Cruz website at
  http://mambo.ucsc.edu/psl/fan.html, which contains pointers to many
  animated-head projects
- The synthetic newscaster Ananova, http://www.ananova.com/
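
As a rough sketch of what such a standard might look like (the <emotion>
and <gesture> elements below are hypothetical illustrations in the style of
Massaro's tags, not part of any SSML draft), facial-animation markup could
be interleaved with synthesis markup like this:

```xml
<!-- Hypothetical sketch: <emotion> and <gesture> are NOT defined in any
     W3C draft; they only illustrate how facial-animation markup might be
     interleaved with speech synthesis markup. -->
<speak>
  <emotion type="happy" intensity="0.7">
    Welcome back!
  </emotion>
  <gesture type="nod"/>
  You have <emphasis>three</emphasis> new messages.
  <!-- Mouth/viseme timing would presumably be derived automatically from
       the synthesizer's phoneme stream rather than authored by hand. -->
</speak>
```

A real standard would also need to specify how such tags synchronize with
the synthesizer's phoneme and timing events.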

Perhaps now is the time to bring interested parties together and construct
a standard, either within the W3C Voice Browser Working Group or as part of
a new working group on multimedia dialogs.  What do you think, James?
Would you be willing to participate?  Do you know of others who would be
willing to participate?

Regards,

Jim Larson
Chairman, W3C Voice Browser Working Group 

-----Original Message-----
From: James Edge [mailto:j.edge@dcs.shef.ac.uk]
Sent: Thursday, October 19, 2000 4:07 PM
To: www-voice@w3.org
Subject: Facial Animation in the Speech Synthesis Markup Language


I am aware that speech synchronised animation has been discussed in
relation to the new Speech Synthesis Markup Language (requirement 4.5),
but it has been marked as 'nice to have'. Does anybody know whether this
feature is likely to appear in early revisions of the specification, and
if so, is there any information on how it will be implemented?
Thanks,
    James Edge

Received on Thursday, 19 October 2000 12:07:25 UTC