Deaf Signing and Timed Text

I would like to find out whether it would be possible for the TTWG to 
include support for Deaf and hard-of-hearing people.

The EU Framework 5 project ViSiCAST has been developing technology 
for avatar-based deaf signing. We are developing SiGML (Signing 
Gesture Markup Language), an XML language that allows signing to be 
expressed in a notation derived from HamNoSys (the Hamburg Notation 
System), which is used extensively in sign language research. SiGML 
will incorporate SMIL modules wherever possible.
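
To give a flavour of the sort of markup we have in mind (the element 
and attribute names below are purely illustrative, not settled SiGML 
syntax), a signed caption might look something like this, with a 
HamNoSys-derived description of the sign synchronised by SMIL-style 
timing attributes:

  <sigml>
    <!-- illustrative only: names and structure are not final -->
    <signed_utterance begin="0s" dur="2.5s">
      <sign gloss="HELLO">
        <!-- HamNoSys-derived handshape, location and movement -->
        <manual handshape="flat" location="forehead"
                movement="away_from_signer"/>
      </sign>
    </signed_utterance>
  </sigml>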

ViSiCAST applications include broadcast (closed captioning) and web 
content. Our aim is to make SiGML available for adoption by others, 
but so far we have not made serious contact with the W3C process. The 
Timed Text work seems very appropriate indeed for the broadcasting 
applications, where we are represented by both the BBC and the ITC in 
the UK.

Beyond support for sign language, we can also consider applications 
for lip-readable avatars for hard-of-hearing viewers, driven by text 
or a phoneme stream. The result can be much more expressive than a 
standard talking head because we can include facial and manual 
gestures.

This list seems to be rather inactive, so perhaps the activity is 
elsewhere, or the job is done. I would be grateful for feedback from 
list members on whether they think there is scope for including the 
sort of support I am proposing.

Best wishes,

John
-- 
Prof. John Glauert                               Tel: +44 1603 592603
UEA ViSiCAST Project                             Fax: +44 1603 593345
School of Information Systems            Home Office: +44 1603 462679
UEA Norwich,  NR4 7TJ, UK           http://www.visicast.sys.uea.ac.uk
