Re: Deaf Signing and Timed Text

Dear John:

We at the JSRPD (www.dinf.ne.jp) Information Center have been actively
developing real-time caption transmission and receiving software tools
for both deaf and deaf-blind people.
Currently, deaf-blind people can share our caption transmissions over the
Internet.  We believe timed-text standardization will be of great help to
people who are deaf, deaf-blind, blind, dyslexic, intellectually
challenged, or autistic, to mention only a few disabilities.  Of course,
it will also help the general public, in particular people who are
illiterate and/or members of language minorities.
One of JSRPD's projects, the Adaptive Multimedia Information System
(www.amisproject.org), is also tackling support for people with all kinds
of disabilities, including deaf people.  Please visit our web site.  The
system will be demonstrated in combination with DAISY (www.daisy.org)
multimedia content at the World Bank, in conjunction with the World Bank
conference on Disability and Development.

Hiroshi Kawamura
Director, Information Center
Japanese Society for Rehabilitation of Persons with Disabilities
www.dinf.ne.jp
www.jsrpd.jp
www.normanet.ne.jp

----- Original Message -----
From: "John Glauert" <J.Glauert@sys.uea.ac.uk>
To: <www-tt-tf@w3.org>
Sent: Monday, December 02, 2002 2:25 AM
Subject: Deaf Signing and Timed Text


>
> I would like to find out whether it is possible for the TTWG to include
> support for Deaf and hard-of-hearing people.
>
> The EU Framework 5 project ViSiCAST has been developing technology
> for avatar-based deaf signing. We are developing SiGML (Signing
> Gesture Markup Language), an XML language that enables signing to be
> expressed using a notation developed from HamNoSys (the Hamburg
> Notation System), which is used extensively in sign language research.
> SiGML will incorporate SMIL modules wherever possible.
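>
> To give a flavour of the approach, here is a purely illustrative
> sketch of what a SiGML fragment might look like. The element names
> (sigml, hns_sign, hamnosys_manual, and the ham* symbol elements) are
> assumptions made for the sake of the example, not a normative schema:
>
>   <sigml>
>     <hns_sign gloss="hello">
>       <hamnosys_manual>
>         <hamflathand/>   <!-- handshape: flat hand -->
>         <hamextfingeru/> <!-- extended finger direction: up -->
>         <hampalml/>      <!-- palm orientation: left -->
>         <hamforehead/>   <!-- initial location: forehead -->
>         <hammoveo/>      <!-- movement: outwards -->
>       </hamnosys_manual>
>     </hns_sign>
>   </sigml>
>
> The idea is that each ham* element stands for a single HamNoSys
> symbol, so a transcription can be carried as structured XML rather
> than as the original glyph string.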
>
> ViSiCAST applications include broadcast (closed captioning) and web
> content. Our aim is to make SiGML available for adoption by others,
> but we have so far not made serious contact with the W3C process. The
> Timed-Text work seems very appropriate indeed for the broadcasting
> applications, where we are represented by both the BBC and the ITC in
> the UK.
>
> As well as supporting sign language, we can consider applications
> for lip-readable avatars for the hard of hearing, driven by text or a
> phoneme stream. The result can be much more expressive than a
> standard talking head because we can include facial and manual
> gestures.
>
> This list seems to be rather inactive, so perhaps the activity is
> elsewhere, or the job is done. I would be grateful for feedback from
> list members on whether they think there is scope for including the
> sort of support I am proposing.
>
> Best wishes,
>
> John
> --
> Prof. John Glauert                               Tel: +44 1603 592603
> UEA ViSiCAST Project                             Fax: +44 1603 593345
> School of Information Systems            Home Office: +44 1603 462679
> UEA Norwich,  NR4 7TJ, UK           http://www.visicast.sys.uea.ac.uk
>
>

Received on Sunday, 1 December 2002 17:03:52 UTC