Beyond text captions Re: Deaf users

This is an exploration of some ideas. I have had a bit of review from
Inmaculada Fajardo Bravo, a cognitive scientist working with deaf
people, and I have asked several others to have a quick look. But the
responsibility for whether the ideas are sound or insane is all mine.

I am hoping to identify issues where WAI's current work could be
improved, take them to the relevant working groups, and in the
process give us all an opportunity to evaluate what we are doing in
the light of experience from the people who are our audience. Don't
tell me I am an idiot, because I know how much of an idiot I am. Do
tell me if I have misunderstood or forgotten or misrepresented
something...

Feel free to enjoy or stop reading and delete :-)

For a community of deaf users who are not strong readers, signing is
their native language. Captioning is considered a nice idea, but for
many deaf people it is not actually the preferred way of understanding
what is happening. Likewise, text chat is considered a good thing
(real-time, character-by-character interaction, as provided by text
phones or the unix "talk" program, more than the line-mode
'asynchronous' style of modern chat software or SMS). But this
community is much happier signing, and would prefer to be able to
communicate that way.

There are a couple of approaches to putting signing on the phone. The
obvious one is the video-phone. Work done in the Deaf Australia Online
project and its successor looked at the actual technical requirements
for making this workable, by testing with users of Auslan (mileage may
differ in other sign languages, but it is at least a marker point).
Their report said:

"The minimum standard of video transmission for Deaf signing should  
offer a temporal resolution of 25 frames per second with a picture  
resolution of 352x288 pixels (CIF) which generally requires a  
transmission rate of 384kbps" [dao2]

This, coincidentally, is what is available on the Allen-10
multifunction terminal that they recommended buying for people.
However, this kind of bandwidth and functionality is now pretty widely
available - to the extent of being possible on mobile telephones, if
not yet widespread. In addition, better compression and video
technology can reduce the bandwidth required.
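
As a sanity check on those numbers, here is a back-of-the-envelope
calculation of how much compression the 384kbps figure implies. It
assumes YUV 4:2:0 sampling at 12 bits per pixel, which is a common but
not universal choice, and ignores audio:

    # Raw bandwidth of CIF video at 25 frames per second, assuming
    # YUV 4:2:0 sampling (12 bits per pixel on average).
    width, height, fps = 352, 288, 25
    bits_per_pixel = 12
    raw_bps = width * height * fps * bits_per_pixel
    target_bps = 384 * 1000           # the DAO2 figure

    print(raw_bps / 1e6)              # ~30.4 Mbps uncompressed
    print(raw_bps / target_bps)       # ~79:1 compression needed

So the codec is already doing roughly 80:1 compression, which is why
better compression technology matters so much here.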

Another approach is the use of signing avatars - animated figures.
Several companies have developed such systems, and tools that can
identify and record people signing. The benefit of this is that you
can reduce the bandwidth to the transmission of instructions for the
animation, with the drawback (for many people) of not being able to
see the person you are communicating with. Although there are various
methods for encoding the information, as far as I know there is no
consensus on a standard, which makes it difficult to deploy these
systems on the mainstream Web today [signAv].
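
To make the bandwidth argument concrete, here is a purely illustrative
sketch of the kind of compact instruction a signing-avatar system
might transmit per sign. The field names are my invention for this
example - as noted above, there is no agreed standard:

    import json

    # One invented encoding of a single sign as animation
    # instructions. The point is the size: tens of bytes per sign,
    # versus kilobytes per frame for video.
    sign = {
        "gloss": "HELLO",        # dictionary key for the sign
        "handshape": "B-flat",   # hand configuration
        "location": "forehead",  # where the sign is articulated
        "movement": "arc-out",   # motion path
        "duration_ms": 600,
    }

    payload = json.dumps(sign).encode("utf-8")
    print(len(payload), "bytes")  # well under 150 bytes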

On the other hand, this seems a field that is ripe for work. It should
be possible to integrate it with the indexing and dictionary-lookup
systems being used to build large-scale translation and search
services - the kind that work by brute-force processing of lots of
information rather than by artificial intelligence approaches.
Although this doesn't always work, for many use cases it makes
producing effective results relatively simple.
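
As a toy illustration of the brute-force dictionary-lookup idea (not
any particular company's system), mapping words to sign glosses and
falling back to fingerspelling for unknown words might look like this:

    # Toy word-to-gloss lookup. Real systems would need to handle
    # morphology, word order and classifiers; this only shows the
    # basic brute-force pipeline.
    SIGN_DICTIONARY = {"hello": "HELLO", "deaf": "DEAF",
                       "where": "WHERE"}

    def to_glosses(text):
        glosses = []
        for word in text.lower().split():
            if word in SIGN_DICTIONARY:
                glosses.append(SIGN_DICTIONARY[word])
            else:
                # fingerspell anything not in the dictionary
                glosses.append("-".join(word.upper()))
        return glosses

    print(to_glosses("hello where deaf club"))
    # ['HELLO', 'WHERE', 'DEAF', 'C-L-U-B']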

Another issue is the way that people process information. Although
this has been extensively studied by cognitive scientists, and the
results have been adopted to some extent by usability specialists, the
penetration of this information into accessibility work seems fairly
low. One interesting piece of work by Inmaculada Fajardo and others in
Spain suggests that signing deaf users have particular requirements in
this area which are not necessarily the same as those useful for
others [inma].

What are the implications of these findings if we can in fact provide
semantically rich guides to the content of a site, and manipulate them
on the fly (including being able to swap labelling schemes around
across languages - something we could expect to be simple to develop
in the context of European sites which already have the relevant
information)? At the very least, going beyond WCAG's "provide
different search mechanisms" to some particular examples based on user
testing, in line with everything we know, and perhaps even carefully
tested themselves, would be a step forward...
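
For instance (a hypothetical sketch - the scheme names and structure
are my invention, not anything in WCAG), a site-guide entry carrying
labels in several schemes could let a user agent swap the labelling on
the fly:

    # One navigation node with labels in several schemes; a user
    # agent picks the first scheme the user prefers that is
    # available.
    nav_node = {
        "href": "/contact",
        "labels": {
            "en": "Contact us",
            "es": "Contacto",
            "sign-gloss": "CONTACT-US",  # key into an avatar lexicon
        },
    }

    def label_for(node, preferred):
        for scheme in preferred:
            if scheme in node["labels"]:
                return node["labels"][scheme]
        return node["href"]              # last-resort fallback

    print(label_for(nav_node, ["sign-gloss", "en"]))  # CONTACT-US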

I'm not offering any particular answers here, and I am not even sure
that I understand all the questions. But it seems there are some
interesting ones that people have already studied, and some results
that the current thrust of work on the WAI guidelines doesn't reflect
as well as it could if that research were taken into account.

[signAv] Several links:
http://www.rnid.org.uk/html/information/technology/visicast.htm
http://www.vcom3d.com/ and a survey with lots of references in
Google-converted HTML at (long URI)
<http://www.google.com/search?q=cache:DpUkrCplGfcJ:www.casadecritters.com/Becky/Journal_Draft.doc+Auslan+avatar&hl=en&ie=UTF-8>

[dao2] http://www.circit.rmit.edu.au/projects/dao2/DAO2_final.pdf (3 MB  
PDF. Relevant section is the 58th page of the document, which has page  
number 52)

[inma] http://www.ugr.es/~ergocogn/articulos/working_deaf.pdf (Working
Memory Architecture and its Implications for Hypertext Design: Insight
from Deaf Signer Users Research - published in the proceedings of the
HCI conference, 2003)

Random other references:

Signwriting - A set of systems for encoding different kinds of  
movement, including signs, dance and others:  
http://www.signwriting.org/
RNID - a group that does a lot of stuff in Europe:  
http://www.rnid.org.uk
Gallaudet University - an American University for the Deaf:  
http://www.gallaudet.edu
NMIT centre of excellence - some folks in Melbourne who work on the  
stuff: http://online.nmit.vic.edu.au/deaf/

On 28 Feb 2004, at 11:22, Charles McCathieNevile wrote (in reporting a  
conference on access to information technology for deaf people):

> Captioning got a run, (I'll leave it to the experts, but there were no  
> great surprises) as did the need for sign-language interpretation,  
> particularly where the content was moderately complex and the  
> captioning risked losing the many relatively poor readers among the  
> deaf community.
>
--
Charles McCathieNevile                          Fundación Sidar
charles@sidar.org                                http://www.sidar.org
