Re: Beyond text captions Re: Deaf users,

On 12 Apr 2004, at 20:45, Guido wrote:

> Thanks for sharing your thoughts with us and apologies for the 
> lateness of this reply, but I have been extremely busy lately.

As have I, but I finally got around to getting permission from Inma 
Fajardo to post her responses here as well. The following text is hers:

My experience with deaf people in the field of web interaction is not as
extensive, and it is more specialized in their cognitive functioning. However,
I will try to contribute some of our insights.

> I am hoping to identify issues where WAI's current work could be
> improved, and take them to the relevant working groups, and in the
> process give us all an opportunity to evaluate what we are doing in the
> light of experience from people who are our audience.

In my opinion, the main problem with WAI's current work is that deafness is
almost exclusively considered a sensory problem, that is, a deficiency
characterized by the lack of processing of auditory stimuli by the sense of
hearing. Consequently, the accessibility guidelines for deaf users focus on
overcoming this sensory problem, for instance by providing visual information
instead of acoustic information. However, deafness also influences the
functioning of cognitive processes and the representation and organization
of knowledge, affecting complex tasks such as problem solving and decision
making (Marschark, 2003).

Nevertheless, these cognitive peculiarities are not necessarily negative if
the designers of devices and interaction systems (e.g. web designers) take
them into account. For example, it has been demonstrated that the use of a
visuospatial language (sign language) improves some aspects of other
visuospatial tasks, such as memory for spatial locations (deaf signers have
a longer spatial memory span than hearing non-signers (Wilson, 1997)), the
discrimination of faces, the processing of facial characteristics, and the
recognition of faces or shoes (Arnold and Mills, 2001). As a cognitive
ergonomist, my work lies precisely in studying and researching how this
advantage of deaf people could be harnessed for interacting with hypertexts.

The conclusion of our recent empirical work is that website designers could
distribute verbal content across more layers of nodes in the hypertext
structure, which, in addition, could serve as semantic spatial cues for text
comprehension.

On the other hand, one can ask: could replacing textual information with
visual content improve the web interaction of deaf users? We have empirical
data that would support this guideline, but only partially, in the case of
information retrieval tasks (Fajardo, Cañas, Salmerón and Abascal, 2004).
Deaf signing users only improve their web searching with visual targets when
the search does not imply a categorical decision, that is, when no semantic
factors are involved in the information retrieval task and the search is
based on visual factors such as visuo-perceptual speed (related to the
visuospatial store of working memory).

Following the cited finding, some semantic aspects related to users' long-term
memory (LTM) seem fundamental to performing information search tasks on a
web site. If users are not able, or have difficulties, generating the category
under which the concept they are searching for could fall (for instance, the
category Sports, if users are searching for news about football matches in a
digital newspaper), it is probable that their performance will drop. This
applies to both verbal and graphical interfaces. In the case of icons,
different ways of organizing knowledge in memory could affect users'
judgement of semantic distance (or judgement of the icon-referent relation)
and, in this manner, the efficiency with which they select the icon that
would open the site they are looking for or activate the desired function.
In fact, we have found that deaf signers had more problems than hearing
non-signers finding visual targets in a newspaper website when the targets
were in a deeper layer of the web structure and more categorical decisions
were necessary to find them. We have concluded that the qualitative
difference in knowledge organization between deaf and hearing people, found
in a previous normative study of the semantic distance of the icon targets
used in the experiment, could be what determines the difference in the web
information retrieval task.

That is, if we have to use icons, images or pictures for information
retrieval tasks, we have to take into account that not all users extract the
same meaning from them. This could affect the applicability of accessibility
guidelines for deaf and cognitively disabled people, such as: provide
well-illustrated text (WAI, 1999), provide content-related images in pages
full of text (WAI, 1999), or provide visual information instead of acoustic
information (Emiliani, 2001).

In any case, what is important about empirical studies with real users is
that, on some occasions, apparently useful solutions turn out not to be so
useful. For this reason, the analysis of users' cognitive processing and
empirical research are fundamental.

> For a community of deaf users who are not good readers, signing is
> their native language. Captioning is considered a nice idea, but not
> actually the preferred way, for many deaf people, of understanding what
> is happening. Likewise, text chat is considered a good thing.
> (Real-time character-by-character interactive, as provided by text
> phones or the unix "talk" program, more than the line-mode
> 'asynchronous' modern chat software or SMS). But this community is much
> happier signing, and would prefer to be able to do that as a way of
> communicating.

Apart from the low technological requirements, one explanatory hypothesis for
deaf people's preference for "text chat" communication is that, in this
context, they can use an idiosyncratic written language for talking with
other deaf people. Deaf people who use sign language as a first language have
problems with the use of articles, conjunctions, prepositions and the grammar
of oral language in general (Moores, 1997), because the grammar, at least of
Spanish sign language, is completely different from that of oral language.
However, on some occasions problems arise when users need to "talk" with a
machine (e.g. a web search engine): some interaction systems cannot
understand this idiosyncratic language.

In fact, in these cases, one interesting option is an alternative way of
communicating with such systems, for example by means of video technology
and computer vision techniques that capture and interpret sign language. I am
not very familiar with the interesting work in this area, such as the work
cited by Guido Gybels. However, some colleagues at my university (the
Computer Vision Research Group) are currently working on a Spanish sign
language alphabet recognition system using PCA algorithms. They are also
working on facial expression and body position recognition. Their future
project is to integrate these three recognition systems to support deaf
signers' communication. The frame size that such systems will use is 640x480,
and they will work in real time (25 fps). The final objective is to have the
three systems work in parallel on a grid computing system (8 PCs). This
solution could overcome the temporal resolution problem (real-time
recognition).
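
As a very rough illustration of the general idea (not their actual code; the
dataset layout, the number of principal components and the nearest-neighbour
classifier are my own assumptions), an eigenimage-style alphabet recognizer
in Python with scikit-learn might look like this:

    # Illustrative sketch only: PCA-based recognition of sign-language
    # alphabet frames, in the spirit of the system described above.
    # The frame size (640x480 greyscale) follows the figures quoted;
    # the rest (50 components, 1-nearest-neighbour) is assumed.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    FRAME_SHAPE = (480, 640)  # rows x columns of one greyscale frame

    def flatten_frames(frames):
        """Stack 2-D frames into an (n_samples, n_pixels) matrix."""
        return np.asarray([f.reshape(-1) for f in frames], dtype=np.float32)

    def build_recognizer(train_frames, train_labels, n_components=50):
        # Project frames onto the principal components ("eigensigns"),
        # then classify a new frame by its nearest training frame in
        # that low-dimensional space.
        model = make_pipeline(
            PCA(n_components=n_components),
            KNeighborsClassifier(n_neighbors=1),
        )
        model.fit(flatten_frames(train_frames), train_labels)
        return model

    def recognize(model, frame):
        """Return the predicted letter for one 480x640 greyscale frame."""
        return model.predict(frame.reshape(1, -1))[0]

Note that at 25 fps each frame leaves a budget of only 40 ms for all three
recognizers (hand shape, facial expression, body position), which is why
running them in parallel on the 8-PC grid is what makes real-time operation
plausible.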

Looking forward to hearing your feedback!

inma

Inmaculada Fajardo Bravo
______________________________________________
Laboratory of HCI for Special Needs. ATC Department
Computer Science Faculty
Manuel Lardizabal Pasealekua 1, E-20018 Donostia
Tel: + 34 943015113, Fax: + 34 943219306, E-mail: acbfabri@si.ehu.es
http://www.ugr.es/~ergocogn

Received on Monday, 12 April 2004 23:24:31 UTC