- From: Charles McCathieNevile <charles@w3.org>
- Date: Mon, 21 Dec 1998 15:26:59 -0500 (EST)
- To: Jason White <jasonw@ariel.ucs.unimelb.EDU.AU>
- cc: w3c-wai-gl@w3.org
The point of my comment was that it is possible to automatically translate
text into signs, and that it is also possible to translate text into visual
symbols which are easily understood.
Concentrating on the latter approach for a moment:
Graphic representations of language tend to have small vocabularies. It
does not usually make sense to attempt automated translation between text
and graphics: like machine translation from English to Japanese, it is
notoriously difficult. But it does make sense to ask that users of graphic
languages which cannot easily be translated provide a text version of what
they are 'saying'. This provides the mechanics of communication between,
for example, a blind person and a person who can only, or can best,
understand a vocabulary made up of visual symbols.
It then remains for the people in question to discover their shared
vocabulary. Without those mechanical steps having been taken, however,
there is a very large communication barrier that precludes the process.
Charles
--Charles McCathieNevile - mailto:charles@w3.org
phone: * +1 (617) 258 0992 * http://purl.oclc.org/net/charles
*** new phone number ***
W3C Web Accessibility Initiative - http://www.w3.org/WAI
545 Technology sq., Cambridge MA, USA
On Mon, 21 Dec 1998, Jason White wrote:
In response to Charles' comment, I could rather unhelpfully suggest
that a full transcript of all textual material be provided in an
appropriate sign language as a video track or using any symbolic
representation that is ultimately developed. The difficulty here is
that for a given written or spoken language, there may be more than
one corresponding sign language (e.g., Australian Sign Language or
American Sign Language). I don't know to what extent the
transcription could be automated.
Received on Monday, 21 December 1998 15:27:08 UTC