FYI use case for semantic clarifications in Voice Browser Apps [was: SKOS? and auto-translation]

* background:

I was discussing with Alistair Miles of the Semantic Web -- the
people who bring us SKOS -- the potential use of the terms in
that lexicon in meeting the objectives of WCAG 2, Success
Criterion 3.1.3.

http://www.w3.org/WAI/GL/WCAG20/guidelines.html#meaning

Alistair raised the entirely valid point that a thesaurus in SKOS
does not constitute a sufficient knowledge base for auto-translation
of free text.

I spun out a use case to explain to him that there were niches in the
support of people with disabilities where, with a little [e.g. SKOS]
help, automatic translation might have a real role to play. Since the
example bears on the questions of "PLS and meaning" I wanted to share
this here as grist for our discussions on Thursday and through the
Last Call.

Use case:

Let me isolate a disability use case where annotating the intended
sense of selected terms (by exception, following the WCAG 2 success
criterion: the sense is not the dominant sense of that term, or is
exotic, uncommon, and likely to be unknown) is likely to make a
significant improvement in auto-translation, and where the
auto-translation is likely to fill a market niche.

This has to do with the auto-translation of SRGS grammars that define
the catch-phrase structure for voice input in VoiceXML applications,
translating these to a sign-language gesture grammar for use in a
gesture-recognition input module. This would be used in an adaptive
binding of the VoiceXML application for use by those who are Deaf,
communicate in sign language as their first language, and have speech
that the speech recognizer does not handle well.

Just as many Deaf people today carry text-message-enabled mobile
devices, and a few are busily engaging in video chat where they can
get broadband connections to the Internet, the sign-language or
culturally-Deaf community is likely to take to the
gesture-recognition-enabled mobile devices emerging on the market.
But sign is still a different natural language and requires
translation.

The [SKOS or similar] markup on terms in the voice grammar is not
used to provide all the knowledge used in this translation. It just
cues the sense to be translated when the sense is not what the
translation software would be likely to assume.
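
To make that concrete, here is a rough Python sketch (every term,
sense label, and gloss below is invented for illustration) of how an
explicit sense annotation could simply override the dominant-sense
guess a grammar-to-sign translator would otherwise make:

  # Minimal sketch (all names hypothetical): an explicit sense
  # annotation on a grammar term overrides the dominant-sense guess
  # the translator would otherwise make.

  # Illustrative sign glosses for each (term, sense) pair.
  SIGN_GLOSSES = {
      ("book", "bound-volume"): "GLOSS:BOOK",
      ("book", "reserve"):      "GLOSS:RESERVE",
  }

  # The sense the translator assumes when the markup says nothing.
  DOMINANT_SENSE = {
      "book": "bound-volume",
  }

  def sign_for(term, annotated_sense=None):
      """Pick a sign gloss; an annotation wins over the default."""
      sense = annotated_sense or DOMINANT_SENSE[term]
      return SIGN_GLOSSES[(term, sense)]

  # In a travel-booking grammar "book" means "reserve", so that term
  # is annotated by exception; unannotated terms keep the default.
  print(sign_for("book"))             # GLOSS:BOOK
  print(sign_for("book", "reserve"))  # GLOSS:RESERVE

The point is only that the annotation is consulted by exception;
everything else in the translation still comes from the translation
software's own knowledge base.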

Just as the use of SSML in the production of audio books by RNIB
needs the Pronunciation Lexicon Specification to work around
pronunciation errors in omnibus text-to-speech algorithms, so
annotations as to sense would give us the means to work around and
touch up meaning errors in translations of speech input grammars to
sign.

The terms that need to be clarified can be identified even without
the sign-language translation program to work with. Basic
semiotic and language-control statistics can tell us when a sense is
not obvious and should be made explicit in the markup.
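
Again only as a rough sketch (the corpus counts and threshold here
are invented), the sort of statistic I have in mind is simply whether
the intended sense is the one a translator would bet on anyway:

  # Rough sketch (counts and threshold are made up): flag a term for
  # explicit sense markup when its intended sense is not clearly the
  # dominant sense of that term in general usage.

  SENSE_COUNTS = {
      "book": {"bound-volume": 950, "reserve": 50},
      "fare": {"price-of-travel": 900, "food": 100},
  }

  def needs_annotation(term, intended_sense, threshold=0.5):
      """True when the intended sense falls below a share threshold."""
      counts = SENSE_COUNTS[term]
      share = counts[intended_sense] / float(sum(counts.values()))
      return share < threshold

  print(needs_annotation("book", "reserve"))          # True  -> annotate
  print(needs_annotation("fare", "price-of-travel"))  # False -> leave alone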

The reason that there is a market opportunity for auto-translation
here is that the Deaf, like the Hakka in China and the Romany in
Europe, are a minority everywhere.  And like users with disabilities
everywhere, they have a need for a functional user experience
even where a comparable user experience would not be competitive
in the market for the attention of the Temporarily Able Bodied.


Al

Received on Friday, 24 February 2006 20:25:25 UTC