- From: Andi Snow-Weaver <andisnow@us.ibm.com>
- Date: Wed, 12 Oct 2005 11:09:05 -0500
- To: Christophe Strobbe <christophe.strobbe@esat.kuleuven.be>
- Cc: public-wcag-teamc@w3.org
Christophe,

With regard to your feedback on our guide doc [1]:

<cs>
VoiceXML applications don't describe errors in "text" but in speech (preferably synthesized speech so that there is text available somewhere if there should be a system for deaf users to access VoiceXML applications). Maybe the SC should read: "If an input error is detected, the error is identified and described to the user in the same modality that is used for labels, prompts and other guidance in the form/interaction." Writing VoiceXML techniques for this SC should be easy (VoiceXML has noinput, nomatch and reprompt elements) but is low priority; they could go to the "boneyard" for GL 2.5.
</cs>

If the VoiceXML application is using synthesized speech, then the errors are described in text, aren't they? They just happen to be rendered by a speech synthesis engine. And if the application is using recorded speech for errors, then it would have to provide a text alternative in order to meet GL 1.1. In either case, if the application is being rendered by an AT for deaf users, the text is there, isn't it? So is this really a problem?

If you still think it's a problem, we thought your suggested wording was a little verbose and came up with an alternative suggestion. We could propose changing the SC to "If an input error is detected, the error is identified and described to the user." Would this resolve the issue?

[1] http://www.w3.org/2002/09/wbs/35422/teamc-2/results

Andi
andisnow@us.ibm.com
IBM Accessibility Center
(512) 838-9903, http://www.ibm.com/able
Internal Tie Line 678-9903, http://w3.austin.ibm.com/~snsinfo
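For reference, a minimal sketch of the kind of VoiceXML error handling discussed above, using the noinput, nomatch, and reprompt elements. The form name, field name, grammar file, and prompt wording are illustrative assumptions, not taken from the original exchange:

  <?xml version="1.0" encoding="UTF-8"?>
  <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
    <form id="travel">
      <!-- Hypothetical field; the grammar file "date.grxml" is assumed. -->
      <field name="travel_date">
        <prompt>On what date would you like to travel?</prompt>
        <grammar src="date.grxml" type="application/srgs+xml"/>
        <!-- The error is identified and described to the user in the
             same (spoken) modality as the prompt, then the prompt is
             replayed. -->
        <noinput>
          I did not hear a date.
          <reprompt/>
        </noinput>
        <nomatch>
          Sorry, that is not a date I recognize. Please say a date,
          for example, October twelfth.
          <reprompt/>
        </nomatch>
        <filled>
          <prompt>You said <value expr="travel_date"/>.</prompt>
        </filled>
      </field>
    </form>
  </vxml>

Because the error messages here are authored as text and rendered by a speech synthesis engine, this also illustrates the point in the reply: the text exists in the document even though the user hears it as speech.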
Received on Wednesday, 12 October 2005 16:11:50 UTC