- From: Andi Snow-Weaver <andisnow@us.ibm.com>
- Date: Wed, 12 Oct 2005 13:08:36 -0500
- To: Christophe Strobbe <christophe.strobbe@esat.kuleuven.be>
- Cc: public-wcag-teamc@w3.org
Thanks Christophe. I really would like to remove that phrase "in text" because I think you should be able to do it any way that is accessible (i.e. meets all applicable guidelines). If you use non-text content, then you could simply provide a text alternative. But the text alternative would also meet this success criterion so maybe it doesn't matter and is not worth the effort it would take to get consensus on this.

Andi

Christophe Strobbe <christophe.strobbe@esat.kuleuven.be>
10/12/2005 11:55 AM
To: Andi Snow-Weaver/Austin/IBM@IBMUS
Subject: Re: Your comments on GL 2.5 Level 2 SC 1 guide doc

Hi Andi,

At 18:09 12/10/2005, you wrote:
>Christophe,
>
>With regard to your feedback on our guide doc [1]
>
><cs>
>VoiceXML applications don't describe errors in "text" but in speech
>(preferably synthesized speech so that there is text available somewhere
>if there should be a system for deaf users to access VoiceXML
>applications). Maybe the SC should read: "If an input error is detected,
>the error is identified and described to the user in the same modality
>that is used for labels, prompts and other guidance in the
>form/interaction." Writing VoiceXML techniques for this SC should be easy
>(VoiceXML has noinput, nomatch and reprompt elements) but is low
>priority; they could go to the "boneyard" for GL 2.5.
></cs>
>
>If the VoiceXML application is using synthesized speech, then the errors
>are described in text aren't they? They just happen to be rendered by a
>speech synthesis engine.

Yes, but the errors are not sent as characters. If we allow synthesised speech as text, how does this SC disallow the live generation of images containing the text of the error? Of course, GL 1.1 kicks in for both cases.

>And if the application is using recorded speech for errors, then it would
>have to provide a text alternative in order to meet GL 1.1. In either
>case, if the application is being rendered by an AT for deaf users, the
>text is there isn't it?

Yes (although I'm not aware of the existence of such applications).

>So is this really a problem?

Probably not. Now that you make me think a little bit harder, I see another way around it: GL 4.2 L1 SC1. Providing textual interaction for deaf users as an alternative to synthesised speech is really providing an alternate form. Whichever solution we choose (treat synthesised speech as text or refer to GL 4.2 L1 SC1), it is worthwhile recording it. I'll add it to the document on WCAG & VoiceXML, which will be discussed by the VoiceXML Forum Accessibility Committee.

>If you still think it's a problem, we thought your suggested wording was
>a little verbose and came up with an alternative suggestion. We could
>propose changing the SC to "If an input error is detected, the error is
>identified and described to the user." Would this resolve the issue?

I'm a little bit surprised about the removal of "in text" because it allows the description of errors with non-text content. The rewording is wider than the original text but I think it's OK because GL 1.1 kicks in for non-text content. Bottom line: any of the solutions you propose is valid. If you'd rather not change the GL 2.5 success criterion, that's OK.
Regards,

Christophe Strobbe

>[1] http://www.w3.org/2002/09/wbs/35422/teamc-2/results
>
>Andi
>andisnow@us.ibm.com
>IBM Accessibility Center
>(512) 838-9903, http://www.ibm.com/able
>Internal Tie Line 678-9903, http://w3.austin.ibm.com/~snsinfo

--
Christophe Strobbe
K.U.Leuven - Departement of Electrical Engineering - Research Group on Document Architectures
Kasteelpark Arenberg 10 - 3001 Leuven-Heverlee - BELGIUM
tel: +32 16 32 85 51
http://www.docarch.be/
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
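A minimal sketch of the kind of VoiceXML technique mentioned in the thread, assuming a single date field; the form id, field name, and prompt wording are illustrative, not taken from this discussion. The noinput and nomatch handlers identify and describe the input error in the same modality as the prompts, and reprompt replays the original question.

  <?xml version="1.0" encoding="UTF-8"?>
  <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
    <form id="travel">
      <!-- The field uses the VoiceXML 2.0 built-in "date" grammar. -->
      <field name="travel_date" type="date">
        <prompt>On what date would you like to travel?</prompt>
        <!-- Input error: nothing was heard. The error is identified and
             described to the user in speech, then the field is re-prompted. -->
        <noinput>
          Sorry, I did not hear anything. Please say the date on which
          you want to travel, for example, the twelfth of October.
          <reprompt/>
        </noinput>
        <!-- Input error: the utterance did not match the date grammar. -->
        <nomatch>
          Sorry, I did not recognise that as a date. Please say a date,
          for example, the twelfth of October.
          <reprompt/>
        </nomatch>
      </field>
    </form>
  </vxml>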
Received on Wednesday, 12 October 2005 18:09:04 UTC