- From: Michael Livesey <mike.j.livesey@gmail.com>
- Date: Mon, 5 Aug 2024 08:15:48 +0100
- To: Ms J <ms.jflz.woop@gmail.com>
- Cc: Taliesin Smith <talilief@gmail.com>, "w3c-wai-ig@w3.org" <w3c-wai-ig@w3.org>
- Message-ID: <CAJOTQELLyuBstO8wqQOPFtpKVmE0HbmZA5OURCipeydPMPPPVA@mail.gmail.com>
As per the GitHub discussion on 1.3.3, this SC does not prohibit the use of sensory information (colours, graphics, etc.) to convey information. It prohibits the additional textual description of that control being based purely on its sensory attributes. 1.3.3 also does not require that we add an additional textual description. This causes the peculiar situation that you can use purely colours and graphics in controls to convey information: if you don't describe it, it passes, but if you describe it in additional instructions, it fails (unless you describe it in non-sensory ways). If you read the discussion on GitHub, it was noted that there may have been a deliberate attempt to make 1.3.3 vague so as to dissuade people from using sensory information.

You are correct, Adam, the hamburger menu is a good example. As above, the hamburger menu icon is 100% permitted. But 1.3.3 would prohibit a textual instruction, e.g. "please click the icon with the three lines in the top left to open the menu". Removing the instruction makes it a pass.

On Sunday, August 4, 2024, Ms J <ms.jflz.woop@gmail.com> wrote:
> Hi Taliesin
> If the pulsing was just decorative, I would agree. However, the animation is clearly conveying information in this case. That much is unambiguous, so my question is based on taking this fact as a given. The whole visual experience is designed to lead users through the example in order.
> The rest of the points you have noted are absolutely other things that need to be checked wrt a button. But here, I'm specifically talking about the case where animation is used very deliberately to convey the way to proceed through the service, and this is something a screen reader user isn't getting.
> Thanks
> Sarah
> Sent from Outlook for iOS
> ________________________________
> From: Taliesin Smith <talilief@gmail.com>
> Sent: Sunday, August 4, 2024 2:50:14 PM
> To: Ms J <ms.jflz.woop@gmail.com>
> Cc: w3c-wai-ig@w3.org <w3c-wai-ig@w3.org>
> Subject: Re: Animation conveying information
>
> Hi Sarah,
> I am a designer of description for interactive simulations, not a tester.
> My concern as a designer would be: does a screen reader user have access to the information needed to perceive, operate, and understand the button, in an efficient and enjoyable way? That is:
> - Does the button have an accessible name?
> - Does the button receive a high-contrast outline or highlight on keyboard focus?
> - Is the button a native HTML button, and if not, is the button role implemented properly?
> - Is there help text nearby in a modality accessible to someone with BLV, e.g. text or sound or even haptic feedback, that indicates it is likely urgent to press the button if one wants to proceed with the web experience?
> - And finally, is the change in context when the button is pressed appropriately communicated, again in a modality accessible to someone with BLV?
> The fact that the button is pulsing is not super important unless it is essential to know that it is pulsing to be able to enjoy the experience.
> I come up against issues around equitable access to information all the time in the work I do, i.e. making highly interactive (very dynamic) simulations about STEM concepts accessible and inclusive to all learners.
> At PhET we have developed a Description Design Framework (Smith and Moore, 2020) that modularizes descriptions into State Descriptions and Responsive Descriptions, which are further broken down into Static and Dynamic State Descriptions and Object and Context Responses.
> The sims for which we have designed descriptions provide an interactive described experience: there is constant, on-demand access to an accurate description of the current state, while the object and context responses capture relevant changes happening to objects and context in the moment, in response to interaction, as a learner interacts and makes changes.
> While the framework lets me convey a lot of detail that becomes part of the experience, it never includes every visual detail: only the information relevant for successful interaction, only the information needed for sense-making, and often a little extra detail when needed to make things fun and enjoyable.
> I am not convinced that the animation itself needs to be made accessible to screen reader users as long as the interaction itself is accessible with a screen reader. I would be more concerned about people who can see but who could be confused or distracted by the animation.
> I hope that is helpful information,
> Taliesin
>
> ~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~
> Taliesin L. Smith
> talilief@gmail.com
> taliesin.smith@colorado.edu
>
> Inclusive Design Research Specialist
> PhET Interactive Simulations
> http://phet.colorado.edu/
> Department of Physics
> University of Colorado, Boulder
>
> On Aug 2, 2024, at 13:40, Ms J <ms.jflz.woop@gmail.com> wrote:
> Hello
> This is maybe niche and something I haven't seen before. I have a website which uses animation to convey information. For example, to show that you must click a button next, the button pulses. This is information conveyed by movement. It isn't really time-based media and it isn't really non-text content... it is animation/movement used to convey an instruction.
> What SC would you align this with, please? It is almost sensory characteristics, but the instruction is implicit in the animation...
> It needs an alternative for people who can't perceive the animation; would this be 1.1.1? Or 1.2.1?
> Thanks
> Sarah
> Sent from Outlook for iOS
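For readers who want to see the 1.3.3 distinction discussed above in concrete terms, here is a minimal sketch using TypeScript DOM calls. It assumes a hypothetical hamburger toggle; the "Menu" label, the glyph, and the wording of the instructions are illustrative only and are not taken from any site discussed in the thread.

```typescript
// Hypothetical hamburger toggle. The three-line icon itself is permitted under
// 1.3.3: the control conveys its purpose programmatically via an accessible name.
const toggle = document.createElement("button");
toggle.setAttribute("aria-label", "Menu");      // accessible name, non-sensory
toggle.setAttribute("aria-expanded", "false");
toggle.textContent = "\u2630";                  // ☰ glyph, purely visual

// What 1.3.3 targets is instruction text that relies solely on sensory
// characteristics, e.g. "click the icon with the three lines in the top left".
// An instruction that also references the non-sensory name passes:
const hint = document.createElement("p");
hint.textContent =
  'Use the "Menu" button in the top left corner to open the navigation.';

document.body.append(toggle, hint);
```

As noted in the reply above, removing the instruction entirely also passes; the icon-only control does not fail 1.3.3 on its own.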
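And a minimal sketch of the button checklist Taliesin walks through, assuming a hypothetical pulsing "Continue" control; the pulse class, the hint wording, and the live-region announcement are assumptions for illustration, not recommendations made in the thread.

```typescript
// Hypothetical pulsing "Continue" button, with the same "do this next"
// information exposed in non-visual modalities.
const next = document.createElement("button"); // native button: role and keyboard support for free
next.textContent = "Continue";                 // accessible name
next.classList.add("pulse");                   // assumed CSS animation; visual-only, decorative for AT

// Help text nearby, programmatically associated with the control.
const hint = document.createElement("p");
hint.id = "next-step-hint";
hint.textContent = "Next step: press Continue to move on through the example.";
next.setAttribute("aria-describedby", hint.id);

// Change of context announced when the button is pressed (assumed status region).
const status = document.createElement("div");
status.setAttribute("role", "status");
next.addEventListener("click", () => {
  status.textContent = "Example loaded. You are now on the next step.";
});

document.body.append(hint, next, status);
```

A high-contrast outline on keyboard focus, the remaining item on the checklist, would live in CSS (e.g. a :focus-visible rule) rather than in this script.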
Received on Monday, 5 August 2024 07:15:53 UTC