- From: Bryan Garaventa <bryan.garaventa@ssbbartgroup.com>
- Date: Wed, 12 Nov 2014 18:52:04 +0000
- To: Matthew King <mattking@us.ibm.com>
- CC: 'W3C WAI Protocols & Formats' <public-pfwg@w3.org>
- Message-ID: <0961e5390ca949aba5e1d3f40a9eb3bf@BY2PR03MB347.namprd03.prod.outlook.com>
Thank you, I understand your points and agree that some of these things could be done more reliably by ATs such as screen readers by leveraging the APIs better. The problem with having region labels announced automatically when form fields receive focus, though, is that the deduction made by the AT is too subjective and can never be reliable as a result.

The first problem is nesting. If the first parent region were properly labeled with the step name, then yes, when the form field received focus, the step could easily be deduced and announced in addition to the label. However, if the parent of the form field is a form region with an explicit label, the next parent region is labeled with that particular form group name, and the next parent region above that is labeled with the step name for the wizard process, then where does the AT know to draw the line? The only alternatives are to use only the first-level parent and ignore all the others above it, or to traverse all the way up the tree and concatenate every region and label and announce the whole string when the form field receives focus, which could get extremely annoying and overly verbose.

The other issue with relying on region labels for this purpose is that ATs would then need to identify the first form field in the region, so that only that field would announce all of this extra data. Otherwise, every form field in any named region would have all of that region label data announced at the same time as the form field name, the attached description if present, the fieldset/legend if present, as well as its role, state, and value, every time the user presses Tab or Shift+Tab to navigate, which would be exceedingly verbose as well.

My proposal to use aria-describedby is much simpler, and gives the power of announcement both to the developer and to the screen reader user at the same time. For instance, the developer can choose in advance what they want to be announced when the field receives focus and update that dynamically, and the screen reader user (using JAWS, for example) can disable tutor messages to ignore the announcement of aria-describedby attached text if they wish.

Personally, I'm much more in favor of customizing what is announced as needed, so as not to overwhelm users, than of applying a blanket approach that may cause serious issues for navigation in complex UI designs. Smarter ATs would be a good bonus at the same time though, I agree totally.
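For what it's worth, here is a minimal sketch of the aria-describedby pattern I'm describing; the IDs, step heading text, and field name are invented for illustration rather than taken from any real page:

```html
<!-- Hypothetical markup for step 2 of a wizard. -->
<h2 id="step2-heading">Step 2. Billing Information</h2>

<label for="billing-address">Address</label>
<!-- aria-describedby points at the step heading, so when focus lands on this
     field the screen reader announces the step text along with the field's label. -->
<input type="text" id="billing-address" aria-describedby="step2-heading">

<script>
  // Optionally remove the description the first time the user leaves the field,
  // so the step is only announced once rather than on every visit.
  var address = document.getElementById('billing-address');
  address.addEventListener('blur', function () {
    address.removeAttribute('aria-describedby');
  });
</script>
```

When the step changes dynamically, the script can point aria-describedby at whichever element holds the new step text before moving focus into the first field of that step.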
From: Matthew King [mailto:mattking@us.ibm.com]
Sent: Wednesday, November 12, 2014 3:31 AM
To: Bryan Garaventa
Subject: RE: Should user agents be expected to expose the presence of an aria-current descendant?

Bryan ... I think it is not so complicated.

> Joanie's example of the Amazon checkout process is a good one for this, which includes the step information buried within about a hundred header navigation links, and there is no declarative programmatic means for an AT to automatically parse and pick out the relevant text within all of that chaff.

There is if 1) main is the wizard and 2) the current step labels main. But, for years, Amazon checkout has been one clean page with all the steps on one page and nothing else there ... at least if you have Prime and have saved all your info.

> Also, when a full page refresh occurs, the whole page is typically parsed by the AT, not just the new content.

Yes, but when focus is placed in an input after load, it's a no-brainer that the screen reader should tell the user where the focus is.

JAWS is really good about a lot of things, but controlling the audio and speech experience during a page load is not one of them.

> Alternately, when dynamic changes occur, such as within a dynamically displayed wizard, the step information is just part of the textual information, and the labelling mechanism, whether this is included within a heading or region name, is not reliably conveyed

I think browsers are very reliable at conveying this information, assuming the author gave the region a good label, e.g., aria-labelledby pointing to the step name. And there is no reason why the screen reader cannot be as reliable, even on a busy page.

> It would be nice if ATs were smart enough to do all of these things automatically, but until we have AI built into ATs, I think we are going to have a long wait.

I am not seeking rocket science ... I just want screen readers to start leveraging ARIA in some very simple ways that would close some of the productivity gaps it is capable of closing, e.g., the simple glance mentioned by Joanie. It is really not so hard. I could spec everything I described in a day or so ... guess I should find time to do that and see if I can find some takers. It seems to me that the screen reader vendors have mostly pasted ARIA on top of their forever way of doing things instead of really thinking about how it could simplify the experiences they are creating. In their defense, however, adoption of ARIA is still pretty immature, and that makes it much harder to see the possibilities.

Matt King
IBM Senior Technical Staff Member
I/T Chief Accessibility Strategist
IBM BT/CIO - Global Workforce and Web Process Enablement
Phone: (503) 578-2329, Tie line: 731-7398
mattking@us.ibm.com

Bryan Garaventa <bryan.garaventa@ssbbartgroup.com> wrote on 11/11/2014 05:08:46 PM:

> From: Bryan Garaventa <bryan.garaventa@ssbbartgroup.com>
> To: Matthew King/Fishkill/IBM@IBMUS,
> Cc: Joanmarie Diggs <jdiggs@igalia.com>, "LWatson@PacielloGroup.com" <LWatson@PacielloGroup.com>, "'W3C WAI Protocols & Formats'" <public-pfwg@w3.org>
> Date: 11/11/2014 05:09 PM
> Subject: RE: Should user agents be expected to expose the presence of an aria-current descendant?
>
> Granted, but it's rarely that simple.
>
> Joanie's example of the Amazon checkout process is a good one for this, which includes the step information buried within about a hundred header navigation links, and there is no declarative programmatic means for an AT to automatically parse and pick out the relevant text within all of that chaff.
>
> Also, when a full page refresh occurs, the whole page is typically parsed by the AT, not just the new content.
>
> Alternately, when dynamic changes occur, such as within a dynamically displayed wizard, the step information is just part of the textual information, and the labelling mechanism, whether this is included within a heading or region name, is not reliably conveyed automatically, and may not even be present within the same top level container element as the rest of the wizard content.
>
> It would be nice if ATs were smart enough to do all of these things automatically, but until we have AI built into ATs, I think we are going to have a long wait.
>
> From: Matthew King [mailto:mattking@us.ibm.com]
> Sent: Tuesday, November 11, 2014 4:40 PM
> To: Bryan Garaventa
> Cc: Joanmarie Diggs; LWatson@PacielloGroup.com; 'W3C WAI Protocols & Formats'
> Subject: RE: Should user agents be expected to expose the presence of an aria-current descendant?
>
> > No problem, this can easily be done by adding aria-describedby on the form field that focus is set to, so that the step is automatically announced at the same time as the form field label. Then, the developer can optionally remove the aria-describedby attribute or set it to null so that this only happens once when it receives focus.
>
> That would definitely work, but seems a bit complex to me. I don't think the developer should have to get so fancy given that all the needed info could be easily extracted and used by a screen reader that has learned how to use ARIA effectively. There is a lot of room, a lot of room, for improvement in how intelligently screen readers exploit ARIA.
>
> Assuming only the following:
>
> 1. Step 2 is in the main content
> 2. Main content is labeled as step 2
> 3. Focus moved from step 1 to the input in step 2, either via a page load or a dynamic update to main content in response to a user action.
>
> The screen reader should have enough smarts to let the user know where the focus is and what has changed. It is up to the page author to stay out of the way and not throw in a bunch of extraneous events or other information that will stomp all over smart interpretation of the UI by the screen reader.
>
> Note, for this use case, there is no dependency on aria-current.
>
> Speaking of the "where am I glance" by a sighted user, none of the screen readers today have a good "where am I" function. Even the ancient Screen Reader 2 for OS/2, the first GUI screen reader, had a better "where am I" feature than anything out there now. It takes multiple and sometimes esoteric commands to learn much about where you are in most screen readers. There are at least a half dozen commands in each screen reader that provide some form of "where am I" information, and it takes quite an astute user to put it all together.
>
> Just in this example, in nearly every screen reader, it takes one command to learn the window title and application that has focus, another for the main content or page title (which should match the window title in a browser but may be different), another set of commands to decipher the current place within the landmark region hierarchy, potentially another for the step in the wizard, and another for the label and contents of the input. And, if there were an ARIA description, or if labelledby is used, and if the user wishes to parse that information, even more gymnastics are required. The problem gets much worse if the user's focus is buried in a tree or complex grid.
>
> This is all done with an easy glance by a sighted person. ARIA markup and practices make a very rich, efficient, and easily understood screen reader "where am I" function very feasible. There should be an easy way in all screen readers for the user to make an intelligent glance that is context-based (i.e., it takes into consideration the application, role, state, relevant properties, and the current state of the UI). The screen reader should assemble it in an easily understood and easily reviewed manner. ...
> At least that would be one of my priorities if I were designing improvements for current screen readers.
>
> Matt King
> IBM Senior Technical Staff Member
> I/T Chief Accessibility Strategist
> IBM BT/CIO - Global Workforce and Web Process Enablement
> Phone: (503) 578-2329, Tie line: 731-7398
> mattking@us.ibm.com
>
> From: Bryan Garaventa <bryan.garaventa@ssbbartgroup.com>
> To: Joanmarie Diggs <jdiggs@igalia.com>, "LWatson@PacielloGroup.com" <LWatson@PacielloGroup.com>, "'W3C WAI Protocols & Formats'" <public-pfwg@w3.org>,
> Date: 11/11/2014 11:17 AM
> Subject: RE: Should user agents be expected to expose the presence of an aria-current descendant?
>
> > In my example, when the step changes, focus is not set to the container; focus is automatically set to the first input field (Address) of the new step (Step 2. Billing Information). Sighted users see what step they're in by glancing above the form fields (probably, anyway. It might be in a sidebar.) For a user who is blind to accomplish the same thing, that user has to leave the focused form field and go looking for that progress indicator non-visually and then return to that form field to fill it out. Wouldn't it be nice(r) if the screen reader could do the glancing up for the end user so that user can remain in the focused field and immediately fill it out because his/her screen reader automatically announced "Step 2. Billing information"?
>
> No problem, this can easily be done by adding aria-describedby on the form field that focus is set to, so that the step is automatically announced at the same time as the form field label. Then, the developer can optionally remove the aria-describedby attribute or set it to null so that this only happens once when it receives focus.
>
> -----Original Message-----
> From: Joanmarie Diggs [mailto:jdiggs@igalia.com]
> Sent: Monday, November 10, 2014 4:21 PM
> To: Bryan Garaventa; LWatson@PacielloGroup.com; 'W3C WAI Protocols & Formats'
> Subject: Re: Should user agents be expected to expose the presence of an aria-current descendant?
>
> Hi Bryan.
>
> On 11/10/2014 06:10 PM, Bryan Garaventa wrote:
>
> > [...]
> > and there is no need to convey this unless the element is encountered during navigation by the user.
>
> If there's no expectation that the step will be presented unless navigated to, that indeed makes my life much easier. Though I think it potentially makes the value of aria-current less powerful. In my experience, the current step in a process (filling out a form, tracking a package) is something you won't encounter unless you start from the top of the page and work your way systematically down to the stuff you want to interact with. Which brings me to:
>
> > So in your example, when the step changes and focus is set to the container,
> In my example, when the step changes, focus is not set to the container; focus is automatically set to the first input field (Address) of the new step (Step 2. Billing Information). Sighted users see what step they're in by glancing above the form fields (probably, anyway. It might be in a sidebar.) For a user who is blind to accomplish the same thing, that user has to leave the focused form field and go looking for that progress indicator non-visually and then return to that form field to fill it out. Wouldn't it be nice(r) if the screen reader could do the glancing up for the end user, so that user can remain in the focused field and immediately fill it out because his/her screen reader automatically announced "Step 2. Billing information"?
>
> --joanie
Received on Wednesday, 12 November 2014 18:52:39 UTC