- From: Charles McCathieNevile <charles@w3.org>
- Date: Tue, 23 Feb 1999 15:02:03 -0500 (EST)
- To: Charles Oppermann <chuckop@microsoft.com>
- cc: w3c-wai-ua@w3.org
Actually there are two threads running here; I will leave the documentation one aside. My learned colleague, Charles from Redmond, is right about the two issues.

One is about native implementation, and there are various ways of doing it. CSS2 provides a very nice mechanism for native implementation of many of these features.

The other is about the programmatic interface - exposing the focus and the Point of Regard (which is the term used within the guidelines for 'area of current interest', or 'locus'). From memory, this is one of the things which is not in DOM Level 1, but is likely to be included in DOM Level 2 (the DOM group is expected to produce its Recommendation later in the year). If the UA group asked the PF group to look seriously into requiring these things, it may increase the priority somewhat.

In any case, it seems that we need checkpoints to expose the selection, focus, and point of regard to the user and to assistive technology. Lo and behold, guideline 6.1 covers this very question, in considerable detail. In the process of reviewing which features need to be implemented by which classes of user agent, I suspect we will come to guideline 6.1 in the very near future, and can keep this discussion uppermost in our minds.

Charles McCathieNevile

On Tue, 23 Feb 1999, Charles Oppermann wrote:

Although it is not obvious, based on my meeting this week with Microsoft, we should have another checkpoint: the user agent should expose the keyboard location to allow keyboard emulators equivalent control over keyboard input.

There are really two issues here:

1) Visual indication of keyboard input (focus) and current selection. Visibility is important for users with print disabilities, including low vision.

2) Programmatic indication of focus and/or selection. This is important for accessibility aids to discover the current area of user interest (sometimes known as the "Locus").

-----Original Message-----
From: Denis Anson [mailto:danson@miseri.edu]
Sent: Saturday, February 20, 1999 2:27 PM
To: Kitch Barnicle; w3c-wai-ua@w3.org
Subject: Re: comments on section 4

> Section 4.3 Support accessible keyboard input

Although it is not obvious, based on my meeting this week with Microsoft, we should have another checkpoint: the user agent should expose the keyboard location to allow keyboard emulators equivalent control over keyboard input. This is not automatic in software.

> 4.3.1 [Priority 2]
>    Allow the user to configure keyboard access to user agent functionalities. Configuration includes the ability to specify single as well as multi-key access.
>
> 4.3.2 [Priority 2]
>    Ensure that user can find out about all keyboard bindings.
>
> 4.3.4 [Priority 3]
>    Display keyboard bindings in menus.
>
> KB: We discussed on the telecon that checkpoint 4.3.4 is covered by checkpoint 4.3.2.
>
> 4.4.12 [Priority 1]
>    Allow the user to turn on and off support for spawned windows.
>
> KB: I know that spawned windows are a problem but I am not sure if it is a priority 1 problem. What do people think? Is it important to let the user turn off this feature, or should the user agent just make sure that the user is notified when a new window is spawned?

I agree that this should probably be Priority 2. Spawned windows may confuse some groups, but in general, confusion does not preclude access; it merely makes it difficult.
Denis Anson

--Charles McCathieNevile            mailto:charles@w3.org
phone: +1 617 258 0992   http://purl.oclc.org/net/charles
W3C Web Accessibility Initiative    http://www.w3.org/WAI
MIT/LCS - 545 Technology sq., Cambridge MA, 02139, USA
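For concreteness, the "programmatic indication of focus and selection" discussed above might look roughly like the sketch below, written against a modern DOM-style scripting interface. document.activeElement, window.getSelection(), and the focusin/selectionchange events are assumed conveniences here, not part of DOM Level 1; this is only an illustration of the kind of exposure an accessibility aid would need, not the interface the working group is specifying.

    // Illustrative only: how an assistive technology (or a simple test
    // harness) might query a user agent for the current focus and
    // selection through a scripting interface.
    function reportPointOfRegard(): void {
      // Element that currently has the keyboard focus, if the UA exposes one.
      const focused = document.activeElement;
      if (focused !== null) {
        console.log(`Focus: <${focused.tagName.toLowerCase()}>`);
      }

      // Current text selection, if any.
      const selection = window.getSelection();
      if (selection !== null && !selection.isCollapsed) {
        console.log(`Selection: "${selection.toString()}"`);
      }
    }

    // An accessibility aid would normally track changes rather than poll.
    document.addEventListener('focusin', reportPointOfRegard);
    document.addEventListener('selectionchange', reportPointOfRegard);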
Received on Tuesday, 23 February 1999 15:02:06 UTC