Re: [navigation] premise: client software affords context prompting from labels of ancestors.

Hi Al,

ATs have been relying on this assumption for quite some time now. In fact,
they will go so far as to repair something that is missing or incorrectly
coded in order to optimize the user experience, or so they hope. This is
also part of what configurability is based on. For instance, and this may
not be the best example, I can have JAWS speak the URI of an image, its
name, the surrounding text, whichever of those is longest, nothing at all,
or just the word "graphic". That may be mixing things a bit, but it should
provide the info sought; a sketch of the sort of choice I mean follows.
I'll have to try out the q command in Fire Vox; below the quoted message
I've also sketched the kind of context I imagine such a query assembling.
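
Something like the following is the kind of choice I mean for graphics.
This is only a sketch I made up (TypeScript against the DOM, with invented
setting names), not JAWS's actual logic:

// Rough sketch only: the setting name and the selection logic are made up
// for illustration, not JAWS's actual implementation.
type GraphicVerbosity =
  | "uri" | "name" | "surrounding" | "longest" | "nothing" | "say-graphic";

function textForGraphic(img: HTMLImageElement, mode: GraphicVerbosity): string {
  const uri = img.src;
  const name = img.alt || img.title || "";
  // Crude stand-in for "surrounding text": the visible text of the parent.
  const surrounding = img.parentElement?.textContent?.trim() ?? "";

  switch (mode) {
    case "uri":         return uri;
    case "name":        return name;
    case "surrounding": return surrounding;
    case "longest":
      // Speak whichever candidate string is longest.
      return [uri, name, surrounding]
        .reduce((a, b) => (b.length > a.length ? b : a), "");
    case "nothing":     return "";
    case "say-graphic": return "graphic";
  }
}

The point is just that the player, not the author, decides which of those
strings the user actually hears.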

On May 17, 2007, at 2:19 PM, Al Gilman wrote:

>
>
> In all this discussion, I am assuming one thing.  I think that PFWG  
> is also
> assuming this.  This post exposes this assumption to see if others  
> think
> it is sound.
>
> This is that labels that appear in the context of (up the ancestor
> chain from) a current node in the tree (focus point or reading point)
> are actually used in assistive presentation as relevant to answering
> the two cardinal accessibility questions:
>
> a) Where am I?
> b) What is _there_?
>
> The idea is that the processor with hands on of the final user  
> experience
> will either a) announce context as the user exercises navigation other
> than "just play it" reading, or at least b) the user can at any  
> time query
> the UI and get the context info, as with the 'q' command in Fire Vox.
>
> My impression is that such queries are not unique to Fire Vox but a  
> reasonably
> common practice in screen readers.  Could it be they are sometimes  
> called
> 'inspect'?
>
> What do people think of this allocation of responsibility between  
> the author
> and the player?
>
> Al
>
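
To make the premise concrete for myself, here is roughly the kind of
ancestor walk I imagine a 'q'-style query doing. Again, this is only my own
illustration (the label sources it checks are a small, assumed subset), not
how Fire Vox or JAWS actually compute it:

// Purely illustrative: not how Fire Vox's 'q' command (or JAWS) is
// actually implemented.  Walks up from the current node, collecting
// whatever label each ancestor offers, to answer "Where am I?".
function labelFor(el: Element): string | null {
  // A few common label sources; a real AT consults the accessibility API
  // and many more attributes and elements than this.
  const labelledBy = el.getAttribute("aria-labelledby");
  if (labelledBy) {
    // Sketch handles only the first referenced id.
    const target = document.getElementById(labelledBy.split(/\s+/)[0]);
    const text = target?.textContent?.trim();
    if (text) return text;
  }
  const ariaLabel = el.getAttribute("aria-label");
  if (ariaLabel) return ariaLabel;
  if (el instanceof HTMLFieldSetElement) {
    return el.querySelector("legend")?.textContent?.trim() ?? null;
  }
  if (el instanceof HTMLTableElement) {
    return el.querySelector("caption")?.textContent?.trim() ?? null;
  }
  return null;
}

// Collect ancestor labels from the focus (or reading) point outward, so
// the answer reads outermost context first.
function whereAmI(focus: Element): string {
  const context: string[] = [];
  for (let el: Element | null = focus.parentElement; el; el = el.parentElement) {
    const label = labelFor(el);
    if (label) context.unshift(label);
  }
  return context.join(" > ");
}

So for a text field inside a fieldset labelled "Shipping address" inside a
region labelled "Checkout", the query would come back with something like
"Checkout > Shipping address", which is one way of answering "where am I?".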

Received on Thursday, 17 May 2007 20:52:29 UTC