[navigation] premise: client software affords context prompting from labels of ancestors.

In all this discussion, I am assuming one thing, and I think that PFWG is
assuming it as well.  This post makes that assumption explicit to see
whether others think it is sound.

The assumption is this: labels that appear in the context of the current
node in the tree (up the ancestor chain from the focus point or reading
point) are actually used in assistive presentation as relevant to
answering the two cardinal accessibility questions:

a) Where am I?
b) What is _there_?
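
To make the assumption concrete, here is a minimal sketch, in TypeScript
against the DOM, of how a user agent might gather that context.  The
ancestorLabels helper is hypothetical, and it reads only aria-label;
real accessible-name computation is considerably richer (aria-labelledby,
label elements, headings, and so on).

  // Minimal sketch: collect labels up the ancestor chain from the
  // current node.  Assumes labels live in aria-label attributes.
  function ancestorLabels(node: Element): string[] {
    const labels: string[] = [];
    for (let el: Element | null = node; el !== null; el = el.parentElement) {
      const label = el.getAttribute("aria-label");
      if (label) labels.push(label);
    }
    // Reverse so the path reads outermost context first.
    return labels.reverse();
  }

  // "Where am I?" can then be answered as, for example:
  //   ancestorLabels(focusNode).join(" > ")
  // yielding something like "Main content > Search results > Item 3".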

The idea is that the software with hands-on control of the final user
experience will either a) announce context as the user navigates by means
other than "just play it" reading, or at least b) let the user query the
UI at any time for the context information, as with the 'q' command in
Fire Vox.
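
As a rough sketch of those two strategies, building on the hypothetical
ancestorLabels helper above, with speak() standing in for whatever output
channel the assistive tool actually uses:

  // speak() stands in for the tool's output channel (speech, braille...).
  function speak(text: string): void {
    console.log(text);
  }

  // (a) Eager: announce context whenever navigation moves the focus.
  function onNavigate(newFocus: Element): void {
    speak(ancestorLabels(newFocus).join(", "));
    speak(newFocus.textContent ?? "");
  }

  // (b) On demand: announce context only when the user asks for it,
  // as with the 'q' command in Fire Vox.
  function onQueryCommand(currentFocus: Element): void {
    speak("You are in: " + ancestorLabels(currentFocus).join(", "));
  }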

My impression is that such queries are not unique to Fire Vox but are
reasonably common practice among screen readers.  Could it be that they
are sometimes called 'inspect'?

What do people think of this allocation of responsibility between the author
and the player?

Al
