Re: Highlighting areas of text on mouse click.

At 08:38 AM 2002-12-08, Jonathan Chetwynd wrote:

>well there certainly is a case for screen readers (and browsers) to 
>optionally highlight the word being read.

Yes, this is a well-known assistive mode.

>not sure any do though

Here we come to the fact that aftermarket assistive technology is special
stuff for special needs; it is need-specific, not usually universal.

What you are getting at is requirements for a screen reader targeted to SLD
users.

On the 'highlighting the region' issue: yes, there is a use for this.  In
fact, you want it to be able to ascend the tree (retrace the breadcrumb
trail) of part-whole relationships.  Highlight the smallest recognized
element you clicked in or have moused over; query for its parent and the
enclosing element's scope is indicated, and so on.  This would be very
useful in auditing the svg:g tree in a scene, but also for teaching topic
sentence, topic paragraph, section title, and the like.
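
To make the mechanism concrete, here is a sketch in script against the
ordinary DOM.  The "scope-highlight" class and the 'up' keybinding are
invented for illustration; a real AT would hook its own highlight and input
machinery.

  // Sketch only: class name and keybinding are assumptions, not from a spec.
  let currentScope: Element | null = null;

  function highlightScope(el: Element | null): void {
    currentScope?.classList.remove("scope-highlight");
    currentScope = el;
    currentScope?.classList.add("scope-highlight");
  }

  document.addEventListener("click", (ev) => {
    // Start with the smallest recognized element under the pointer.
    if (ev.target instanceof Element) highlightScope(ev.target);
  });

  document.addEventListener("keydown", (ev) => {
    // Each 'up' query retraces one step of the part-whole breadcrumb trail.
    if (ev.key === "ArrowUp" && currentScope?.parentElement) {
      highlightScope(currentScope.parentElement);
    }
  });

The same pattern applies with mouseover in place of click.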

To do that sort of thing,

a) the rhetorical notions you want to teach have to be machinably mapped to
the syntax in use

(XAG Guidelines 2, 4.  http://www.w3.org/TR/xag)

b) some sort of [preferably distinguished] screen highlight must be
available for programmatic control from the tutorware add-on (alias
assistive technology).

c) it still takes a modular extension to interpret the syntax tree (the DOM
tree) in the light of the rhetorical graph-grammar and follow XPaths through
that structure to identify the elements that fill rhetorical relationship
roles of interest in the current lesson.
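
A minimal sketch of how (a) and (c) might fit together; the role names and
XPath expressions below are made up, standing in for a real lesson's mapping.

  // Sketch only: a lesson's rhetorical roles mapped to XPath expressions.
  const lessonRoles: Record<string, string> = {
    "section title": "//h2",
    "topic paragraph": "//section/p[1]",
  };

  // Item (c): follow the XPath through the DOM tree to find the elements
  // filling the rhetorical role of interest in the current lesson.
  function elementsForRole(role: string): Element[] {
    const found: Element[] = [];
    const result = document.evaluate(
      lessonRoles[role], document, null,
      XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
    for (let i = 0; i < result.snapshotLength; i++) {
      const node = result.snapshotItem(i);
      if (node instanceof Element) found.push(node);
    }
    return found;
  }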

The long-awaited 'views' module in the DOM would address this.  Note that
you are not here talking about what the UAAG covers, which deals with the
selection and focus comprehended by the application.  You may need an added
layer of inspection selection, just as the screen readers all have an added
layer of inspection cursor in addition to the cursor that the OS and
application know about.  Then the tutorware AT would find the related
element by XPath following and indicate it to the user by DOM-directed
state manipulations reflected in highlight effects on the screen.
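
By way of illustration, the added layer of inspection selection might look
like the sketch below; the class and the "inspection-highlight" name are
assumptions, not anything from the DOM or UAAG specifications.

  // Sketch of an inspection selection held by the tutorware layer itself,
  // separate from the selection and focus the application comprehends.
  class InspectionCursor {
    private node: Element | null = null;

    moveTo(el: Element): void {
      this.node?.classList.remove("inspection-highlight"); // invented class
      this.node = el;
      this.node.classList.add("inspection-highlight");
      // document.activeElement and the app's own selection are untouched.
    }

    // Follow an XPath relative to the current inspection point and reflect
    // the result as a highlight effect on screen.
    follow(relativeXPath: string): void {
      if (!this.node) return;
      const hit = document.evaluate(
        relativeXPath, this.node, null,
        XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
      if (hit instanceof Element) this.moveTo(hit);
    }
  }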

I think that there may be some mass-market utility for context highlighting
on a user event, if linked to (triggered by) the context menu query,
particularly for an extended context query which gives a breadcrumb trail
(part/whole ancestry trace) by way of orientation before it lists action
opportunities.  It would make all kinds of sense to indicate in the graphic
presentation the scope that corresponds to the least scope mentioned in the
breadcrumb trail, which gives the bounds of 'here' in the "you are here"
description.
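
As a sketch of the ancestry trace itself (the title attribute here is just a
stand-in for whatever label the content supplies):

  // Sketch: assemble the part/whole ancestry trace for an element.
  function breadcrumbTrail(el: Element): string[] {
    const trail: string[] = [];
    for (let node: Element | null = el; node; node = node.parentElement) {
      trail.unshift(node.getAttribute("title") ?? node.localName);
    }
    return trail; // e.g. ["html", "body", "div", "p", "span"]
  }

The last entry in the trail is the least scope, the 'here' to indicate on
screen.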

I know plenty of people who could stand to have this sort of query to
assist them in decoding nested windows.

The application to SLD users, say in teaching the words 'brother,' 'cousin,'
and the like with the aid of a pictorial family tree (of the learner's own
family), is obvious.

For non-SLD users, teaching the rules for which courts hear which cases at
which stages of the appeals process would use a similar relationally
annotated graph.

A research topic with regard to this sort of structural navigation is to
investigate whether it saves time, for those with a high input-symbol cost
(single switch, voice command, etc.), over the existing MouseGrid search.
What if they had the option of saying 'up' and seeing the enclosing
structure highlighted, and 'next' once they had reached the level at which
they intended to cruise?  Coupled with the annotated context menu proposal,
this could be a general function of the voice-command UI.  'Menu' gets you
the context menu.  Configuration says whether the breadcrumb trail is
displayed at that time or only after saying 'trail'.  But then one can speak
items in the active menu.
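
A sketch of the spoken commands driving such a scope cursor; the ScopeCursor
shape is an assumption echoing the inspection-cursor sketch above, and
'menu' is left as a stub.

  // Sketch only: 'up' widens the scope, 'next' moves among siblings.
  interface ScopeCursor {
    node: Element | null;
    moveTo(el: Element): void;
  }

  function onVoiceCommand(cmd: string, cursor: ScopeCursor): void {
    const n = cursor.node;
    if (!n) return;
    switch (cmd) {
      case "up":   // widen the highlight to the enclosing structure
        if (n.parentElement) cursor.moveTo(n.parentElement);
        break;
      case "next": // cruise among siblings at the chosen level
        if (n.nextElementSibling) cursor.moveTo(n.nextElementSibling);
        break;
      case "menu": // would open the breadcrumb-annotated context menu here
        break;
    }
  }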

The trick is to get the techies pushing this technology to partner with your
team as an evaluation site.

Is there anyone here who could read over the recently published

Multimodal Interaction Use Cases
http://www.w3.org/TR/2002/NOTE-mmi-use-cases-20021204/

and see if their collection of cases covers these needs or not?

Al

Recurring principles exposed here (40X points):

In the try-harder escalation sequence[1] for making an interface usable under
adverse conditions, the first functions to add to a browser are those common
in an editor; if nothing there does the trick, the next place to look is the
functions common in a debugger.

[1]  For the "try-harder escalation sequence" see for example
http://trace.wisc.edu/docs/modality_translation_on_demand/modality_translation.htm


>Jonathan
