Re: visual and auditory navigation: examples needed

[I have moved the Cc: address to www-archive rather than w3c-wai-gl for this post.  It bears importantly on future technology that will let us create more gracefully-transforming user experiences, but not so much on technology that can be put into practice by single-seat webmasters at this time.]

Examples:  

Are you looking for experience examples of how it should work, or coded examples of how to get it to work that way?

* Experience examples:  

- Teaching machines for special needs.  I think that there are software packages from Prentke-Romich or somebody like that, so you don't have to get a whole $$$ workstation.

- Dave Bolnik did a multimedia example with word highlighting in sync with the speech, which he showed at CSUN a couple of years back.  It was a derivative of a canned multimedia presentation done for the Holocaust Museum in DC.  This, however, is a canned presentation mechanically following a programmed timeline, not a timeline derived on the fly from interaction with a user.  It was done with SAMI.  If I understand the concept of the SMART technology proposition, it is, in rough terms, to create an industry-consensus SAMI workalike.

* Code examples:

- Charles can give us a steer on how to do it in SVG+ACSS; a rough sketch of what that might look like follows this list.

- Are you familiar with the proposals of the SALT forum at www.saltforum.org?
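
To give a feel for the SVG+ACSS direction while we wait for Charles, here is a rough sketch.  Take it as an assumption, not a recipe: it presumes a user agent that honors the CSS2 aural properties on hover and focus, and the class names, text, and sound file are mine, not anything Charles has proposed.

  <svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">
    <defs>
      <style type="text/css"><![CDATA[
        /* CSS2 aural properties; a speech-capable renderer is assumed. */
        .stop        { speak: none; }
        .stop:hover,
        .stop:focus  { speak: normal;
                       voice-family: female;
                       cue-before: url(ding.wav);
                       pause-after: 500ms; }
      ]]></style>
    </defs>
    <!-- "read on mouse over, read on tab": the title/desc text is what
         would be spoken when the group gains hover or keyboard focus.
         How SVG exposes keyboard focus is itself left open here. -->
    <g class="stop">
      <title>High Street bus stop</title>
      <desc>Buses 4 and 17 stop here every ten minutes.</desc>
      <circle cx="100" cy="100" r="20" fill="blue"/>
    </g>
  </svg>

The point of the sketch is only that the hover/focus state, not a canned timeline, is what triggers the speech.  Whether any current SVG viewer plus speech engine will actually honor it is exactly the question for Charles.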

At 08:59 AM 2002-03-31 , jonathan chetwynd wrote:
>read on action is:
>
>read on mouse over, read on tab or other action.
>
>I'll be looking into all this over the coming months, but really believe 
>some good examples are needed, as I'm very unclear about what is possible.
>
>Why this is SO important is that many users have multiple impairments.
>

This is not going to help Jonathan at the moment, but this problem is an excellent one to pose to people as a research topic.

Lots of objects in the user interface scene are related to, or sensitive to, lots of event types in one way or another.  This is a many-to-many relation.  The user interface events are by definition asynchronous and exogenous.  But speaking has to be well ordered to be comprehensible.  A simple "speak on any related event" rule would create a total hash.
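
To make the hash concrete, here is a toy sketch of that many-to-many relation and of what the naive rule does with it.  Python, and every name in it is made up for illustration.

  # Which scene objects care about which UI event types (many-to-many).
  interest = {
      "mouseover":  ["map-region", "tooltip", "status-bar"],
      "focus":      ["map-region", "toolbar-button"],
      "timer-tick": ["clock", "status-bar"],
  }

  def naive_speak_on_any_related_event(events):
      # "Speak on any related event": every interested object talks.
      for event in events:
          for obj in interest.get(event, []):
              print(f"(speaking) {obj}: reacting to {event}")

  # Two nearly simultaneous user events plus a background tick yield
  # seven utterances with no ordering or suppression -- the total hash.
  naive_speak_on_any_related_event(["mouseover", "focus", "timer-tick"])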

Blending the UI-event-defined "what's happening" into priorities for what needs to be said, and then flowing that into a speech program [voice-over script], is a job for constraint language programming, for a graph-pattern-applicative engine.  It would provide a high-function model for the autonomous-process interoperation protocol that is closest to the physical user interface.
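
I am not going to fake a constraint engine in a mail message, but the shape of the job can be sketched with a plain priority queue standing in for it.  Python again, everything in it hypothetical; the priorities would really come from the rule base or constraint solver deciding what needs to be said.

  import heapq

  class VoiceOverScript:
      """Collects nominated utterances and flows them out in order."""

      def __init__(self):
          self._queue = []
          self._seq = 0   # keeps first-come order among equal priorities

      def nominate(self, priority, utterance):
          # Lower number = more urgent.
          heapq.heappush(self._queue, (priority, self._seq, utterance))
          self._seq += 1

      def flow(self):
          # Flow the blended result out as one well-ordered voice-over script.
          while self._queue:
              _, _, utterance = heapq.heappop(self._queue)
              print(f"(speak) {utterance}")

  script = VoiceOverScript()
  # Asynchronous, exogenous events nominate speech in any old order...
  script.nominate(2, "Status bar: download 40 percent complete.")
  script.nominate(0, "Focus moved to the Send button.")
  script.nominate(1, "You are over the High Street bus stop.")
  # ...but the speech comes out ordered by what matters most.
  script.flow()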

Eventually, event capture has got to go, so that actors present in the background of the scene cannot hide information from one another.  But this takes a protocol for composing the responses nominated by the several actors concurrently active in responding to what's happening in the scene.  [Broken record: you can do it in Kohn/Nerode-style hybrid control.  Same stuff as the Nemeth translator that Sean pointed us at.]
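
For the composition-instead-of-capture point, the protocol I am gesturing at looks roughly like this.  Python once more, purely illustrative: the real thing would be the hybrid-control machinery, not a three-line merge rule.

  # Instead of one actor capturing the event and silencing the rest,
  # every actor nominates a response and a composer blends them.
  def region_actor(event):
      return {"speech": "You are over the High Street bus stop.", "weight": 2}

  def status_actor(event):
      return {"speech": "Two new messages have arrived.", "weight": 1}

  def compose(event, actors):
      nominations = [actor(event) for actor in actors]
      # A deliberately trivial composition rule: speak in descending weight.
      nominations.sort(key=lambda n: n["weight"], reverse=True)
      return [n["speech"] for n in nominations]

  for line in compose("mouseover", [region_actor, status_actor]):
      print(f"(speak) {line}")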

Al

>
>thanks
> 

Received on Sunday, 31 March 2002 11:42:38 UTC