Re: maps and alternate content

Kynn, I agree completely with your statement (snippets below); it is what I too emphasized in the "How To describe flowcharts, ..." thread back in August 1999.
I'd like to see developments in an interactive, XML-based, database-driven system where I could ask questions like
"What are the states north of Kentucky?" or
"How many miles is it from City1 to City2?", just to pick a few rather trivial questions.  As Len Kasday said in the thread on describing flowcharts, the system should go beyond requiring the user to formulate questions and give the user suggestions
as to what kinds of relationships exist or what kinds of information can be retrieved.
  Of course, those with disabilities affecting comprehension of text would need a visual interface, so both are really required.  The haptic mouse is an interface which gives me yet another non-linguistic way to "ask questions".
In all cases, though, it seems to me a solid, high-level, database-driven system behind the scenes is needed, independent of the input and output modes.  By analogy with CSS, there probably need to be Device Interface sheets describing the mapping from the operations of the device to database queries (already under way - e.g. XML Query? and WAP).
That is, semantic content and device/mode are not mutually exclusive but a two-part whole.
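As a sketch of what such a Device Interface sheet might look like (all event names and patterns here are made up), each mode would carry its own declarative mapping from device operations to the same underlying query, so the semantic content never changes, only the way it is asked for.

# Hypothetical "interface sheets", one per mode: a device-level event
# pattern maps to the same (query name, arguments) pair.
SPEECH_SHEET = {
    "what states are north of kentucky": ("related", ("north_of", "Kentucky")),
}
ICON_SHEET = {
    ("icon:state", "icon:north-arrow", "icon:Kentucky"): ("related", ("north_of", "Kentucky")),
}
HAPTIC_SHEET = {
    ("probe:Kentucky", "push:north"): ("related", ("north_of", "Kentucky")),
}

# Stand-in backend query; the related() function from the sketch above
# would serve here.
def related(relation, obj):
    facts = [("Ohio", "north_of", "Kentucky"), ("Indiana", "north_of", "Kentucky")]
    return [s for (s, r, o) in facts if r == relation and o == obj]

QUERIES = {"related": related}

def dispatch(sheet, event, queries=QUERIES):
    """Resolve a device event through its interface sheet to a backend query."""
    query_name, args = sheet[event]
    return queries[query_name](*args)

if __name__ == "__main__":
    # Three different modes, one semantic result.
    print(dispatch(SPEECH_SHEET, "what states are north of kentucky"))
    print(dispatch(ICON_SHEET, ("icon:state", "icon:north-arrow", "icon:Kentucky")))
    print(dispatch(HAPTIC_SHEET, ("probe:Kentucky", "push:north")))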
If there are researchers out there who want to give another mode of access a try, as long as I can get the same information out, great.
Dr. Raman's Emacspeak and ASTER give another interface (speech) but at a much higher level, with the ability for the user to customize higher levels (in effect allowing the user to create his/her own query language). That's what really needs to be done: customizable, extensible user agents and devices, and the languages that a user uses to communicate with them.
"Computer, I'd like to map all word based queries into a sequece of graphical icons" - Of course, there will also have to be a graphical way to "say" this command as well.
I actually believe, technically speaking, this could be done today.
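For instance, here is a rough sketch of that command (vocabulary entirely made up): a user-editable table that rewrites word-based queries into a sequence of graphical icons, and back again, so the same question can be "said" in either mode.

# Hypothetical, user-editable lexicon: words/phrases -> icon identifiers.
WORD_TO_ICON = {
    "states": "icon:state",
    "north of": "icon:north-arrow",
    "kentucky": "icon:Kentucky",
}

def words_to_icons(query):
    """Rewrite a word-based query as a sequence of graphical icons."""
    text = query.lower()
    return [icon for phrase, icon in WORD_TO_ICON.items() if phrase in text]

def icons_to_words(icons):
    """The reverse mapping, so the same command can be 'said' graphically."""
    reverse = {icon: phrase for phrase, icon in WORD_TO_ICON.items()}
    return " ".join(reverse[icon] for icon in icons)

if __name__ == "__main__":
    print(words_to_icons("What are the states north of Kentucky?"))
    # ['icon:state', 'icon:north-arrow', 'icon:Kentucky']
    print(icons_to_words(["icon:state", "icon:north-arrow", "icon:Kentucky"]))
    # states north of kentucky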
(Are there any more developments from the Voice Browser Activity, http://www.w3.org/voice? And is there an analogous Visual Browser Activity that coordinates efforts as to the types of questions asked or clicked on?)
I mean, is there a mapping from spoken query to icon-based query to haptic-based query?
Since haptic-based interfaces would only give relationships based on physical contiguity (meaning what feels next to where I am currently), it is not clear whether
queries based on physical nearness can capture all the advantages of seeing the "global" view of the map.
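A toy sketch of that contrast (border data made up for illustration): a contiguity-only interface has to probe neighbor by neighbor, while a global query over the same semantic database answers in one step.

# Toy border graph and one global relation (all data hypothetical).
BORDERS = {
    "Kentucky": ["Illinois", "Indiana", "Ohio", "Tennessee"],
    "Illinois": ["Kentucky", "Indiana"],
    "Indiana": ["Illinois", "Kentucky", "Ohio"],
    "Ohio": ["Indiana", "Kentucky"],
    "Tennessee": ["Kentucky"],
}
NORTH_OF = {"Kentucky": {"Illinois", "Indiana", "Ohio"}}

def haptic_probe(state):
    """Contiguity-only query: what feels 'next to' the current position."""
    return BORDERS[state]

def haptic_survey_north(start):
    """Gather 'everything north of start' one local probe at a time."""
    found, probes = set(), 0
    for neighbor in haptic_probe(start):
        probes += 1
        if neighbor in NORTH_OF.get(start, set()):
            found.add(neighbor)
    return found, probes

def global_north(start):
    """The 'global view': the same answer from a single database query."""
    return NORTH_OF.get(start, set()), 1

if __name__ == "__main__":
    print(haptic_survey_north("Kentucky"))  # three states found after 4 probes
    print(global_north("Kentucky"))         # the same three states in 1 query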
 
There was some research a few years ago (I haven't seen recent work) on earcons, 3D audio, and sonification.
Again, all very interesting and, I think, worth pursuing, but the
semantic database back end needs to be there, and mappings from devices/user agents need to be created.

<snippet>
This is a very interesting question.  I think, though, that we should
keep in mind that the -map- is rarely the content we are conveying;
instead it is the -information on the map-.  So when we consider
the options, we need to focus on conveying the content of the map,
not the form.
</snippet>
Yes!
<snippet>
You could print off, with a braille printer, a crude map -- I've
seen a map of the US, for example, at CSUN printed in this way.
But it may not be the best way to convey the info in the map.
</snippet>
Could be; it would depend on what is being conveyed.  I have not read the article, so I don't
know the intended scope of the application.

-Steve

Steve McCaffrey
Senior Programmer/Analyst
Information Technology Services
New York State Department of Education
(518)-473-3453
smccaffr@mail.nysed.gov
Member,
New York State Workgroup on Accessibility to Information Technology 
Web Design Subcommittee 
http://web.nysed.gov/cio/access/webdesignsubcommittee.html
