Re: maps and alternate content

In the draft note on accessibility features in SVG -
http://www.w3.org/1999/09/SVG-access - we have looked at using RDF in images
to provide information such as the relationships that you describe (well, our
example is a lot simpler, but it is there as an example). Since SVG can
create images from components, and useful things can be known or deduced
about them (like their relative positions, how they are connected,
etc.), it may serve this kind of goal very well (and it is already XML, with
the added bonus of being able to be represented graphically).
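
For example (a sketch of my own rather than the note's actual example - the
geo vocabulary is invented purely for illustration), a map could carry such
relationships inside the image itself:

  <svg xmlns="http://www.w3.org/2000/svg">
    <metadata>
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:geo="http://example.org/geo#">
        <rdf:Description rdf:about="#ohio">
          <geo:northOf rdf:resource="#kentucky"/>
        </rdf:Description>
      </rdf:RDF>
    </metadata>
    <path id="kentucky" d="..."/>
    <path id="ohio" d="..."/>
  </svg>

A user agent that understood the RDF could then answer "what is north of
Kentucky?" directly from the image, with no external database required.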

Cheers

Charles McCN

On Thu, 20 Jul 2000, Steven McCaffrey wrote:

  
  Kynn, I agree completely with your statement (snippets below); it is what I too emphasized in the "How To describe flowcharts, ..." thread back in Aug. of 1999.
  I'd like to see developments in an interactive XML-based, database-driven system where I could ask questions like
  "What are the states north of Kentucky?" or
  "How many miles is it from City1 to City2?" - just to pick a few rather trivial questions.  As Len Kasday said in the thread on describing flowcharts, the system should go beyond requiring the user to formulate questions and should give the user suggestions
  as to what kinds of relationships exist or what kinds of information can be retrieved.
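  Just to make the idea concrete, such a query might look something like this (the syntax below is entirely invented - a sketch, not any existing query language):

    <query>
      <find type="state"/>
      <where relation="northOf" of="Kentucky"/>
    </query>

  The answer would come from relationships stored in the data, not from anyone's prose description of the picture.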
    Of course, those with disabilities affecting comprehension of text would need a visual interface, so both are really required.  The haptic mouse is an interface which gives me yet another non-linguistic way to "ask questions".
  In all cases, though, it seems to me a solid, high-level, database-driven system behind the scenes is needed, independent of the input and output modes.  By analogy with CSS, there probably need to be Device Interface sheets describing the mapping from the operations of the device to database queries (already under way - e.g. XML Query? and WAP).
  That is, semantic content and device/mode are not mutually exclusive but a two-part whole.
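  For instance, a Device Interface sheet for a haptic mouse might look something like this (the format, again, is pure invention on my part):

    <device-interface device="haptic-mouse">
      <map operation="bump-left-edge" query="adjacent-region-west"/>
      <map operation="press-and-hold" query="describe-current-region"/>
    </device-interface>

  A different sheet could bind the same back-end queries to spoken commands or to clicks on icons.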
  If there are researchers out there who want to give another mode of access a try, as long as I can get the same information out, great.
  Dr. Raman's Emacspeak and ASTER give another interface (speech) but at a much higher level, with the ability for the user to customize higher levels (in effect allowing the user to create his/her own query language).  That's what really needs to be done - customizable, extensible user agents and devices, and the languages that a user uses to communicate with them.
  "Computer, I'd like to map all word based queries into a sequece of graphical icons" - Of course, there will also have to be a graphical way to "say" this command as well.
  I actually believe, technically speaking, this could be done today.
  (Any more developments from the Voice Browser Activity - http://www.w3.org/voice - and is there an analogous Visual Browser Activity which coordinates efforts as to the types of questions asked or clicked on?)
  I mean, is there a mapping from spoken query to icon-based query to Haptic-based query?
  Since haptic-based interfaces would only give relationships based on physical contiguity - meaning what feels next to where I am currently - it is not clear whether
  queries based on physical nearness can capture all the advantages of seeing the "global" view of the map.
   
  There was some research a few years ago (I haven't seen recent work) on earcons, 3D audio, and sonification.
  Again, all very interesting and I think worth pursuing, but the
  semantic database back end needs to be there, and the mappings from devices/user-agents need to be created.
  
  <snippet>
  This is a very interesting question.  I think, though, that we should
  keep in mind that the -map- is rarely the content we are conveying;
  instead it is the -information on the map-.  So when we consider
  the options, we need to focus on conveying the content of the map,
  not the form.
  </snippet>
  Yes!
  <snippet>
  You could print off, with a braille printer, a crude map -- I've
  seen a map of the US, for example, at CSUN printed in this way.
  But it may not be the best way to convey the info in the map.
  </snippet>
  Could be; it would depend on what is being conveyed.  I have not read the article, so I don't
  know the intended scope of the application.
  
  -Steve
  
  Steve McCaffrey
  Senior Programmer/Analyst
  Information Technology Services
  New York State Department of Education
  (518)-473-3453
  smccaffr@mail.nysed.gov
  Member,
  New York State Workgroup on Accessibility to Information Technology 
  Web Design Subcommittee 
  http://web.nysed.gov/cio/access/webdesignsubcommittee.html
  
  

--
Charles McCathieNevile    mailto:charles@w3.org    phone: +61 (0) 409 134 136
W3C Web Accessibility Initiative                      http://www.w3.org/WAI
Location: I-cubed, 110 Victoria Street, Carlton VIC 3053
Postal: GPO Box 2476V, Melbourne 3001,  Australia 
