Re: server side images for geographic mapping data

First, a disclaimer.  This is not an interpretation of what you need to do to
comply with Section 508.  W3C does technology, not policy, and the Section 508
regulations set policy for your work.  Nor is this a WAI or W3C statement on
your question.

Second, I agree with Rob that the Section 508 implementation activities in the
U.S. Government should have a process for answering questions.  I believe they
may already have one or be developing one.  I hope we of the WAI can find ways
to fit into that process in a constructive way.

This disclaimer aside, I will offer my personal short list of technology
assessment and programmatic recommendations for USGS.

First, you need two programmatic threads.  You need one to establish an
initial operating capability that conforms to Section 508 in terms of what is
readily achievable at the time that capability goes live.  But you need to
start now on the upgrade sequence for what you do after that, because the
initial capability is less than what one would want, and the readily
achievable capability is improving every month.  Your detailed technical plans
for what you publish should be evolving rapidly toward more effective map
serving methods from the moment your IOC goes online.

Call the thread to get to an initial, conforming operational capability the
IOC thread, and the longer-term thread the Geography Access thread.

IOC thread (immediate requirements):

All wayfinding and location-sensitive information retrieval dialogs available
through the website must be accessible to Section 508 standards.

Review what services you are offering in this verbally articulable form.  Here
"verbal" includes the use of one or two geographic location reference points. 
Benchmark your service offerings against what is available from other sources,
such as the TellMe and BeVocal voice portals.  If they are doing it, it is
viable in an audio dialog.  If it is readily achievable to serve such a
service off the GIS resources you hold, you have to ask yourselves why you are
not offering that service.
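
To make that concrete, here is a minimal Python sketch of the kind of
articulable place-name query a voice portal or screen-reader user could use. 
The gazetteer contents, field names, and function name are illustrative
assumptions, not a description of any existing USGS service.

    # A toy gazetteer; entries and field names are invented for illustration.
    gazetteer = {
        "sioux falls": {"lat": 43.5446, "lon": -96.7311, "state": "South Dakota"},
    }

    def answer_location_query(place_name):
        """Return a sentence a voice portal or screen reader can present as-is."""
        record = gazetteer.get(place_name.strip().lower())
        if record is None:
            return "No feature named %s was found." % place_name
        return "%s is at latitude %.4f, longitude %.4f, in %s." % (
            place_name, record["lat"], record["lon"], record["state"])

    print(answer_location_query("Sioux Falls"))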

Geography Access thread (longer-term program):

Pool the U.S. Government stakeholders who are interested in this issue.  This
includes the intelligence community, map maintenance, and
geographically-related information-serving organizations.  I don't know who
else, but there are probably lots of them.

Collaborate with Industry and Academe.  The Open GIS Consortium is a likely
candidate.  Talk to Bill LaPlant at Census about his academic contacts
interested in a Digital Government initiative in this area.

Make the integration of pedestrian-scale information a priority.  What is
missing in the wayfinding applications of today is information for people on
foot, not in cars.  Pedestrian-scale information will encourage the use of
public transit and reduce pollution and energy consumption.  It's good policy. 
The capability is all there in the resource blending and brokering technology
in research activities such as the Data Intensive Computing Environments
thread of the National Partnership for Advanced Computational Infrastructure,
sponsored by the U.S. National Science Foundation.

The critical pieces of technology are level-of-detail filtering in the GIS
resource and Diagram Access technology on the client side.  Printing to a
Tiger tactile embosser today would be a big step forward for map access, and
all it requires is a little level-of-detail filtering.
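
As a rough illustration of what that filtering might look like on the server,
here is a minimal Python sketch that keeps only the most prominent features
for a coarse rendering such as a tactile print.  The Feature class, the rank
scale, and the sample data are assumptions made up for the example.

    class Feature:
        def __init__(self, name, kind, rank):
            self.name = name   # e.g. "Interstate 90"
            self.kind = kind   # e.g. "road", "city", "stream"
            self.rank = rank   # 1 = most prominent; larger = finer detail

    def filter_by_detail(features, max_rank):
        """Keep only features prominent enough for the requested detail level."""
        return [f for f in features if f.rank <= max_rank]

    features = [
        Feature("Interstate 90", "road", 1),
        Feature("Sioux Falls", "city", 1),
        Feature("County Road 121", "road", 4),
    ]
    # A tactile print run might use a coarse level, say max_rank=2:
    for f in filter_by_detail(features, 2):
        print(f.kind + ": " + f.name)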

The one thing that is known about diagram access is that it benefits from an
absolute-coordinate pointing device.  Whether you use a tablet, a haptic
mouse, or a standard mouse, the driver should be in absolute-coordinate mode,
so that each position of the pointing device corresponds to a specific
pointing ray in the virtual world.  Standard mouse drivers operate in a
floating or relative coordinate frame.  This should be absolute in an
eyes-free user interface for access to 2D spatial data.
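
For illustration, here is a minimal sketch of the absolute mapping: the same
physical spot on the tablet always corresponds to the same point in the map
frame.  The function name and the bounding box are assumptions invented for
the example; this is not a description of any particular driver API.

    def tablet_to_map(x_norm, y_norm, bbox):
        """Map a normalized tablet position (0..1, 0..1) to (lon, lat) in bbox."""
        west, south, east, north = bbox
        lon = west + x_norm * (east - west)
        lat = north - y_norm * (north - south)   # tablet y usually grows downward
        return lon, lat

    bbox = (-104.06, 42.48, -96.44, 45.95)       # rough South Dakota extent
    print(tablet_to_map(0.5, 0.5, bbox))         # center of tablet -> center of map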

None of the required technology is absent in the state of the art if you look
in the research community.  It is a question of maturing this technology and
disseminating it.  Consider, for example, the mouse-plus-audio interface that
was revealed within the last month on this list.  Dig it out of the archives. 
Contact the author.  Try it.  But these are activities that should be
undertaken by the Federal interest pool, not just by USGS acting alone.  Check
out the work at Oregon State and the University of Toronto.  Experiment with
tablets, standard mice, and haptic mice.

Realize that you could very soon have an applet that plays accessible SVG maps
if you can do the level-of-detail filtering and conversion to SVG on the
server side.  That is a small matter of programming; there is almost no
technical risk in bringing this up to even haptics, at which point it starts
to get really accessible and useful.  But don't neglect articulable Q&A
dialogs in the process.  That is where the accessible web interface to GIS
starts, not with any maps or pointing.  That should be advanced even as the
pointing-based interfaces are matured.
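
As a sketch of what that server-side conversion could emit, consider SVG
fragments that carry their own text equivalents in a title element, so a
client applet has something to speak for each feature.  The feature data and
function name below are invented for the example.

    def feature_to_svg(name, kind, points):
        """Emit an SVG path that carries its own title as a text equivalent."""
        d = "M " + " L ".join("%g %g" % (x, y) for x, y in points)
        return '<path d="%s" class="%s"><title>%s</title></path>' % (d, kind, name)

    body = feature_to_svg("Interstate 90", "road", [(0, 40), (120, 38), (240, 45)])
    print('<svg xmlns="http://www.w3.org/2000/svg" width="240" height="80">'
          + body + '</svg>')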

Here I have just hit a few high spots.  But there is serious engineering work
to be done in this area.  There are better things that could be done but are
not reasonable to expect in an IOC.  That does not mean it is reasonable to
fail to do them for much longer.

Some of what you hold is indeed images.  But most of what you are serving is
not.  City names, the connectivity of roads and cities, and the like are all
discrete property data that could be accessible.  And we are not too far from
the point where the basic 2D canvas is accessible, but we can't let that dream
interfere with putting as much as possible of the property data into a form
that works with the screen readers already in the field.
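
For example, road-and-city connectivity can be served as plain sentences that
today's screen readers handle with no special client at all.  A minimal
sketch, with made-up adjacency data:

    connections = {
        "Sioux Falls": ["Brookings", "Mitchell", "Sioux City"],
        "Mitchell": ["Chamberlain", "Sioux Falls"],
    }

    def describe_connections(city):
        neighbors = connections.get(city, [])
        if not neighbors:
            return "%s: no connecting routes on record." % city
        return "%s connects to %s." % (city, ", ".join(neighbors))

    for city in sorted(connections):
        print(describe_connections(city))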

Even the images are presently processable by image understanding software that
can extract articulable property data and the reference grids that the
properties are spread on.  This is amenable to prioritized entification and
level-of-detail filtering.  So the frontier of how much of the information can
readily be made accessible is a moving flame front, not a static surface in
information space.

Al

At 10:16 AM 2001-01-18 -0600, Schuur, Shawn M wrote:
>Hi, I work for the USGS at EROS Data Center.
>
>We have a huge problem and I was looking for some feedback on what to do for
>accessibility and Section 508.  
>
>We are a satellite and aerial photography organization that generates our
>online maps through server-side maps that are dynamically created via a map
>server based on latitude and longitude.   
>
>Obviously a blind user would not be able to see the map or know what they
>were clicking on and keyboard strokes are limited for navigation.  We do
>have alternative searching mechanisms such as searching by a place name or
>by typing in the coordinates, but it is necessary for the map server to
>dynamically map out the chosen coordinates.
>

This is not true.  You can create an environment where the blind user
understands pointing, with either audio or haptics, or even just fixed
coordinates on a tablet.  But you have to do a little more to make it worth
their while.  The Geography Access thread I mentioned above should be
benchmarking current technology solutions with real-user testing NOW, because
there are technical solutions that work as far as the user interface is
concerned.

It is true that the User Interface technology involved is not widely
disseminated among this user community, the way screen readers are.

>What do we need to do to make it accessible? 
>
>http://edcsns17.cr.usgs.gov/EarthExplorer/
>
>http://edc.usgs.gov/Webglis/glisbin/finder_main.pl?dataset_name=MAPS_LARGE
>
>http://edc.usgs.gov/doc/edchome/ndcdb/ndcdb.html
>
>Thank you,
>Shawn Schuur
>EROS Data Center
>(605)594-2776
>  

Received on Thursday, 18 January 2001 14:02:42 UTC