Multiple interfaces - a concrete example

I'd like to offer a concrete example of a case where multiple interfaces are
used today in commercial practice, and where a single interface would be a
worse solution.

My bank offers customers direct access to their account information.  This
information is stored in a giant database somewhere, and I am able to access
it via two entirely separate interfaces.  

One is a Web site, with a fairly typical 3-box table based layout, HTML
forms, graphical buttons, etc.  It has a lot of information on a single
screen.  Each form has 5-10 fields.  It has a persistent navigation bar at
the top, and another down the left side.  It's pretty and friendly, with
lots of colorful graphics, backgrounds, and branding elements that tie it to
the rest of the bank's promotional material.  This interface is optimized
for sighted users with limited computer experience using version 4 browsers
on desktop computers.  From a usability standpoint, it is a pretty good
example of an interface optimized for that audience.

The second is a menu-based automated telephone response system, allowing
selection of menu items via voice or touch-tone.  Each voice "screen" offers
a menu of 2-5 choices ("press or say 1 for deposits, 2 for withdrawals, 0 to
speak to an operator"), or asks for a single piece of input ("please enter
your checking account number, followed by the pound sign"), or reads some
information ("ATM withdrawal, October 5, $40.00").  The system uses recorded
voice for all its prompts, and assembles strings of numbers from recorded
elements.  This interface is optimized for voice interaction, and it is also
a pretty usable example.
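The shape of such a system is easy to picture in code.  Here is a minimal
sketch of the kind of hierarchical menu the telephone interface implies: each
node is either a sub-menu of a few numbered choices or a leaf prompt that gets
read aloud.  The labels and amounts are hypothetical stand-ins, not the bank's
actual menu text.

```python
# Hypothetical voice-menu tree.  Each node has a "prompt" to speak;
# sub-menus also map touch-tone keys to child nodes.  (Illustrative
# labels only -- not the real bank menu.)
MENU = {
    "prompt": "Press or say 1 for deposits, 2 for withdrawals, 0 for an operator.",
    "choices": {
        "1": {"prompt": "Deposit, October 3, $120.00."},
        "2": {"prompt": "ATM withdrawal, October 5, $40.00."},
        "0": {"prompt": "Please hold for an operator."},
    },
}

def navigate(menu, keys):
    """Follow a sequence of touch-tone keys; return the prompts spoken."""
    spoken = [menu["prompt"]]
    node = menu
    for key in keys:
        node = node["choices"][key]   # descend one menu level per keypress
        spoken.append(node["prompt"])
    return spoken

# A caller who presses "2" hears the top menu, then the withdrawal readout.
print(navigate(MENU, ["2"]))
```

The point the sketch makes is structural: each voice "screen" carries one
small decision, so the whole interface is a shallow tree of short prompts,
quite unlike a Web page that shows many fields at once.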

Now, I submit that replacing the telephone interface with a screen reader
reading the Web site in a synthesized voice would make for a worse interface
to the data than the existing voice system, *even if* the Web site were AAA
compliant.  I also submit that replacing the Web site with a plain-text
hierarchical menuing system (the text version of the telephone system) would
make for a worse and less usable interface for many people (including
sighted, cognitively disabled people).  The user of the voice system won't
know that there was a pretty picture of a mouse on the Web site, but he does
know his bank balance.  I don't think it's possible to design a single
interface that works as well for both modalities as the optimized interfaces
work for each.  

This isn't a novel idea, or even my idea.  According to T.V. Raman,
"applications that _talk_ and _listen_ need to be designed from the start to
take advantage of the spoken medium; spoken interaction is _different_ from
and in many ways _complementary_ to traditional visual interaction."
(Auditory User Interfaces, pg 1. _emphasis_ interpreted from print-version
italics)  The technology exists right now to allow users to pick an
interface that is designed for their modality.  This is a GoodThing.

Received on Tuesday, 31 October 2000 16:53:50 UTC