Natural Language: Beyond the Conversation

Reference:

  Technology Review: Natural Language: Beyond the Conversation
  http://technologyreview.com/articles/print_version/focuson0603_banter.asp

Dave Poehlman found this.  Very relevant to our natural language usage
assistance topic.

Compare with

  Natural Language Usage
   -- Issues and Strategies for Universal Access to Information
  http://www.w3.org/WAI/PF/usage/languageUsageAndAccess.html

--

Natural Language: Beyond the Conversation
Software that analyzes verbal expression is helping computers deal
intelligently with e-mail, audio and video recordings, and other
"unstructured" information.

By Wade Roush
June 2003

Natural language processing is the fashionable term for the study of
software that allows people to interact with computers the same way they
interact with other people: through language. Many of the splashiest
commercial uses for this kind of computing revolve around spoken-language
interactions -- automated customer support over the phone, for example. But
"language" doesn't always mean live speech. In fact, techniques similar to
those being used to manage phone calls are helping computers deal more
intelligently with almost any form of digitally stored expression, including
e-mail, audio and video recordings, and the billions of documents on the Web
and on corporate intranets.

For many big companies, coping with the daily onslaught of customer e-mail
can be just as daunting as answering thousands of phone calls. Banter, with
bases in San Francisco and Jerusalem, has developed natural-language
software that helps businesses sort incoming e-mail faster -- which means
getting customers the information they want sooner. Banter's system first
analyzes the grammatical structure of a text message, classifying it as a
question, a complaint, or spam. It even identifies emotional cues such as
exclamation points that could signify an angry customer who needs special
treatment. Then the software deduces the general subject of the message, by
extracting domain-specific content -- words like loan, account, or overdraft
in a letter to a bank, for example -- and using statistical algorithms to match
those words against a database of previous inquiries.
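
To make that pipeline concrete, here is a minimal sketch in Python of an
e-mail triage step of the kind described: classify the message, check for
emotional cues such as exclamation points, extract domain terms, and match
them against earlier inquiries. The category cues, the bank vocabulary, and
the overlap score are all invented for illustration; they merely stand in for
Banter's grammatical analysis and statistical matching.

  # Illustrative sketch only -- not Banter's actual algorithm.
  from collections import Counter
  import re

  DOMAIN_TERMS = {"loan", "account", "overdraft", "balance", "interest"}  # assumed bank vocabulary

  PREVIOUS_INQUIRIES = {  # hypothetical routing profiles built from past e-mail
      "loans": Counter({"loan": 3, "interest": 2}),
      "accounts": Counter({"account": 3, "overdraft": 2, "balance": 1}),
  }

  def classify(message: str) -> str:
      """Very rough stand-in for the grammatical classification step."""
      text = message.lower()
      if "unsubscribe" in text or "free offer" in text:
          return "spam"
      if "?" in text or text.startswith(("how", "what", "when", "can")):
          return "question"
      return "complaint"

  def is_angry(message: str) -> bool:
      """Emotional cue the article mentions: exclamation points."""
      return message.count("!") >= 2

  def extract_terms(message: str) -> Counter:
      """Pull out the domain-specific content words."""
      words = re.findall(r"[a-z]+", message.lower())
      return Counter(w for w in words if w in DOMAIN_TERMS)

  def route(message: str) -> str:
      """Match extracted terms against prior inquiries by simple overlap."""
      terms = extract_terms(message)
      if not terms:
          return "general"
      scores = {dept: sum((terms & profile).values())
                for dept, profile in PREVIOUS_INQUIRIES.items()}
      return max(scores, key=scores.get)

  msg = "My account shows an overdraft I never made!! Fix this now!"
  print(classify(msg), is_angry(msg), route(msg))  # complaint True accounts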

The result: human help-desk operators no longer have to read every e-mail
message in detail, but merely rubber-stamp the Banter system's choice. They
can either forward the message to the right person or department, or select
a canned response. At clients as diverse as Wells Fargo and Nintendo, the
software has tripled the amount of e-mail agents can handle each day,
according to founder and chief technology officer Yoram Nelken. With its
software already built into e-mail management systems from Siebel, Avaya,
and others, Banter is the leading provider of e-mail analysis software.
"People have seen there is real value in this technology," Nelken says. "It
isn't theoretical anymore."

Banter's system helps people sort through the information coming at them.
Other companies, meanwhile, are turning the technology around to help users
search the stored data on corporate and public computer networks, whether it
be text, numerical data, or multimedia content. Washington, D.C., startup
StreamSage, for example, is seeking to enable searches of audio-visual data
without the need to transcribe and index it. "Streaming media has been used
on the Internet for a long time now," says Tim Sibley, co-founder and chief
scientist at StreamSage. "So here we have all this media -- but how do we make
use of it?" An early customer is Harvard Medical School, which has been
using the Web to broadcast streaming video of its classes for the last two
years and will soon employ StreamSage's system to make its video archives
searchable. The software clarifies the meaning of ambiguous nouns and noun
phrases in video recordings by inferring trends in the way they are used in
a large database of examples. For instance, the program can judge whether
the word "Java" indicates the island, the programming language, or the
beverage, based on context. A competing company, Fast Talk Communications of
Atlanta, GA, sells software that uses a simpler method to search audio or
video files: it ignores context and meaning and merely scans for a given
string of phonemes, or word sounds. But it does this blazingly fast. The
company claims its system can search 20 hours of audio or video in one
second (see " The Grammar of Sound, ").
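
The "Java" example can be illustrated with a toy disambiguation routine. The
sketch below is not StreamSage's method: the per-sense context words are
hand-written here, whereas a real system would infer them from a large
database of examples. But the scoring idea -- pick the sense whose typical
context overlaps most with the surrounding words -- is the same.

  # Toy context-based word-sense disambiguation, for illustration only.
  from collections import Counter
  import re

  SENSE_CONTEXTS = {  # hypothetical context words per sense of "Java"
      "island":   Counter({"indonesia": 5, "volcano": 3, "jakarta": 4}),
      "language": Counter({"code": 5, "class": 4, "compile": 4, "programming": 5}),
      "beverage": Counter({"coffee": 5, "cup": 4, "roast": 3}),
  }

  def disambiguate(sentence: str, target: str = "java") -> str:
      """Score each sense by overlap between the sentence and that sense's
      typical context words, then pick the best-scoring sense."""
      words = Counter(re.findall(r"[a-z]+", sentence.lower()))
      words.pop(target, None)  # the ambiguous word itself carries no signal
      scores = {sense: sum((words & ctx).values())
                for sense, ctx in SENSE_CONTEXTS.items()}
      return max(scores, key=scores.get)

  print(disambiguate("The lecture shows how to compile a Java class"))  # language
  print(disambiguate("We drank a cup of strong Java after dinner"))     # beverage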

People may also be ready for a better way to search the Internet.
Traditional search engines like Google may be speedy, but questions phrased
in everyday language make them sputter. iPhrase Technologies of Cambridge,
MA, has built software that uses both grammatical and statistical techniques
to decode typed search requests and translate them into highly tuned
database queries. The request "List biotech companies in California with >
$5 million sales," for example, produces a roster of 68 companies-complete
with stock symbols and links to financial performance charts. According to
iPhrase chief technology officer Raymond Lau, one client, the Directory of
Corporate Affiliations, found that usage of its online database doubled
after the company replaced its old search software with iPhrase's
technology. "Since it's a subscription-based service, that's a dramatic
increase in revenue," Lau says.

In some ways, Lau says, natural-language companies that focus on non-speech
information have it easier than voice-services firms like Nuance and
SpeechWorks. "We don't have as many constraints in how we present the
information back to the user, so in that sense it's much easier than doing
it over the phone," says Lau. On the flip side, he says, "we are dealing
with much less structured data." He notes that some 80 to 90 percent of the
information on corporate networks and the Internet -- technical manuals,
word-processing documents, PowerPoint presentations and the like -- is
unstructured, meaning it hasn't been stored in a database and indexed for
easy access.

That makes it essential for natural-language systems to understand not just
the literal words in a search query, but also the query's meaning.
Otherwise, relevant data phrased in slightly different words might be
overlooked. That's why IBM is rushing just as fast as smaller companies
like iPhrase to develop software that can sift through unstructured data
more intelligently. The company's Unstructured Information Management
Architecture, under construction since 2002, provides a way to annotate
language so that many different types of natural-language software can work
together to extract meaning from documents.
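
A toy example of why literal keyword matching falls short, and how even a
crude meaning-oriented step recovers a document phrased in different words.
The document snippets and the synonym table below are invented purely for
illustration; real systems infer such relationships rather than listing them
by hand.

  # Literal search vs. a search expanded with related terms.
  DOCS = {
      "doc1": "instructions for restarting the server after a crash",
      "doc2": "quarterly sales figures for the northeast region",
  }

  SYNONYMS = {"reboot": {"restart", "restarting"}, "manual": {"instructions", "guide"}}

  def literal_search(query: str) -> list:
      """Return documents containing any of the query words verbatim."""
      terms = query.lower().split()
      return [d for d, text in DOCS.items() if any(t in text for t in terms)]

  def expanded_search(query: str) -> list:
      """Expand the query with related terms before matching."""
      terms = set(query.lower().split())
      for t in list(terms):
          terms |= SYNONYMS.get(t, set())
      return [d for d, text in DOCS.items() if any(t in text for t in terms)]

  print(literal_search("reboot manual"))    # [] -- the literal words never appear
  print(expanded_search("reboot manual"))   # ['doc1']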

This architecture will enable the company to combine natural-language
processing, information retrieval, and other techniques "to make it easier
to analyze unstructured information -- to find the relevant knowledge and
organize and deliver it," says David Ferrucci, IBM's main architect behind
the initiative. Prototype software using the architecture is already up and
running, says Ferrucci. In one application -- automated translation -- the
architecture is used to annotate sentences in mid-translation, so that IBM
translation programs designed according to different principles of natural
language processing can build on each other's results. The eventual goal: to
help Internet users find relevant unstructured information in many tongues.
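
The sketch below illustrates the general shape of such an architecture as
described above: independent components share one document and cooperate by
reading and adding typed annotations. The class names and the two toy
components are invented for illustration; they are not IBM's actual
interfaces.

  # A hedged sketch of an annotation-based analysis pipeline.
  from dataclasses import dataclass, field

  @dataclass
  class Annotation:
      kind: str      # e.g. "sentence", "language", "translation"
      start: int     # character offsets into the document text
      end: int
      value: str

  @dataclass
  class Document:
      text: str
      annotations: list = field(default_factory=list)

  def sentence_splitter(doc: Document) -> None:
      """First component: mark sentence boundaries."""
      start = 0
      for i, ch in enumerate(doc.text):
          if ch == ".":
              doc.annotations.append(Annotation("sentence", start, i + 1,
                                                doc.text[start:i + 1].strip()))
              start = i + 1

  def language_tagger(doc: Document) -> None:
      """Second component: builds on the sentence annotations and tags each one."""
      for ann in [a for a in doc.annotations if a.kind == "sentence"]:
          lang = "de" if "der" in ann.value.lower() else "en"  # toy heuristic
          doc.annotations.append(Annotation("language", ann.start, ann.end, lang))

  doc = Document("Der Hund schläft. The cat is awake.")
  for component in (sentence_splitter, language_tagger):  # a simple pipeline
      component(doc)
  for a in doc.annotations:
      print(a.kind, a.value)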

Together with real-time speech processing technologies, the profusion of
automated methods for understanding stored language -- whether e-mail, video
lectures, or esoteric texts -- will transform the way we interact with
computers, creating a truly natural interface.

--

Wade Roush is a Senior Editor at Technology Review.

Received on Wednesday, 28 May 2003 12:42:40 UTC