Inline Speech Synthesis for Clients

John C. Mallery (JCMA@ai.mit.edu)
Tue, 21 Mar 1995 16:22:06 -0500


Message-Id: <9503212139.AA08118@www10.w3.org>
Date: Tue, 21 Mar 1995 16:22:06 -0500
To: Multiple recipients of list <www-html@www10.w3.org>
From: JCMA@ai.mit.edu (John C. Mallery)
Subject: Inline Speech Synthesis for Clients

Is there an advertised way to pass a text string to a client and have the
client synthesize speech from it?

Most standard computers have speech synthesizers available these days.
Transferring a text string for local synthesis saves bandwidth and pushes
the computation out to the clients.

I was considering using this in a question answering system. I would like
to return both HTML and some text for the synthesizer to say on display of
the HTML page.
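For concreteness, something along these lines, where the SPEECH element and
its TEXT attribute are purely invented here and not part of any draft I know
of:

```html
<!-- Hypothetical markup: SPEECH and TEXT are invented for illustration. -->
<HTML>
<HEAD>
<TITLE>Answer</TITLE>
<!-- Spoken by the client's local synthesizer when the page is displayed -->
<SPEECH TEXT="The answer to your question follows below.">
</HEAD>
<BODY>
<H1>Answer</H1>
<P>The answer to your question follows below.
</BODY>
</HTML>
```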

A more elaborate idea would associate inline speech with positions in the
document such that when they became visible (via scrolling) they would be
queued for synthesis.
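Again purely hypothetically, one could imagine SPEECH as an inline element
whose contents are queued for the synthesizer when the surrounding text
scrolls into view:

```html
<!-- Hypothetical: contents queued for synthesis on becoming visible. -->
<P>Section 3 describes the inference engine.
<SPEECH>You are now reading the section on the inference engine.</SPEECH>
```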

If the mechanism provides a means to specify the rendering, one can perhaps
control the voices via a generic mapping, and one might use the same
facility for non-speech inline audio.
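A generic mapping might look something like the following, where the VOICE
attribute names an abstract role that each client maps onto whatever voices
(or sounds) it actually has; all names here are invented for illustration:

```html
<!-- Hypothetical rendering hints; VOICE values are abstract roles. -->
<SPEECH VOICE="narrator">Welcome back to the question answering system.</SPEECH>
<SPEECH VOICE="alert">A new answer is available.</SPEECH>
```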

Where do we stand on this for HTML 3?