RE: Helping Jaws (et al) pronounce our company name

From: John Foliot - WATS.ca <foliot@wats.ca>
Date: Wed, 24 Sep 2003 16:47:51 -0400
To: <w3c-wai-ig@w3.org>
Message-ID: <GKEFJJEKDDIMBHJOGLENIECMEFAA.foliot@wats.ca>

This is indeed an interesting thread.

After reading about SSML, it is still somewhat unclear to me *how* it would
apply to rendered HTML (XHTML?), especially in the context of providing
enunciation/pronunciation "guides" for unusual or "made-up" words.  As well,
while SSML is a draft XML language, like most XML languages today it
requires (or will require) an interpreter application - in this case a
speech synthesizer - to decode the marked-up content, something that is not
native to the basic off-the-shelf web browser.

Another rambling thought... as Phill points out, Aural Style Sheets are for
describing the audio qualities of the spoken word, not the pronunciation...
for example, the word "Flugzeug" (German for airplane) could be pronounced
softly by a woman's voice, shrilly by a child's voice or authoritatively by
a man's voice via ACSS, but *how* to pronounce it is not defined via ACSS,
nor can it be in the current ACSS specification.
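
For illustration, here is a sketch of what ACSS *can* control, using
properties from the CSS2 aural appendix - the class name and the particular
values are my own invention:

	.german {
	  voice-family: female;   /* who speaks: a generic female voice */
	  volume: soft;           /* how loudly "Flugzeug" is spoken */
	  pitch: medium;          /* at what pitch... */
	  speech-rate: slow;      /* ...and how quickly - but never *how*
	                             the word itself is pronounced */
	}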

Which gets me to thinking... the two foremost screen readers are more than
web browsers; in fact, they are stand-alone applications which interoperate
with "standard" Windows applications, including (but not limited to)
Internet Explorer.  I suppose they could be configured to interpret an SSML
document... however, the nature of that SSML document then comes into
question.  Would it be a complete rendering of the XML data on any given
page (an alternative page, as it were)?  Would it accept or ignore <object>
elements (including images)?  Or would it simply be a "glossary" document
which provides, perhaps, definition, pronunciation, linguistic root or
what-have-you?  (This is what I would envision...)
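
To illustrate, such a glossary might look something like this - the element
names (speak, phoneme) come from the SSML draft, but the one-entry-per-word
"glossary" arrangement and the IPA string are strictly my own guesswork:

	<?xml version="1.0" encoding="UTF-8"?>
	<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis">
	  <!-- pronunciation spelled out in a phonemic alphabet -->
	  <phoneme alphabet="ipa" ph="ˈfluːktsɔʏk">Flugzeug</phoneme>
	</speak>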

Tom Croucher suggested that "[p]erhaps the best solution would be to allow
authors a pronunciation 'style sheet' if you will", which I believe is on
the right track; create an SSML document to fulfill this function.  I would
then suggest associating said guide with a document or site via the <link>
element:

	<link rel="Pronunciation" href="foobarsite.ssml">

This would have the added benefit of backward compatibility,
interoperability with *any* user agent, etc.  By linking the pronunciation
guide to the (X)HTML document, it becomes available to all users, not just
those using screen readers.  It also (I believe) addresses the issues
better than adding a number of <meta> tags (as has been suggested), since,
like CSS style sheets, one guide could be associated with an entire site,
rather than going page-by-page.
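
To make the parallel with CSS explicit, a document <head> might then carry
both links side by side - noting that rel="Pronunciation" is not a
recognized link type today, and the MIME type below is purely a guess:

	<head>
	  <title>Example page</title>
	  <link rel="stylesheet" type="text/css" href="site.css">
	  <link rel="Pronunciation" type="text/xml" href="foobarsite.ssml">
	</head>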

JF
--
John Foliot  foliot@wats.ca
Web Accessibility Specialist / Co-founder of WATS.ca
Web Accessibility Testing and Services
http://www.wats.ca   1.866.932.4878 (North America)


> >No tags exist for this but one thing that might help is aural css.
>
> Aural CSS is for styling the speaking, Loud, soft, gender, etc. - not for
> pronunciation.
>
> There is a draft of the Speech Synthesizer Markup language
> http://www.w3.org/TR/speech-synthesis/#S2.1.4
> that covers "say-as" and "phonemes".
>
> And, ACSS is not well (if at all) supported, and my point is that it was
> not designed so that the author/developer could specify how it should
> pronounce the particular word.  Some non-assitive technology
> voice browsers
> are supporting SSML, VoiceXML [see note1], etc.  The screen reader
> developers do not seem to be following the voice browser paradigms.
>
> note 1 VoiceXML implementation report
> http://www.w3.org/Voice/2003/ir/voicexml20-ir.html
>
> Regards,
> Phill Jenkins,  IBM Research, Accessibility Services
> http://www.ibm.com/able
>
Received on Wednesday, 24 September 2003 16:47:57 GMT
