APA's Pronunciation Specification -- Can we talk?

Hi, Leonie:

As you will undoubtedly recall, APA has a task force working on
developing an overlay specification intended to provide correct
pronunciation from TTS engines wherever they are available and the page
author wants to ensure content is pronounced correctly. This work
has progressed through development of use cases, requirements, and a gap
analysis. It is now at the stage of starting a normative specification.

At a time convenient for you, could we perhaps discuss the path
forward for this specification on a regular Wednesday APA call? That
would be no earlier than January now, of course.

Our thought has been to mature the specification within APA, short of
an FPWD publication, before moving it to WICG for the final steps in
the process.

Part of our rationale for not fully developing the specification in APA
is that we have endeavored to develop an approach that can serve
non-accessibility scenarios as well as accessibility scenarios. We
believe we're on a good path to achieving that goal.

Please advise when we might discuss next steps and timelines involving
our several groups. Roy's email (attached) reminds me that I'm a bit
overdue sending this request to you.

Wishing you the happiest of holidays at year's end,

Janina


-- 

Janina Sajka

Linux Foundation Fellow
Executive Chair, Accessibility Workgroup:	http://a11y.org

The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Chair, Accessible Platform Architectures	http://www.w3.org/wai/apa

Forwarded message 1

  • From: Michael Cooper <cooper@w3.org>
  • Date: Wed, 18 Dec 2019 08:24:55 -0500
  • Subject: Re: [call for review][draft] Proposal: pronunciation technical approach
  • To: Roy Ran <ran@w3.org>, Janina Sajka <janina@rednote.net>, "Ali, Irfan" <iali@ets.org>
  • Message-ID: <60af90a3-44f9-2cbe-92fc-2eb37e8dc0a6@w3.org>
Dear Web Incubator Community Group -

I am approaching you from the Pronunciation Task Force [1], which belongs 
to the W3C Accessible Platform Architectures WG [2]. The goal of the Task 
Force is to provide normative specifications and best practices guidance 
for text to speech (TTS) synthesis, so that HTML content is pronounced 
properly despite ambiguous scenarios related to accents, homonyms, 
unknown words, etc. We have done some initial analysis in the APA WG, and 
now we think this technology might have broader use cases and 
user scenarios. In addition, we think it needs integration into the HTML 
language. The current proposal is to incorporate a subset of Speech 
Synthesis Markup Language (SSML) [3] into HTML, and we would like to 
explore this further in the WICG.
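For illustration only (the concrete integration syntax is precisely what we
hope to incubate, so nothing here is settled by the Task Force), embedding
SSML elements such as `phoneme` and `say-as` directly in HTML content might
look something like this:

```html
<!-- Hypothetical sketch: standard SSML 1.1 elements inlined in HTML.
     How (or whether) SSML vocabulary is integrated into HTML is the
     open question under incubation. -->
<p>
  The city of
  <phoneme alphabet="ipa" ph="ˈlimɑ">Lima</phoneme>
  was founded in
  <say-as interpret-as="date" format="y">1535</say-as>.
</p>
```

Here a TTS engine would speak the IPA pronunciation supplied in `ph` instead
of guessing at "Lima", and would read "1535" as a year rather than a number.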

To further introduce the goals of the pronunciation work, the 
Pronunciation Overview [4] describes the proposal for a mechanism to 
allow content authors to include pronunciation guidance in HTML content. 
The objective of the Pronunciation Task Force is to develop normative 
specifications and best practices guidance, collaborating with other W3C 
groups as appropriate, to provide for proper pronunciation in HTML 
content when using text to speech (TTS) synthesis. Such guidance can be 
used by assistive technologies (including screen readers and read aloud 
tools) and voice assistants to control text to speech synthesis. A key 
requirement is to ensure that the spoken presentation of content matches 
the author's intent and user expectations.

Explainer:

An explainer describing our proposal is available at:

  * https://github.com/w3c/pronunciation/blob/master/docs/explainer.md

At present, we have also released three supporting documents for this 
technology; all are W3C Note-track documents:

  * Pronunciation User
    Scenarios: https://www.w3.org/TR/pronunciation-user-scenarios/
  * Pronunciation Use Cases: https://www.w3.org/TR/pronunciation-use-cases/
  * Pronunciation Gap
    Analysis: https://www.w3.org/TR/pronunciation-gap-analysis/

Technical document:

Our technical document is Rec-track and is currently at the editor's 
draft stage. This is the document we hope to incubate further in 
WICG. The document is currently in the pronunciation repository; if WICG 
agrees to the proposal, we will move it to WICG's repo. The 
technical document is at:

  * https://w3c.github.io/pronunciation/technical-approach/

That is an overview of our work; we look forward to your 
reply. Thank you.

[1] https://www.w3.org/WAI/APA/task-forces/pronunciation/
[2] https://www.w3.org/WAI/APA/
[3] https://www.w3.org/TR/speech-synthesis/
[4] https://www.w3.org/WAI/pronunciation/

Best Regards,

Roy

On 10/12/2019 4:50 a.m., Roy Ran wrote:
>
> Hi all,
>
> Below is an email I drafted about our transfer to WICG; its purpose 
> is to ask WICG's chairs to take our work into their CG. Please help 
> review it.
>
> Meanwhile, since I am not a native speaker, please also help to review 
> the language part. Thank you very much.
>
> -------------------------------------------------------------------------------------------------------------------
> I am from the Pronunciation Task Force [1], which belongs to the W3C APA 
> WG [2]. Our Task Force's goal is to provide normative specifications and 
> best practices guidance so that text to speech (TTS) synthesis can 
> provide proper pronunciation of HTML content. We have done some initial 
> analysis work in the APA WG, and now we think this technology might have 
> broader use cases and user scenarios. We hope this work will get more 
> attention and input from the wider community, so I am emailing you to 
> ask whether our normative document could be incubated in WICG. 
> Below is some information about our Task Force and work.
>
> The Pronunciation TF became active in March 2019, and the 
> pronunciation proposal [3] is a mechanism to allow content 
> authors to include pronunciation guidance in HTML content. The 
> objective of the Pronunciation Task Force is to develop normative 
> specifications and best practices guidance, collaborating with other 
> W3C groups as appropriate, to provide for proper pronunciation in HTML 
> content when using text to speech (TTS) synthesis. Such guidance can be 
> used by assistive technologies (including screen readers and read 
> aloud tools) and voice assistants to control text to speech synthesis. 
> A key requirement is to ensure that the spoken presentation of content 
> matches the author's intent and user expectations.
>
> *Explainer:*
>
> There is an *explainer* describing our proposal, available at:
>
>   * https://github.com/w3c/pronunciation/blob/master/docs/explainer.md
>
> At present, we have also released three supporting documents for this 
> technology; all are W3C Notes:
>
>   * Pronunciation User
>     Scenarios: https://www.w3.org/TR/pronunciation-user-scenarios/
>   * Pronunciation Use Cases:
>     https://www.w3.org/TR/pronunciation-use-cases/
>   * Pronunciation Gap
>     Analysis: https://www.w3.org/TR/pronunciation-gap-analysis/
>
>
> *Technical document:*
>
> Our *technical document* is intended to become a W3C Recommendation and 
> is currently at the editor's draft stage. This is the normative document 
> which we hope will be edited in WICG in the future. The document is 
> currently in our repo; if WICG agrees to our proposal, we will move the 
> document to WICG's repo. The technical document is at:
>
>   * https://w3c.github.io/pronunciation/technical-approach/
>
> That is an overview of our work; we look forward to your 
> reply. Thank you.
>
>
> [1] https://www.w3.org/WAI/APA/task-forces/pronunciation/
>
> [2] https://www.w3.org/WAI/APA/
>
> [3] https://www.w3.org/WAI/pronunciation/
>
> Best Regards,
>
> Roy
>
>

Received on Wednesday, 18 December 2019 13:46:59 UTC