RE: Draft Comment for Discussion -- a11y-review issue 232

Hello all,

 

Thanks everyone for your comments. I agree with all the feedback that
developers should bear in mind that users may be using TTS or AT, and that
they may have specific routing and volume management requirements as a
result. As the spec mentions, the underlying platform often has ways to
handle this.

 

The key thing, however, is that we don't want the page itself to be aware
that TTS or other AT is being used - the browser may know this, but it must
not be exposed to the page. The status quo, in which the OS manages the
volumes of different sources (the entire browser app being one), is
compatible with that privacy principle, and we wouldn't want to change it.

 

Most of the spec seems to be focused on how different sources (and sinks) of
audio within the page would interact with each other, and again that seems
fine to me.

 

What isn't clear to me from reading the spec is whether it also covers
something like the page giving hints to the OS, via the browser app, about
how the OS should manage volumes. If it does, we can still prioritise AT
simply by having the browser ignore such hints when it knows AT is running.
As that's not clear to me, it seems we should ask.
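
For concreteness, here is a minimal sketch of the kind of hint I mean. It
assumes the AudioSession shape in the current draft (navigator.audioSession
with a "type" attribute), and I may well be misreading the intended scope,
which is exactly the question:

  // Sketch only: assumes navigator.audioSession with a "type" hint,
  // per the current Audio Session API draft.
  const session = (navigator as any).audioSession;
  if (session) {
    // The page declares its audio as "ambient" (incidental), so the
    // user agent, and via it the platform mixer, may treat it as lower
    // priority when mixed with other audio, including AT speech.
    session.type = "ambient";
  }

Note that the hint is categorical rather than a numeric volume, which seems
to leave the browser free to interpret or ignore it.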

 

So, regarding the proposed comment, I think it's all fine, but we must
preface it with a clarification that does two things: (1) reinforces that we
don't want the page to know whether AT is running; and (2) asks about the
scope of the spec - whether it's exclusively about in-page audio management,
or whether the mention of the 'underlying platform' implies there's some
sort of hinting to the OS going on there.

 

Does that make sense?

 

Best regards,

 

 

Matthew

 

Matthew Atkinson

Head of Web Standards

Samsung R&D Institute UK

Samsung Electronics

+44 7733 238 020

 


 

From: Ramakrishnan, Sharath Chandra <scram@illinois.edu> 
Sent: 28 May 2025 16:33
To: Janina Sajka <janina@rednote.net>; W3C WAI Accessible Platform
Architectures <public-apa@w3.org>
Subject: Re: Draft Comment for Discussion -- a11y-review issue 232

 

Thank you, Janina, for your thoughtful comments, which I agree with.

 

And Hello all, 

 

You all may remember me: I responded to your courtesy welcome email a few
months ago. I have been following the email discussions since, but the 9am
meetings continue to evade me, as we are still ambling towards a punctual
daycare drop-off routine.

 

Anyway, I 100% agree with Janina that the use of non-speech sounds
(auditory displays and sonification) needs acknowledgement and design
consideration in all future web-audio specs.

 

I have been developing auditory displays for the last decade, and I
constantly encounter constraints with Web Audio. Over the last few years my
focus has shifted to an even less addressed issue: the use of earcons
compatible with the capacities of assistive hearing technologies.

 

Yes! Rarely do folks think of accessibility beyond the sensory-substitution
metaphor.

 

Without going into the ableist history of the speech vocoder's development,
I will also add that a common concern raised by the hearing aid and cochlear
implant listener community is not having the ability to shift the formants
of the 'Google lady' or other TTS voices down to sub-100Hz speech formant
frequencies, to make them more intelligible and less annoying.

 

I hope I can contribute more, as I wrap my head around all the different
meetings you all have scheduled over the coming weeks!

 

Looking forwards

Regards

 

SHARATH CHANDRA RAMAKRISHNAN

(M.S., Ph.D.)

Perceptual Futures Laboratory 

Assistant Professor, School of Art & Design

Affiliate Assistant Professor, School of Information Sciences

University of Illinois Urbana-Champaign
scram@illinois.edu

http://illinois.edu/


  _____  

From: Janina Sajka <janina@rednote.net>
Sent: 28 May 2025 07:24
To: W3C WAI Accessible Platform Architectures <public-apa@w3.org>
Subject: Draft Comment for Discussion -- a11y-review issue 232

 

Colleagues:

I'm a bit weak on the mechanics of editing a GitHub issue, so am posting
my proposed draft text here.

Last week I agreed to further refine our proposed comments on
a11y-review issue 232:

https://github.com/w3c/a11y-review/issues/232

I'm proposing an additional paragraph to account for last week's
teleconference discussion as denoted below:

<begin issue text>
In reviewing "how audio is rendered and interacts with other audio" in
the Audio Session API
https://urldefense.com/v3/__https://www.w3.org/TR/audio-session/*abstract__;
Iw!!DZ3fjg!_y7ve32rLFLRKmmCNI4v2G1TLe0KvkLbcPAGhlfbzy0vQTAw4tWsTDTEfxc4gpNzz
HnoGbzpav1nZ-lP$
<https://urldefense.com/v3/__https:/www.w3..org/TR/audio-session/*abstract__
;Iw!!DZ3fjg!_y7ve32rLFLRKmmCNI4v2G1TLe0KvkLbcPAGhlfbzy0vQTAw4tWsTDTEfxc4gpNz
zHnoGbzpav1nZ-lP$>  , APA
are concerned with implications for audio generated by TTS--which could
be AT generated audio in a user's environment. We see no mention of TTS
in the spec.

Did we miss it? Clearly any AT use of TTS would make that audio the most
critical audio to the user. Should we be concerned? If the API were to
call out TTS as being of particular sensitivity, would we then risk user
privacy concerns?

<proposed new paragraph begins>
Of course most, though not all, AT is provided and managed by the OS,
obviating privacy concerns. Audio in AT is not limited to TTS, however.
Also, there are additional uses of audio for accessibility reasons. One
key use is short, distinctive audio compositions used as signage to
signal a specific event to the user in a manner that communicates far
more quickly than TTS, allowing TTS to function as a confirmation. These
short audio compositions generally come as sets of files and are mapped
to common events (such as dialog popups), and are known variously as
[auditory icons](https://en.wikipedia.org/wiki/Earcon), or more
descriptively as <q>Sonicons</q> or <q>Earcons.</q> At a minimum, APA
considers that developers should be aware of this wider use of audio.
Certainly, when it's web-based AT, specific considerations would then
pertain.

BTW:  Your API publication has caused us to reconsider our
self-assessment questionnaire.  It should have exposed questions such as
these! Consequently we'll now be adding audio-related self-assessment
questions as we revamp our tooling.
             
-- 

Janina Sajka (she/her/hers)
Accessibility Consultant
https://linkedin.com/in/jsajka


The World Wide Web Consortium (W3C), Web Accessibility Initiative (WAI)
Co-Chair, Accessible Platform Architectures
http://www.w3.org/wai/apa

Linux Foundation Fellow
https://www.linuxfoundation.org/board-of-directors-2/

Received on Wednesday, 4 June 2025 13:20:00 UTC