RE: screen readers and punctuation

Yes, when French is encountered on the CRTC site, I hear French (not French read as English), and when English text is encountered, the synthesizer language switches to English.
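
For anyone following along who wonders what makes that switching work on the web side, it is the lang attribute in the markup. A minimal, made-up sketch (not the actual CRTC markup, and the link paths are invented) might look like this:

<!DOCTYPE html>
<!-- Default document language is English; anything marked lang="fr" is announced with a French voice -->
<html lang="en">
  <head><meta charset="utf-8"><title>Welcome / Bienvenue</title></head>
  <body>
    <p>
      <a href="/eng/home.htm">English</a>
      <a href="/fra/accueil.htm" lang="fr">Français</a>
    </p>
    <p>Our colleagues in Montreal sign off with <span lang="fr">bonne journée</span>.</p>
  </body>
</html>

A screen reader/synthesizer that supports both languages will switch voices wherever the lang value changes, which is presumably what is happening on that page.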

Cheers, Karen

-----Original Message-----
From: Karen Lewellen <klewellen@shellworld.net> 
Sent: Thursday, February 14, 2019 6:29 PM
To: Karlen Communications <info@karlencommunications.com>
Cc: 'Pyatt, Elizabeth J' <ejp10@psu.edu>; tink@tink.uk; 'Kalpeshkumar Jain' <kalpeshjain89@gmail.com>; 'Sean Murphy (seanmmur)' <seanmmur@cisco.com>; 'Michellanne Li' <michellanne.li@gmail.com>; 'w3c-wai-ig' <w3c-wai-ig@w3.org>
Subject: RE: screen readers and punctuation

Actually Karen, that brings up an interesting question.
Does your screen reader read both the English and the French on the homepages of some Canadian government websites?
For example, I hope:
www.crtc.gc.ca
Often I find pages that present both the English and the French, allowing an individual to choose which version of the site they prefer.
Karen



On Thu, 14 Feb 2019, Karlen Communications wrote:

> Canada is a bilingual country, and we often get a document that is in English and French, or a sentence that contains a phrase or company name in the other language. In Word or PDF we can assign a language to paragraphs, words or phrases. If the assignment is there, my screen reader/synthesizer will switch to the appropriate language as long as it is supported.
>
> All I am saying is that part of the discussion of pronunciation, when it comes to other languages, is that if there is another language, and that language is assigned in the coding of the digital content, then we should have access to the correct pronunciation of that language if our adaptive technology supports it.
>
> I am not an advocate for localized versions of languages, as I am someone who uses a British voice to read instead of an American one. If I were forced to listen to American English it would take me a while to get used to the pronunciations, and it would be the same for someone who is used to hearing American pronunciations having to listen to text with British pronunciations. If we use generic languages like English, French or Spanish, then those of us who use screen readers or Text-to-Speech tools would not have to listen to text spoken with pronunciations we don't recognize; our screen readers or Text-to-Speech tools would read the text with our localized synthesized language/pronunciations.
>
> Languages, like special characters or symbols, add another layer of complexity to how information is rendered to those of us who use adaptive technology.
>
> Cheers, Karen
>
> -----Original Message-----
> From: Karen Lewellen <klewellen@shellworld.net>
> Sent: Thursday, February 14, 2019 3:07 PM
> To: Karlen Communications <info@karlencommunications.com>
> Cc: 'Pyatt, Elizabeth J' <ejp10@psu.edu>; tink@tink.uk; 'Kalpeshkumar 
> Jain' <kalpeshjain89@gmail.com>; 'Sean Murphy (seanmmur)' 
> <seanmmur@cisco.com>; 'Michellanne Li' <michellanne.li@gmail.com>; 
> 'w3c-wai-ig' <w3c-wai-ig@w3.org>
> Subject: RE: screen readers and punctuation
>
> I am of a mind to ask if you seek more from a screen reader/synthesizer than one might expect from most humans.
> The ability to read and speak multiple languages, switching from one to another whenever coming across text in an alternate language, seems quite the gift. How many people in general read in this way? Many synthesizers and screen readers can read various languages, but I believe they assume that the person using them needs to read in that language consistently, translating things from other languages like English into the alternate language when required.
> To expect a program and its voice to be happily reading along in English and then, when suddenly coming across Japanese, to read and enunciate it in perfect Japanese seems rather atypical, does it not?
> After all, at their best, screen readers and synthesizers substitute for eyes and ears, if that makes sense.
> Karen
>
>
>
> On Thu, 14 Feb 2019, Karlen Communications wrote:
>
>> With different languages, we run into the problem of our screen readers/synthesizers not supporting a specific language. The example Elizabeth gave of something in Japanese has two components: the text being "tagged", for lack of a better word, in the appropriate language and the screen reader/synthesizer being able to switch to that language; and the ability to then pull out any accents for individual characters.
>>
>> For example, if I am using an English voice and encounter French that isn't identified as being French, my screen reader will try to read it in English...without the appropriate pronunciation of the words.
>>
>> While I can, as was stated, add something to my screen reader pronunciation dictionary, I can't realistically do this for every word in every language I encounter...and how do I know my interpretation of the text is correct? For different languages, I need the ability of the screen reader/synthesizer to switch to that language so that text is pronounced correctly as I read.
>>
>> Then I may need the ability to go through the text in a more granular way in order to examine accents.
>>
>> Cheers, Karen
>>
>> -----Original Message-----
>> From: Pyatt, Elizabeth J <ejp10@psu.edu>
>> Sent: Thursday, February 14, 2019 12:03 PM
>> To: tink@tink.uk
>> Cc: Kalpeshkumar Jain <kalpeshjain89@gmail.com>; Sean Murphy
>> (seanmmur) <seanmmur@cisco.com>; Michellanne Li 
>> <michellanne.li@gmail.com>; w3c-wai-ig <w3c-wai-ig@w3.org>
>> Subject: Re: screen readers and punctuation
>>
>> I appreciate everyone’s time in reviewing these scenarios. It does seem like there’s been some progress, since it looks like both Léonie and Karen found a way to detect gaps and activate these pronunciations. In some situations, students might need help knowing where these utilities are.
>>
>> It’s also good to know that I can just use Unicode in these scenarios.
>>
>> Elizabeth
>>
>> P.S. I’m assuming the kanji 柔道 is pronounced like “judo” ;). I also assume a more dedicated Japanese or Asian Studies student would need access to Japanese language packs which vendors like Freedom Scientific, Apple and others make available. I still haven’t found anything for Gaulish yet...
>>
>>> On Feb 14, 2019, at 11:53 AM, Léonie Watson <tink@tink.uk> wrote:
>>>
>>>
>>> On 14/02/2019 15:55, Pyatt, Elizabeth J wrote:
>>>> Leonie:
>>>> I do understand why some punctuation is suppressed, but this isn’t really the scenario that worries me. I’m more worried about technical content like the following cases which I pulled from Wikipedia.
>>>> I would be curious what is pronounced on your screen reader.
>>>> 1. Japanese Culture:
>>>> Judō 柔道  meaning "gentle way" was originally created in 1882 by Jigoro Kano (嘉納治五郎) as a physical, mental and moral pedagogy in Japan.
>>>> Note: Judo was spelled with a long o and is followed by Japanese Kanji characters.
>>>
>>> The word judo was pronounced as it should be (like joodo). The Kanji was not announced but there was a noticeable gap as my screen reader read the entire chunk of content.
>>>
>>> I have now copied the Kanji into my screen reader's custom dictionary, where I can configure it to announce whatever might be appropriate (if only I understood Kanji).
>>>
>>>
>>>> 2. English phonetics
>>>> Most varieties of English have syllabic consonants in some words, principally [l̩, m̩, n̩], for example at the end of bottle, rhythm and button. … However phonologists prefer to identify syllabic nasals and liquids phonemically as /əC/. Thus button is phonemically /ˈbʌtən/ or /ˈbɐtən/ and bottle is phonemically /ˈbɒtəl/, /ˈbɑtəl/, or /ˈbɔtəl/.
>>>> Note: The first bracket contains 3 symbols - l,m,n with a vertical bar beneath. The second is schwa + C. The last set of brackets are all variant transcriptions of button and bottle with variant vowel symbols.
>>>
>>> In those cases my screen reader recognises those symbols, even though it doesn't speak them correctly. This seems to stray into pronunciation rather than character identification though, and there is work just starting at the W3C to look at solutions for pronunciation with synthetic speech.
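>>>
>>> One existing building block in that space is SSML's phoneme element, which lets an author attach an IPA transcription to a word. How (or whether) that gets exposed for web content is exactly what the new W3C work will need to sort out, but as a minimal sketch (SSML rather than HTML, and support varies), using the /ˈbʌtən/ transcription from your example:
>>>
>>> <speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-GB">
>>>   The word <phoneme alphabet="ipa" ph="ˈbʌtən">button</phoneme> ends in a syllabic nasal.
>>> </speak>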
>>>
>>>
>>> Again, the same technique applies. It's the same technique I used before emoticons and emoji were common on the web: I would configure my screen reader to speak ":)" as "smile" or "smiley face".
>>>
>>>> In some cases enclosing content in parentheses, slashes or brackets can indicate technical content is present even if the screen reader fails to say anything. A student taking Japanese or phonetics can also upgrade their symbol file, but it’s not an upgrade currently needed by sighted users since the phonetic fonts are provided by Apple and Windows.
>>>> I hope this clarifies my concern.
>>>
>>> It does, but I think it may be less of a concern than you believe. It can be a problem, but more often than not it's a problem that can be solved, as Karen explained.
>>>
>>>
>>>
>>> Léonie
>>>> Elizabeth
>>>>> On Feb 14, 2019, at 10:08 AM, Léonie Watson <tink@tink.uk> wrote:
>>>>>
>>>>> On 14/02/2019 14:33, Pyatt, Elizabeth J wrote:
>>>>>> I know that pronunciation of some symbols may vary with context, but even a scrambled pronunciation of an exotic symbol is better than skipping it altogether.
>>>>>
>>>>>> Again comparing this to what sighted users experience, if a person is reading a document with a symbol that the font can’t display, the reader normally sees a “?” or “X” character. There’s an indication that something is there and that a font upgrade might be needed in order to view the entire document.
>>>>>
>>>>> This isn't a good comparison because the two scenarios are different. It isn't that screen readers don't recognise punctuation and symbols, it's that they're configured to ignore them (often as a conscious choice by the user).
>>>>>
>>>>> Maths and MathML is a different thing, but in terms of the punctuation and symbols used in typical content, there is a really good reason why screen readers are configured the way they are.
>>>>>
>>>>> Let's take this example: "Hello, how are you?".
>>>>>
>>>>> When configured to speak all symbols and punctuation, this is what a screen reader says:
>>>>>
>>>>> Let apostrophe s take this example colon quote Hello comma how are you question quote period.
>>>>>
>>>>> That is unusable.
>>>>>
>>>>> When the screen reader is configured to speak only important punctuation (like the @ in an email address for example), then the screen reader reads it like a human would.
>>>>>
>>>>> Let's take this example: "Hello, how are you?".
>>>>>
>>>>> It pauses for the colon and the comma, it elevates in pitch to signify the question, and it pauses a little longer at the full stop.
>>>>>
>>>>> Your sentence is a good example of why missing symbols are often easy to spot. This is what my screen reader announced:
>>>>>
>>>>> "...symbol that the font can’t display, the reader normally sees a or “X” character."
>>>>>
>>>>> There was an obvious gap between "sees a" and "or", so I went to explore and found the question mark in quotes.
>>>>>
>>>>>> A sighted user can determine if it’s worth the trouble to get a new font, but at least they know it’s an option. When screen readers skip symbols, the user can’t easily determine if there is an issue. A screen reader user could choose to disable that function, but that would be the choice of the person not the technology.
>>>>>
>>>>> In many cases blind people are part of the teams that create screen readers, and so the default configurations are based on practical experience. Punctuation verbosity is also a setting commonly changed by even the most inexperienced screen reader users.
>>>>>
>>>>>
>>>>> Léonie
>>>>>> Elizabeth
>>>>>>> On Feb 14, 2019, at 2:14 AM, Kalpeshkumar Jain <kalpeshjain89@gmail.com> wrote:
>>>>>>>
>>>>>>> I have had a similar experience with different SRs and their punctuation/symbol reading behavior in one of the projects I worked on recently.
>>>>>>> It was a bit frustrating that the SR was ignoring simple symbols like '+, -, *, /, <, etc.'
>>>>>>> Using MathML for simple expressions was not feasible in my situation.
>>>>>>>
>>>>>>> Instead of using the symbols as is, we used their respective HTML character codes. We referred to the link below to get the entities:
>>>>>>> https://www.rapidtables.com/web/html/html-codes.html
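>>>>>>>
>>>>>>> As an illustration of the approach (these are standard HTML named entities, not our project's actual markup), an expression like 1 <= 3 would be written as:
>>>>>>>
>>>>>>> <!-- &le; is the named entity for the less-than-or-equal sign, &lt; for less-than -->
>>>>>>> <p>1 &le; 3</p>
>>>>>>> <p>a &lt; b, x &times; y, p &minus; q</p>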
>>>>>>>
>>>>>>> The result was an improvement in the reading behavior: the SRs were identifying the symbols.
>>>>>>> However, it was still not 100% coverage.
>>>>>>>
>>>>>>> Ultimately, we had to add a disclaimer stating that the SR might skip some symbols. We had to leave the choice of enabling the setting to read all punctuation in the SR tools to the user, as that cannot be done programmatically.
>>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Kalpeshkumar Jain
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Feb 14, 2019 at 4:53 AM Sean Murphy (seanmmur) <seanmmur@cisco.com> wrote:
>>>>>>> The versions of screen readers being used here are very old. Also, the punctuation is very dependent on context, such as when you are using maths or programming notation. The <= will mean something different than if it is used to identify how the flow of processes goes. For example, 1 <= 3 is a maths equation, but if I say process1 <= process2 to convey the order of processes, it means something else. I wouldn’t want the second example to say "less than or equal to". Also, it is a lot less content to comprehend hearing <= than the full words. A screen reader user gets used to how things are spoken. The brain is an amazing program or computer within itself.
>>>>>>>
>>>>>>> I have not tested this myself, but if a page was using MathML, would the screen reader use the <= or the full words?
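>>>>>>>
>>>>>>> For anyone who wants to try it, here is a minimal MathML sketch of the first example (standard MathML, not markup taken from any particular page):
>>>>>>>
>>>>>>> <math xmlns="http://www.w3.org/1998/Math/MathML">
>>>>>>>   <mn>1</mn>
>>>>>>>   <!-- &#x2264; is the less-than-or-equal-to operator -->
>>>>>>>   <mo>&#x2264;</mo>
>>>>>>>   <mn>3</mn>
>>>>>>> </math>
>>>>>>>
>>>>>>> My understanding is that screen readers with MathML support generally announce the operator as the full words, but I would want to verify that rather than state it as fact.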
>>>>>>>
>>>>>>>
>>>>>>> Sean Murphy
>>>>>>>
>>>>>>> SR ENGINEER.SOFTWARE ENGINEERING
>>>>>>>
>>>>>>> seanmmur@cisco.com
>>>>>>>
>>>>>>> Tel: +61 2 8446 7751
>>>>>>>
>>>>>>>        Cisco Systems, Inc.
>>>>>>>
>>>>>>> The Forum 201 Pacific Highway
>>>>>>>
>>>>>>> ST LEONARDS
>>>>>>>
>>>>>>> 2065
>>>>>>>
>>>>>>> Australia
>>>>>>>
>>>>>>> cisco.com
>>>>>>>
>>>>>>>
>>>>>>> Think before you print.
>>>>>>>
>>>>>>> This email may contain confidential and privileged material for the sole use of the intended recipient. Any review, use, distribution or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive for the recipient), please contact the sender by reply email and delete all copies of this message.
>>>>>>>    From: Michellanne Li <michellanne.li@gmail.com>
>>>>>>> Sent: Thursday, 14 February 2019 2:40 AM
>>>>>>> To: w3c-wai-ig@w3.org
>>>>>>> Subject: screen readers and punctuation
>>>>>>>
>>>>>>>  Hello all,
>>>>>>>
>>>>>>>  I just read this piece from Deque on how screen readers address punctuation: Why Don’t Screen Readers Always Read What’s on the Screen? Part 1: Punctuation and Typographic Symbols.
>>>>>>>
>>>>>>>  Since it was written in 2014, I am wondering if screen reader technology has since been updated to better read out important symbols.
>>>>>>>
>>>>>>>  Thanks!
>>>>>>>
>>>>>>>  Michellanne Li
>>>>>>>
>>>>>>> (512) 718-2207
>>>>>>>
>>>>>>> http://www.michellanne.com
>>>>>>>
>>>>>> =-=-=-=-=-=-=-=-=-=-=-=-=
>>>>>> Elizabeth J. Pyatt, Ph.D.
>>>>>> Accessibility IT Consultant
>>>>>> Teaching and Learning with Technology, Penn State University
>>>>>> ejp10@psu.edu, (814) 865-0805 or (814) 865-2030 (Main Office)
>>>>>> The 300 Building, 112
>>>>>> 304 West College Avenue
>>>>>> State College, PA 16801
>>>>>> accessibility.psu.edu
>>>>>
>>>>> --
>>>>> @LeonieWatson Carpe diem
>>>> =-=-=-=-=-=-=-=-=-=-=-=-=
>>>> Elizabeth J. Pyatt, Ph.D.
>>>> Accessibility IT Consultant
>>>> Teaching and Learning with Technology, Penn State University
>>>> ejp10@psu.edu, (814) 865-0805 or (814) 865-2030 (Main Office)
>>>> The 300 Building, 112
>>>> 304 West College Avenue
>>>> State College, PA 16801
>>>> accessibility.psu.edu
>>>
>>> --
>>> @LeonieWatson Carpe diem
>>
>> =-=-=-=-=-=-=-=-=-=-=-=-=
>> Elizabeth J. Pyatt, Ph.D.
>> Accessibility IT Consultant
>> Teaching and Learning with Technology, Penn State University
>> ejp10@psu.edu, (814) 865-0805 or (814) 865-2030 (Main Office)
>>
>> The 300 Building, 112
>> 304 West College Avenue
>> State College, PA 16801
>> accessibility.psu.edu
>>
>>
>>
>>
>
>
>

Received on Friday, 15 February 2019 14:03:16 UTC