- From: Gez Lemon <gez.lemon@gmail.com>
- Date: Wed, 14 May 2008 21:33:56 +0100
- To: "Henri Sivonen" <hsivonen@iki.fi>
- Cc: "Matt Morgan-May" <mattmay@adobe.com>, "HTML Working Group" <public-html@w3.org>, "W3C WAI-XTECH" <wai-xtech@w3.org>
On 14/05/2008, Henri Sivonen <hsivonen@iki.fi> wrote:
>
> On May 14, 2008, at 21:02, Matt Morgan-May wrote:
>
> > A UA can measure font metrics before it draws text. Why wouldn't it
> > measure speech time of a string before speaking it? Or check that the
> > string matches something in its dictionary?
> >
> > It would have to be, for a UA to handle it. But clearly you are pushing
> > responsibility for missing @alt from the author to the user and/or the
> > user's AT, which you yourself are arguing cannot handle it.
>
> It's a situation AT needs to deal with anyway--no matter what's
> conforming syntactically.
>
> > Should they just try harder, then?
>
> Yes.
>
> > Who's going to give them advice on how to do that?
>
> I would hope detecting what strings take very long to speak or that don't
> appear to contain words from a dictionary is something that AT vendors
> wouldn't need external advice on.

That isn't how screen readers work. Screen readers work by converting
text into phonemes that they can then synthesise and output to the
user. This approach is obviously a lot quicker and more flexible than
consulting a static list of dictionary entries.

Gez

--
_____________________________
Supplement your vitamins
http://juicystudio.com
Received on Wednesday, 14 May 2008 20:34:43 UTC
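[Editorial note: for readers unfamiliar with the heuristic being debated above, here is a minimal sketch of the kind of check Sivonen describes: flagging alt strings that would take very long to speak or that contain few dictionary words. It is purely illustrative; the function name, thresholds, and word list are invented for this note and do not come from the thread, and, as Lemon points out, real screen readers synthesise text via phoneme conversion rather than dictionary lookup.]

```python
# Illustrative sketch only: flag alt text that is suspiciously long to speak
# or contains few recognisable words. Thresholds and the word list are
# invented for demonstration; no screen reader is known to work this way.
import re

WORDS_PER_SECOND = 3        # rough speech-rate assumption
MAX_SPEECH_SECONDS = 20     # arbitrary "takes very long to speak" cutoff
KNOWN_WORDS = {"photo", "of", "a", "cat", "sitting", "on", "the", "mat"}  # stand-in dictionary

def looks_bogus(alt_text: str) -> bool:
    """Return True if the alt text trips either heuristic."""
    words = re.findall(r"[a-z']+", alt_text.lower())
    if not words:
        return True  # no word-like tokens at all
    if len(words) / WORDS_PER_SECOND > MAX_SPEECH_SECONDS:
        return True  # would take very long to speak
    recognised = sum(1 for w in words if w in KNOWN_WORDS)
    return recognised / len(words) < 0.5  # mostly non-dictionary tokens

print(looks_bogus("A photo of a cat sitting on the mat"))  # False
print(looks_bogus("DSC00132.JPG"))                         # True
```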