
Re: Screen-reader behaviour

From: Al Gilman <Alfred.S.Gilman@IEEE.org>
Date: Thu, 30 Aug 2007 13:18:01 -0400
Message-Id: <p06110415c2fca55f7ed1@[]>
To: joshue.oconnor@cfit.ie, "Philip Taylor (Webmaster)" <P.Taylor@Rhul.Ac.Uk>
Cc: HTML Working Group <public-html@w3.org>, wai-xtech@w3.org

At 3:24 PM +0100 30 08 2007, Joshue O Connor wrote:
>Philip Taylor (Webmaster) wrote:
>>  If you are familiar with the technology hands-
>>  on, can you also say whether classes and/or
>>  IDs (as well as elements) are exposed to the
>>  end-user by any system  of which you are aware ?
>Are you talking about the naming conventions etc that you use for the
>nuts and bolts of your applications? Then no - AFAIK.
>It's only the HTML elements and the contents of their corresponding
>attributes that are exposed.
>If anyone else knows otherwise please do expand!

[minor wrinkle: In addition to the semantics of the element itself,
pro-forma additional prompting from assistive technology will
generally include things like the label of a control, and sometimes
labeling is imported from further afield in the context, such as a
parent or grandparent element when there is no suitable
identification on the element itself.]
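
As a sketch of that wrinkle (hypothetical markup, not taken from the thread): when a control carries no identification of its own, ATs commonly fall back to text found higher in the context, such as an ancestor's heading or legend:

```html
<!-- Explicit label: the AT announces "Search" for the input -->
<label for="q">Search</label>
<input type="text" id="q" name="q">

<!-- No label on the control itself: many ATs fall back to
     surrounding context, here the fieldset's legend
     ("Shipping address") -->
<fieldset>
  <legend>Shipping address</legend>
  <input type="text" name="street">
</fieldset>
```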

Screen readers produce one sort of adapted presentation, serving one
set of needs.  That is the most conspicuous case of adapted presentation,
but it's important to realize it is only one wing of the envelope of
cases to cover.

WAI-ARIA is emerging practice but it is built on established successes
in the installed-application accessibility domain.  It increases the
availability of critical information to Assistive Technology (AT).

The "accessibility API" services offered on different programming
platforms (operating systems and some language systems) give a
view of the critical information that has been consolidated across
different classes of Assistive Technology: screen readers, on-screen
keyboards, voice command, and so on.

How much "voice over" prompting the AT adds based on the markup
varies in practice, because different people need different things.
This is even supported with user-set verbosity controls, because one
is trading off two rather precious commodities: the time to get
through the task and the risk of getting lost in the middle.

This is why we suggested that HTML WG baseline the common
practices of accessibility APIs as information that should be supported
in programmatically-recognizable form in web content.
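
For illustration (a hedged sketch using ARIA-style attributes, not markup from the thread): this is the kind of programmatically-recognizable information meant here -- role, name, state, and value exposed in the content itself, so any AT can map them onto the platform accessibility API:

```html
<!-- A custom slider whose role, label, value range and current
     value are all machine-recognizable in the markup, rather
     than only implied by the visual rendering -->
<div role="slider"
     tabindex="0"
     aria-label="Volume"
     aria-valuemin="0"
     aria-valuemax="100"
     aria-valuenow="40">
</div>
```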


+ Support for issues highlighted in Table 1 of the ARIA Roadmap


There are two reasons why the things the WAI asks for sound the way they do:

1) people with disabilities vary widely in their needs, and there are
years of experience in assistive technology and universal design
that have gone into distilling a common-mode abstraction of what
the client side needs from the content production and server segment.

2) historically, the adaptation has been done on the client side.  So
what the content producers need to take responsibility for is the
machine-recognizable wireframe of adaptation-neutral information
that enables the adaptation of presentation without destroying
or perverting understanding.  [particularly led by the Mobile space,
the adaptation is moving somewhat to the server; but the layers
of logic remain, even if applied in different net locations.]

A few quick references:

How people with disabilities use the Web:

Metadata for Content Adaptation Workshop:


Received on Thursday, 30 August 2007 17:18:19 UTC
