
Submission re: ATAG (from Wayne Dick)

From: Richards, Jan <jrichards@ocadu.ca>
Date: Thu, 22 Dec 2011 19:41:36 +0000
To: "public-atag2-comments@w3.org" <public-atag2-comments@w3.org>
Message-ID: <0B1EB1C972BCB740B522ACBCD5F48DEB039F1EF5@ocadmail-maildb.ocad.ca>
Wayne Dick submitted the following to me (Jan Richards) on 18 Nov 2011. The zip version is attached. The plain text version appears below:

The Two Models of Assistive Technology

Visual readers with low vision (VR/LV)  are overlooked by W3C standards.  We do not have the assistive technology we need for the simple task of reading.  We also do not have effective authoring tools, and the situation is deteriorating.  New authoring tools do not protect our need for access to style, and they are getting harder to use.  

While the accessibility community focuses most of its efforts on access to active content, these basic problems for VR/LV remain unaddressed, but not unsolved.  The technology to eliminate almost all reading problems for VR/LV has been well established since the late 1990's with CSS 1 and HTML 4.01. More recently, DAISY or EPUB with MathML solves everything regarding reading.  Visual readers with low vision need complete access to style through semantic markup, in languages that provide template structures for users to override authors' choices.  If authoring tools are to be effective, they must produce data formats that can be changed through user-controlled template structures like style sheets, and they must help authors write content that encourages user style modification.  Style is the barrier for VR/LV: font face, color (background and foreground), spacing (line, word and letter) and font size, with smart enlargement that preserves word wrapping.  This is well established.  Content with embedded formatting that does not include a markup and style structure enabling users to change the author's text format is not accessible.  Authoring tools that encourage such formats are not accessible authoring tools.
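The kind of user override described here can be sketched as a user style sheet. This is only an illustration: the selectors and values below are hypothetical choices one reader might make, not a prescription, and they assume content marked up with ordinary semantic HTML elements.

```css
/* Sketch of a user style sheet (illustrative values only).
   User rules marked !important override author styles in the CSS cascade,
   giving element-level control of the typography listed above. */
body, p, li, td, blockquote {
  font-family: Verdana, sans-serif !important;  /* font face */
  font-size: 18pt !important;                   /* enlargement; text reflows and word-wraps */
  line-height: 1.8 !important;                  /* line spacing */
  word-spacing: 0.2em !important;               /* word spacing */
  letter-spacing: 0.06em !important;            /* letter spacing */
  color: #ffff99 !important;                    /* foreground color */
  background-color: #000000 !important;         /* background color */
}
```

Because the formatting lives in a style layer separate from the semantic markup, the reader changes it without touching the author's content; embedded, hard-coded formatting offers no such point of control.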

Regarding the user interface of the tools: visual readers with low vision have about 25 to 60 words per page in view.  Any structure, like a panel, ribbon or toolbar, that cuts into that content area is a barrier.  Well-organized menu systems have worked well for VR/LV to operate software. This works very well when the operating system enables these users to choose the font size of their menus, and authoring software respects these choices. While toolbars, panels and ribbons are very popular with fully sighted users, visual readers with low vision are seriously harmed by the loss of document content space.  Converting a panel, ribbon or toolbar system to a well-organized menu system requires significant knowledge of the authoring tool's specific architecture and philosophy of use.

There has never been serious pressure to provide rich access to visual style, or to consider the differences in UI needed by visual readers with low vision, the majority of people with visual impairments.  The result is that VR/LV have weak tools for reading and authoring, and this population has poor access to social participation.

One problem is the way assistive technology (AsTech) is viewed in the accessibility community.  There are two models, the External Model and the Internal Model.  In most standards documents, the Internal Model is absent or not given extensive treatment.  This is probably because the most prominent AsTech is the screen reader, an external AsTech.  The Internal Model is not well supported by standards, so it is not surprising to find that excellent products that use it, like the IBM Homepage Reader, have a much higher mortality rate than External Model AsTech.

We will explore this issue, because at present Internal Model AsTech has a much better chance of meeting the needs of VR/LV than External Model AsTech products do. This issue is tightly wrapped up with accessibility standards, especially two concepts: programmatic determination and accessibility support. First we look at the two models of assistive technology.
External Assistive Technology

When most people think of AsTech they envision technology that sits outside the application program (App). This is the external model. The App can be a web browser, an authoring tool like Word or Dreamweaver, a mail client, or any of thousands of applications.  The external AsTech reads semantic information from the App, sent through an Accessibility Application Programming Interface (AsAPI), and passes it to the user through its own output interface using operating system utilities.  The external AsTech also transforms user input from alternative devices through the AsAPI to the App for normal processing.  The primary goal of external AsTech is to mediate input and output so the person with a disability can perceive, operate and understand the App.

As mentioned earlier, the primary exemplar of external AsTech is the screen reader, but many other technologies use this model.  It is popular because it enables web content and App developers to follow deterministic rules to make their products available to AsTech. The responsibility for accommodation is passed on to external agents (AsTech) that have a better understanding of the accommodations required by specific disability groups.

The external model is efficient, and when it works it is effective. Only one AsTech program needs to be written for the entire App space on a platform, and technical skill is divided well: content and App developers concentrate on developing content and applications, while AsTech developers concentrate their skills on the needs of their disability group.

The external model works well for some disability groups but not as well for others.  Developing an external browser capability for visual readers with low vision would entail developing a completely separate browser.  All of the rendering software and GUI would have to be built from scratch.  Building a universal menu interface for toolbar-, panel- and ribbon-driven authoring tools would require a complete change in how we develop software.  The same would be true of almost every other kind of App.  Visual readers would have to live in a separate application universe, and this separate world would be about as equal as segregated schools were in the US before 1954.
Internal AsTech

The other model is the internal / plug-in model.  The most famous of these was the late IBM Homepage Reader.  Less famous but more powerful was the late IBM WebAdapt2Me, a program that, had it survived, would have completely solved access problems for visual readers with low vision.  The internal model is not generic like external AsTech: you must have an internal AsTech for each application, implemented as an extension or plug-in to the App.  These AsTech programs are hard to maintain for institutional reasons.  They are dependent on contractual agreements with individual vendors that permit them to penetrate the security shields of App software.

The most fertile ground for internal AsTech is the open source world, where extensions are encouraged.  Click Speak was an example: a Firefox extension that used the open source extension capability to enable access to web pages inside the security of Firefox.

Click Speak was wonderful.  You could point at what you wanted to read with the mouse.  It was easy to use the mouse, unlike screen readers, which are punitive in their access to pointing devices.  Click Speak did not chatter at you all the time, distracting you from the difficult task of reading the screen with compromised vision.  While screen reader users need this constant audio direction, it is a significant barrier to visual concentration for VR/LV.  Click Speak was a polite program.  It talked to you when you wanted speech; it was quiet when you needed silence to use your partial sight, and it tracked what was being read with visual highlights when you wanted to synchronize text-to-speech with visual reading. Sadly, open source authors move on in life and their products die.  We thank Charles Chen for his contribution to reading with low vision and really feel the loss of Click Speak passing into disrepair.

The platform AsAPI does not help internal AsTech.  In mainstream applications that support extensions, the internal AsTech interprets messages and content from inside the App.  It then leverages the semantic analysis and I/O interface of the mainstream App to provide an alternative view of the application.  When the visual environment needs to be changed, this approach works very well.  A mainstream application invests considerable resources in its visual interface; if it allows AsTech extensions to share those resources, the division of labor seen with external AsTech can be realized in the Internal AsTech Model as well.  An internal communication structure, like the Document Object Model in browsers, still frees App developers to pass deterministic structures on to AsTech developers through an internal interface.  App developers can focus on the needs of the App, and the AsTech developers can focus on the needs of the disability group they serve.
How Standards Fit

There is one obvious question to answer.  Why do internal AsTech products fail while external AsTech products succeed?  The three Internal Model products described above were high-quality products that were well received by their disability groups, and no external AsTech existed that met the same needs.

I believe there are two answers to this problem. First, external AsTech makes compliance easier for developers, and second, there is little motivation to open proprietary App software to accessibility extensions.  In this respect it is standards that let us down.

The W3C Web Accessibility Initiative sets the standards for web accessibility, and W3C standards eventually become law. Hence, these standards have the ability to include or exclude disability groups from information access.  Currently, VR/LV is excluded from the protection of W3C standards, and from national laws based on them.  W3C standards either ignore internal AsTech or do not protect it with the same force they give to external AsTech.

One critical concept defined in the Web Content Accessibility Guidelines (WCAG 2.0) is accessibility support. Web content conforms to WCAG 2.0 only if assistive technology exists for the content and that AsTech is available to users. Without both of these factors, content does not conform to WCAG 2.0.  The problem with accessibility support is that the WCAG Working Group never finished the job.

The document defined the concept of accessibility support, but the WCAG Working Group never rigorously studied exactly what accessibility support entailed for various disability groups. The Working Group does not even consider the example AsTech it highlights in normative sections of the guidelines to be necessary for accessibility support.

Currently, vendors are able to cite ineffective external AsTech as accessibility support where effective internal support is needed.  The most prominent example of ineffective external support for reading content is the screen magnifier. This technology works well for some tasks, but it is ineffective for visual reading with low vision.  Oh yes, you can read with a screen magnifier.  It is much better than nothing.  That is just like the way you can use an outhouse and prefer it to a bush.  But it is extremely weak access, and very unequal.  Fully sighted users get indoor plumbing; partial sight gets the outhouse, and says thank you.  In the absence of meaningful guidance on accessibility support, inaccessible content and applications will continue to claim WCAG 2.0 conformance by citing support from ineffective external AsTech.

The ATAG and UAAG documents are not done.  You must look at what effective accessibility support is in reality.  Is it equally effective treatment, or is it an inferior crumb thrown to a disability group?

Accessibility support for content and for authoring tool GUIs requires user control of visual typographical style.  There are 30 ways to get low vision and over 15 systems of the eye and brain that can be attacked.  Visual readers with low vision are not a single disability; they are a cluster of disabilities.  What helps one hurts another.  SC 1.4.3 is a prime example: it helps people who need elevated contrast and hurts people with photosensitivity.
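The conflict can be made concrete with two alternative user style sheets; the color values here are illustrative assumptions, not recommendations for any particular reader.

```css
/* Two alternative user style sheets for the same page (hypothetical values).
   A reader installs one or the other; the same property set serves
   opposite needs, which is why per-user control matters. */

/* Reader A: needs elevated contrast, well beyond the SC 1.4.3 minimum. */
body { color: #ffffff !important; background-color: #000000 !important; }

/* Reader B: photosensitive; bright backgrounds and stark contrast cause
   discomfort, so a muted, low-luminance palette works better. */
body { color: #333333 !important; background-color: #d8d2c4 !important; }
```

No single author-fixed presentation can satisfy both readers; only user-side override of style can.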

WAI working groups are not to blame for this limitation of vision; that is just the way people think about AsTech.  The screen reader was such a powerful breakthrough that it is just plain difficult to think of another model. The key mistakes were these.  WAI did its very best to write standards that would accommodate extensive change in App technology, but it did not see the model changes in AsTech that would be needed to support new technologies or all disabilities.  WAI sees a fluid world of applications and a static world of assistive technology.  Also, in reality, WAI lumps visual impairment into one group.  The accommodations for blindness are obvious, and they lend themselves to technology that is easier for developers to address.  A readjustment of a GUI is very difficult.  It requires Internal Model AsTech, and developers are not keen on opening up to that access.  The WAI can change this.
The Problem Can Be Fixed

UAAG and ATAG are still drafts.  By insisting on accessibility support regardless of the model of the AsTech, these standards can force vendors that cannot provide meaningful external support to open up to, and solicit, effective internal plug-in AsTech.

What the WCAG Working Group never followed up on was the interpretation of accessibility support.  It is time for the ATAG and UAAG drafts to take this seriously.  If content or an authoring tool has accessibility support it is accessible; if not, it is not.  Since no External Model support exists for visual readers with low vision, products that do not open up to internal AsTech support are not going to be accessibility supported for visual readers with low vision.  The WCAG Working Group does not acknowledge this issue, but ATAG and UAAG can close the gap.

Accessibility support must be more prominent in ATAG and UAAG than it was in WCAG.  I consider Level A to represent critical access needs that pose substantial barriers if unmet.  Visual access to typographical style, at the element level, covering font face, color (background and foreground), spacing (line, word and letter) and font size with smart word wrapping, is Level A access.  How else can you help a group that is composed of many, maybe hundreds, of subpopulations?  You must allow the visual reader with low vision to change the GUI (content and software) to meet their individual needs.  HTML + CSS is the model.  Internal Model AsTech is a means.  Please tighten up ATAG and UAAG enough to make accessibility support for this Level A a requirement that vendors cannot escape.

One personal comment.  I grew up in the 50's and 60's.  I used magnifying glasses.  Reading always hurt.  Even though I am a PhD-level mathematician, I could not program until technology moved from punch cards to terminals.  I know that I am very smart.  What about the others, the ones who could not adapt as I did?  Good people, scholars, but not quite as bright.  What do they do?  Currently, the technology exists to eliminate the print disabilities of most people in VR/LV.  This is a growing population as baby boomers move on to age-related low vision.  Right now WAI leaves this group out.

I know equally effective access is possible, and it does not exist today.  In my 63 years I've tried all the standard AsTech.  I can also write effective user style sheets.  So I know what is possible, unlike others.  The visual reader with low vision needs CSS-type access at the document element level and at the App control level.  Please help with this.  It is just criminal to let this historic opportunity pass.

-----Original Message-----
From: Wayne Dick [mailto:wayneedick@gmail.com] 
Sent: November 18, 2011 9:27 AM
To: Richards, Jan
Subject: Re: Submission re: ATAG?

Hi Jan,

Sorry for the delay.

Here is my comment to ATAG.


Received on Thursday, 22 December 2011 19:42:21 UTC
