
Fw: Follow Up Call - HTML Canvas Accessibility

From: Richard Schwerdtfeger <schwer@us.ibm.com>
Date: Thu, 27 Aug 2009 07:49:24 -0500
To: public-canvas-api@w3.org
Message-ID: <OF32DA490B.7B6AE04C-ON8625761E.0074D44B-8625761F.004670E7@us.ibm.com>
I wanted to log this discussion.

Rich Schwerdtfeger
Distinguished Engineer, SWG Accessibility Architect/Strategist
----- Forwarded by Richard Schwerdtfeger/Austin/IBM on 08/26/2009 04:16 PM 
-----

Richard Schwerdtfeger/Austin/IBM
08/24/2009 08:21 AM

To
Maciej Stachowiak <mjs@apple.com>
cc
Charles McCathieNevile <chaals@opera.com>, Michael Cooper <cooper@w3.org>, 
Cynthia Shelly <cyns@exchange.microsoft.com>, David Bolter 
<david.bolter@gmail.com>, Steven Faulkner <faulkner.steve@gmail.com>, 
James Craig <jcraig@apple.com>, Doug Schepers <schepers@w3.org>, David 
Singer <singer@apple.com>, Sue E McNamara/Rochester/IBM@IBMUS
Subject
Re: Follow Up Call - HTML Canvas Accessibility

Rich Schwerdtfeger
Distinguished Engineer, SWG Accessibility Architect/Strategist

Maciej Stachowiak <mjs@apple.com> wrote on 08/22/2009 10:52:00 PM:

> Maciej Stachowiak <mjs@apple.com> 
> 08/22/2009 10:52 PM
> 
> To
> 
> Richard Schwerdtfeger/Austin/IBM@IBMUS
> 
> cc
> 
> James Craig <jcraig@apple.com>, Charles McCathieNevile 
> <chaals@opera.com>, Michael Cooper <cooper@w3.org>, Cynthia Shelly 
> <cyns@exchange.microsoft.com>, David Bolter 
> <david.bolter@gmail.com>, Steven Faulkner 
> <faulkner.steve@gmail.com>, Doug Schepers <schepers@w3.org>, David 
> Singer <singer@apple.com>, Sue E McNamara/Rochester/IBM@IBMUS
> 
> Subject
> 
> Re: Follow Up Call - HTML Canvas Accessibility
> 
> 
> On Aug 22, 2009, at 1:27 PM, Richard Schwerdtfeger wrote:
> 
> > James,
> >
> > I am having Mike Smith re-schedule the call from the HTML working 
> > group. I want to discuss the personalization piece. That has not 
> > been fleshed out at all for HTML 5. This can be discussed in 
> > parallel to what Doug is doing. I don't like the author having to 
> > write script to help the user choose and render the alternative 
> > content.
> >
> Can you expand on what you mean by the "personalization piece"?

Hi Maciej,

This will be long, so I apologize in advance for the lengthy response ...

Yes, what is accessible depends on the individual. For example:

- If you have a video that is closed-captioned in English, a 
Spanish-speaking deaf user still has an inaccessible solution.
- If you are presented with a map and you are blind, you have an 
inaccessible solution that might be replaced by HTML driving directions.
- If you have a map in a canvas element that is rendered in Spanish, you 
may need the French equivalent if you don't speak Spanish.
- A learning-impaired user may find the content of the drawing too 
complex and, if there is a simplified alternative, may prefer that.

The IMS Global Learning Consortium has defined metadata for resources 
that can be mapped to user preferences (the two share the same general 
vocabulary). I posted a link on the last call. 

What is very important about canvas is that, due to its intensely 
graphical nature, we cannot expect to always have a one-size-fits-all 
accessible solution. At times it may be impractical to make a complex 
visualization accessible to all users. Rather, the ability to specify an 
equivalent resource may be better, and there may be more than one such 
resource available. Currently, HTML 5 does not allow for that. The 
ability to have the user specify the type of resource they want, and 
have the Web adapt to it, would give us a truly flexible Web. 

Getting the user preferences to the content can be done through a number 
of vehicles. My thought is that they can be delivered via the local 
storage now available in HTML 5. See the personalization roadmap for the 
Ubiquitous Web Activity: 
http://www.w3.org/TR/2009/NOTE-UWA-personalization-roadmap-20090409/
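To make the idea concrete, here is a minimal sketch of persisting a 
preference set with the HTML 5 Web Storage API. The key name and the 
preference vocabulary below are my own illustrative assumptions, not 
part of any specification; a real vocabulary would come out of the 
AccessForAll work discussed here.

```javascript
// Sketch: persisting an AccessForAll-style preference set via the
// Web Storage interface. The key name and preference fields are
// hypothetical, for illustration only.

const PREF_KEY = "a4a.preferences"; // assumed key name

// Save the user's preference set. `storage` is any object exposing
// the Web Storage interface (window.localStorage in a browser).
function savePreferences(storage, prefs) {
  storage.setItem(PREF_KEY, JSON.stringify(prefs));
}

// Load the preference set, falling back to an empty set.
function loadPreferences(storage) {
  const raw = storage.getItem(PREF_KEY);
  return raw ? JSON.parse(raw) : {};
}

// Example preference set (hypothetical vocabulary):
const examplePrefs = {
  language: "fr",   // preferred content language
  captions: true,   // require captioned video
  simplified: true  // prefer simplified alternatives
};
```

In a browser one would call `savePreferences(window.localStorage, 
examplePrefs)`; the same-origin restrictions of local storage are the 
security feature mentioned below.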

What we are looking to do in the Ubiquitous Web Applications working 
group is merge the AccessForAll user preferences with the delivery 
context of the device. This allows content delivered in the browser to 
adapt to the environment the user operates in as they go through their 
day. This goes beyond accessibility; it is about delivering a flexible 
Web to all users. 

At this moment we are taking the resource metadata and user preferences 
from the IMS Global Learning Consortium and boiling them down to a 
smaller core set that we could deliver through the browser, in version 
2.0, along with the device delivery context capabilities. 

Delivering the preferences through the browser, via local storage, 
creates a number of benefits for the user:

- The user has the option of passing them off to the domain content 
provider if they choose (leveraging the security features of HTML 5 
local storage).
- The user has full control of the user and device preferences passed.
- It makes devices, like an iPhone, very powerful, as the user can truly 
specify how Web content should be delivered to the device.
- By using the device as the preference delivery vehicle, the user can 
have content adapted to their environment, which would not be possible 
if these preferences were housed on a server away from the environment 
the user operates in.
- At times what is accessible may in fact be determined by the 
environment the user operates in (heavy background noise, low lighting 
conditions, etc.).

I was looking to have mashup servers associate content capability 
metadata with resources, but given the situation with canvas, video, 
object tags, etc., it would seem appropriate for HTML to have the 
capability to store the metadata with elements and have the browser 
render the content specified by the user.

I doubt we can get all of the metadata into HTML 5, but perhaps we can 
choose a small set that we could grow in a refresh. Somehow we need to 
have the browser choose which rendering gets displayed without the 
author having to write script to let the user choose. 
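As a sketch of the selection step a user agent might perform, the 
following matches resource metadata against a stored preference set and 
picks the best alternative. The metadata fields and scoring rules are 
assumptions of mine, loosely modeled on the AccessForAll vocabulary 
mentioned above; they are not a proposal.

```javascript
// Sketch: choosing among alternative resources by matching
// illustrative metadata against user preferences. All field names
// are hypothetical.

// Score a resource: count satisfied preference criteria; reject it
// outright (score -1) if it misses a hard requirement.
function scoreResource(resource, prefs) {
  let score = 0;
  if (prefs.language && resource.language === prefs.language) score += 1;
  if (prefs.captions && !resource.captioned) return -1; // hard requirement
  if (prefs.simplified && resource.simplified) score += 1;
  return score;
}

// Return the best-matching alternative, falling back to the first
// (default) resource when nothing scores above rejection.
function chooseResource(resources, prefs) {
  let best = resources[0];
  let bestScore = -1;
  for (const r of resources) {
    const s = scoreResource(r, prefs);
    if (s > bestScore) {
      best = r;
      bestScore = s;
    }
  }
  return best;
}
```

The point is that this logic lives in the user agent, driven by 
declarative metadata, rather than in per-page author script.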

Does this help?

> >
> > I also am not sure we have the API discussion put to bed. I don't 
> > know that we can discuss the API piece on the follow up call as I 
> > need to ask how product teams like the DOM writing approach vs. the 
> > total API approach. We also don't have:
> > - a caret position unless you specify contenteditable.
> > - a text selection attribute (aria-selected was not intended for 
> > text selection). We would need to expand that for rich text.
> >
> > What concerns me is that canvas authors may not like using the DOM 
> > for full implementations of things like rich text editing. If they 
> > were fine with it then I would be more than pleased.
> >
> 
> I agree that text editing (and in particular caret position and 
> selection) are a tricky issue. I know we have some volunteers working 
> on prototyping. Will one of the prototypes involve text editing done 
> with the canvas? We could use that to assess the need for additional 
> text-related accessibility APIs.
>
Yes. I believe that would get us far enough to address HTML 5. Between 
native HTML elements and ARIA we may be able to get by without 
additional APIs. 
 
> And finally: based on progress so far, can we pull in the deadline on 
> making a proposal to the HTML WG? Currently the action item on this is 
> due in December. I believe that with our current plan, we could make a 
> full proposal by the end of September. Would anyone object to giving 
> the HTML WG September 30th as a tentative deadline? Note: we can 
> always update the deadline again if we discover difficult problems in 
> the course of prototyping.
> 
If we can make good progress on addressing the issues above, and we can 
make headway on the ARIA integration issues (we have a few WAI PF needs 
to address with Ian), we may be able to move in that time frame. The 
reason I mention ARIA is that our canvas solution depends on it. We have 
some implicit semantics issues to address. 

Rich

> Regards,
> Maciej
> 
Received on Thursday, 27 August 2009 12:50:23 UTC
