- From: Richard Schwerdtfeger <schwer@us.ibm.com>
- Date: Thu, 30 Jun 2011 13:39:44 -0500
- To: "Tab Atkins Jr." <jackalmage@gmail.com>
- Cc: Paul Bakaus <pbakaus@zynga.com>, public-canvas-api@w3.org, public-canvas-api-request@w3.org, Doug Schepers <schepers@w3.org>
- Message-ID: <OF21AFA288.6D06CBA1-ON862578BF.0064C4BE-862578BF.006683BA@us.ibm.com>
Rich Schwerdtfeger
CTO Accessibility Software Group

"Tab Atkins Jr." <jackalmage@gmail.com> wrote on 06/30/2011 12:53:34 PM:

> From: "Tab Atkins Jr." <jackalmage@gmail.com>
> To: Richard Schwerdtfeger/Austin/IBM@IBMUS
> Cc: public-canvas-api@w3.org, public-canvas-api-request@w3.org, Doug Schepers <schepers@w3.org>, Paul Bakaus <pbakaus@zynga.com>
> Date: 06/30/2011 12:54 PM
> Subject: Re: You Got Your SVG in my Canvas! Mmm, Delicious! (was: hit testing and retained graphics)
>
> On Thu, Jun 30, 2011 at 9:59 AM, Richard Schwerdtfeger <schwer@us.ibm.com> wrote:
>
> > "Tab Atkins Jr." <jackalmage@gmail.com> wrote on 06/30/2011 11:18:39 AM:
> >
> >> From: "Tab Atkins Jr." <jackalmage@gmail.com>
> >> To: Richard Schwerdtfeger/Austin/IBM@IBMUS
> >> Cc: public-canvas-api@w3.org, public-canvas-api-request@w3.org, Doug Schepers <schepers@w3.org>
> >> Date: 06/30/2011 11:19 AM
> >> Subject: Re: You Got Your SVG in my Canvas! Mmm, Delicious! (was: hit testing and retained graphics)
> >>
> >> On Thu, Jun 30, 2011 at 5:49 AM, Richard Schwerdtfeger <schwer@us.ibm.com> wrote:
> >> > Tab,
> >> >
> >> > You have been told more than once what problems we are trying to solve.
> >> >
> >> > 1. We need a vehicle to tell an assistive technology the position and bounds of an object on the drawing space.
> >> > 2. In EVERY operating system since 1994 this has been tied to retained-mode graphics information. In these systems hit testing was tied to the same information in retained-mode graphics used to supply platform accessibility APIs.
> >> > 3. Authors have been asking for hit testing.
> >>
> >> This is why I keep pounding on the "define the problems you're solving first" thing. You keep trying to jump straight to the solution. Your #1 and #3 are good problems to solve.

> > OK. I think I gave you the problem I am trying to solve for accessibility. EVERY platform accessibility API on EVERY platform provides the bounds of an accessible object to an AT for the reasons I stated. No additional problem is being stated here.
> >
> > Zynga and others want a faster way to do hit testing by the browser. They essentially have to do their own hit testing. To do hit testing you have to know where all the objects are. I have also provided you with some of the workarounds authors use.
> >
> > Zynga would like to minimize the number of DOM elements used in the canvas subtree. This is in conflict with accessibility because we need to have an accessible object produced from the DOM that represents drawing objects on canvas. Zynga believes accessibility is important, but their priority for their users is speed. So, the solution may be that one or more drawing paths are assigned a unique ID and those paths are each wired to one or more DOM elements that would process the events. The purpose of the ID is so that the DOM subtree element can be made aware of which drawing object received the hit.
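To make the Zynga piece concrete, here is roughly the shape of the thing being described. This is only a sketch: `setPathElement()` is a made-up name, not an existing or proposed API, and the element, ID, and coordinates are invented for illustration. The point is simply that one or more drawing paths get an ID and are wired to an element in the canvas subtree:

```ts
// Hypothetical sketch only: setPathElement() does not exist on any canvas
// 2d context. It stands in for "associate this drawing path, by ID, with a
// fallback DOM element in the canvas subtree."
interface HitTestableContext extends CanvasRenderingContext2D {
  setPathElement(pathId: string, element: Element): void;
}

const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d") as HitTestableContext;

// The fallback element already lives inside <canvas> and carries the
// role/name/state that the platform accessibility API sees,
// e.g. <button id="shovel">Shovel</button>.
const shovel = document.getElementById("shovel")!;

ctx.beginPath();
ctx.rect(120, 80, 32, 32);                  // where the shovel is drawn
ctx.setPathElement("shovel-path", shovel);  // hypothetical: wire path -> element
ctx.fill();

// Intent: the user agent routes pointer events that land inside the path to
// `shovel`, and can report the path's bounds as the bounds of the element's
// accessible object.
```

Several paths could be wired to the same element; the ID is only there so the element can tell which drawing object took the hit.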
> Again, define your problems. Making your average game, such as the ones produced by Zynga, accessible to the blind, for example, can *not* be accomplished by exposing an alternate subtree. It can theoretically be done, but only by exposing completely different interaction modes, which are most likely fairly tightly coupled with the design of the game itself. Different disabled subgroups require different interaction modes.

I am not trying to solve the gaming problem for the blind. And something you have a problem understanding is the main problem we are solving.

> There *are* accessibility problems that can be solved by exposing an alternate subtree (or possibly by other solutions). Until you list those problems, we have no way to tell how good your proposed solution is.

Alternative solutions depend on an infinite number of problems, and I am not inclined to boil an ocean with you. We are simply providing the ability to map visual objects and their relationships to platform accessibility APIs. New 508 government requirements in development require a whole litany of those. That is the problem we are trying to solve here. We are not trying to build new applications with different interaction models. I am not trying to solve accessible gaming, but if I were a low vision user I would need to be able to find the damn object on the screen with my magnifier! It does not matter what the interaction model is. That would be like you trying to play Farmville on a 60-inch screen looking through a shot glass butted up against the screen. Try it some time and go find the shovel.

> >> Note that your #2 is suggesting that, based on history, a correct solution is likely going to be based on a retained-mode API. *I 100% agree.* However, without a proper list of problems, there's no way to evaluate the possible solutions. They may not even all be solved by the same thing.

> > Well, now you have them, and I have added some additional requirements for Zynga.

> >> For example, hit-testing is a useful thing to solve. More importantly, it's:
> >>
> >> 1. Self-evidently useful to the author that needs it
> >> 2. Immediately apparent when you're doing it wrong
> >> 3. Completely safe to ignore if you don't need it for your app

> > Now you are providing requirements that make it more acceptable to you. That is the first time we have heard of these. We are not mind readers, and this dreadfully laborious way of communicating via snail mail is not a lot of help.

> O_o
>
> If these aren't already an intrinsic part of your thought process, something is very wrong. These are pretty basic API design principles.

> > 1. Authors needing to support accessibility will be required to use the hit-testing vehicle, with an explanation as to why.

> The vast majority of authors don't need to support accessibility. They find it perfectly acceptable to exclude the relatively small fraction of people that can't use their app, as they can then instead spend their effort on making the app better for the much larger group of people that are fine with the interaction model. Thus, this does not satisfy #1.

> > 2. At least for accessibility, if you are unable to determine the proper location of the canvas subtree DOM element associated with the drawing object on canvas, then a magnifier will be unable to zoom to that location. Also, if you create a path and it is not assigned to a valid DOM element you will definitely fail. Also, if in hit testing the associated DOM element is removed or hidden it should also fail.

> The vast majority of authors will never use a magnifier. If that's the only way for them to tell they've done it wrong, this does not satisfy #2.

> > 3. If you don't want to use hit testing on drawing objects, don't add them, and canvas will in fact route the hit events off the canvas element like it does now. If an author never introduces a drawing path that needs to be hit tested, then nothing changes for them.
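"Don't add them" just means authors keep doing what they already do today: listen for clicks on the <canvas> and hit test by hand. A minimal sketch of that status quo, using only APIs that exist now (the shape list, names, and coordinates are mine, purely for illustration):

```ts
// What authors do today: keep their own list of drawing objects and hit
// test by hand on every click. (Shape data is illustrative.)
const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;

interface Shape { id: string; x: number; y: number; w: number; h: number; }
const shapes: Shape[] = [
  { id: "shovel", x: 120, y: 80, w: 32, h: 32 },
  { id: "barn",   x: 200, y: 40, w: 96, h: 64 },
];

for (const s of shapes) {          // draw the scene
  ctx.beginPath();
  ctx.rect(s.x, s.y, s.w, s.h);
  ctx.fill();
}

canvas.addEventListener("click", (e) => {
  // Convert to canvas coordinates (assumes the canvas is not CSS-scaled).
  const box = canvas.getBoundingClientRect();
  const x = e.clientX - box.left;
  const y = e.clientY - box.top;

  // Rebuild each path and ask the context whether the point falls inside it.
  for (const s of shapes) {
    ctx.beginPath();
    ctx.rect(s.x, s.y, s.w, s.h);
    if (ctx.isPointInPath(x, y)) {
      console.log("hit:", s.id);
      break;
    }
  }
});
```

None of that bookkeeping is visible to the user agent, which is exactly why nothing can be handed to a magnifier or a screen reader.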
> If you do hit-testing yourself by listening for clicks on the <canvas>, users can't use a magnifier to zoom into the active area (something you implied was a problem to be solved in the previous point). Thus, this does not satisfy #3.

> > I don't see the harder-to-solve problem. I will say that the inability of an assistive technology to find the objects and assess their dimensions breaks interoperability with platform accessibility APIs on all platforms. Without that information a user agent cannot fulfill the accessibility API obligations needed to drive a magnifier or a screen reader that outputs Braille.

> I agree. That does not automatically imply that the correct solution is a minimally-invasive grafting of retained-mode features onto the canvas 2d context.
>
> ~TJ
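To be concrete about the accessibility piece one more time: the fallback DOM inside <canvas> can already carry role, name, state, and focus, and the author can even paint their own focus ring. What the user agent has no way to learn is where on the canvas each of those objects is. A minimal sketch of where things stand today, using only what exists now (the markup, IDs, and coordinates are illustrative):

```ts
// What exists today: the fallback DOM inside <canvas> carries the accessible
// semantics. Assumed (illustrative) markup:
//   <canvas id="farm" width="640" height="480">
//     <button id="shovel">Shovel</button>
//   </canvas>
const canvas = document.getElementById("farm") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;
const shovel = document.getElementById("shovel") as HTMLButtonElement;

// The author knows where the shovel is drawn...
const bounds = { x: 120, y: 80, w: 32, h: 32 };

shovel.addEventListener("focus", () => {
  // ...and can paint a focus ring there for sighted keyboard users.
  ctx.strokeRect(bounds.x, bounds.y, bounds.w, bounds.h);
});

// But there is no way to hand `bounds` to the user agent, so the accessible
// object for the button has no position or size in any platform
// accessibility API: a magnifier cannot zoom to it and a screen reader
// cannot route to it.
```

However the API ends up being shaped, that bounds mapping is the information the platform accessibility APIs require.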
Received on Thursday, 30 June 2011 18:40:23 UTC