Re: Reasons of using Canvas for UI design

On 7/28/2011 5:04 PM, David Singer wrote:
> On Jul 28, 2011, at 16:48 , Charles Pritchard wrote:
>
>> ...to get bounding box information
>> and to send focus and click events.
>>
>> ...manage the canvas subtree
> You seem to be assuming a bounding-box/sub-tree model, and my point is that that assumption does not and will not hold for all canvas-based applications; I give a thought-example of one where it does not.

I do have a focus on existing accessibility APIs. Current APIs assume some kind of location data. There are interactive regions in the canvas, and those areas can be described verbally.

The sub-tree model relates to semantic content (for example, "<ul><li><li>", or aria-owns/aria-labelledby, etc.).
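
For instance, a minimal sketch of such a sub-tree, kept as fallback content inside the canvas element -- the ids and descriptions here are purely illustrative:

  <canvas id="reactor" width="400" height="300">
    <!-- Fallback content doubles as the accessible sub-tree. -->
    <ul aria-label="Reactor regions">
      <li id="zone-edge" tabindex="0">Outer edge of the reactor</li>
      <li id="zone-core" tabindex="0" aria-describedby="core-desc">Reactor core</li>
    </ul>
    <p id="core-desc">Bio-reactor with a large density of green sludge.</p>
  </canvas>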

I do believe that accessibility APIs could/should be improved in the 
future. That's something
I'd like to explore in subsequent versions of ARIA.


>>> One thought experiment I took was this: the canvas represents a bio-reactor, in which various populations of algae, fungi, bacteria etc. are competing. The user can interact with it using a 'virtual pipette' -- adding to the pipette by picking up material in one place, and dropping material from the pipette somewhere else (e.g. by right and left click-and-hold). All the while, the organisms are reproducing, spreading, dying, shrinking, co-existing, etc. In this, there are no 'paths' to test against; rather the application is modelling a fluid situation. The user can learn what the effect of dropping various populations in the midst of others is. Apart from a legend "right-click and hold to pick up, left-click and hold to drop" (outside the canvas) how does the application convey what is being picked up, what's in the pipette, and what's going on in the reactor, to an accessibility-needing user?  "There is a check-box under the mouse, which says "Remember me"" comes nowhere close. This application is not path-based, is not using 'standard controls' and so on.
>> This is what the ARIA language was set up for, and what it will continue to work on.
>>
>> In your application, the placement of the materials is indeed based on spatial regions. They are path based.
> No, they are not.  There are no regions with well-defined borders that contain only one active element. Every point in the space has a mix, and the proportions in the mix vary continuously over the space.  You can pick up from one place, add that mix to the mix in your pipette, and drop some of that mix somewhere else. Where is there a path with well-defined borders?

There are regions: your example is certainly distinct from the use cases that I'm actively working to solve, but it does not fall so far outside of existing accessibility APIs and testing as to be indescribable with the vocabulary we currently have access to.

Your pipette may be in a particular location, for you to grab in the first place. It is in a region. Your material is in another region. The "look" or properties of the bio-reactor should be available -- even if it just says "Bio-reactor with a large density of green sludge".

http://www.w3.org/TR/WCAG20/
2.2 Provide users enough time to read and use content.
2.4 Provide ways to help users navigate, find content, and determine 
where they are.

Starting from my pipette's region, I move over to a region holding poison, then suck up some of that poison, and move over to the bio-reactor; I've now crossed three regions.

I drop the poison into the reactor; I'm informed that the poison has hit the reactor, and that the population count has started to drop. I know where I have dropped the poison in the reactor. Perhaps I dropped it at the edge. I can hit pause on the simulation now, and move over to the edge that I dropped it on. That's another defined region.

Now I've got four regions. I move slightly more inward, to the next concentric zone, and I'm told that a portion of the poison has reached this zone, and that the zone has changed colors.

I move further: no poison has reached this zone. I press a key to step forward in the simulation, and I continue to drag my pointer around the reactor, studying the effects of the poison.
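
A minimal sketch of how that walkthrough could be wired up, assuming a flat list of rectangular regions -- the region data, the descriptions, and the live-region element are all hypothetical, and a real application would hit-test against whatever model it already keeps:

  // Hypothetical rectangular regions, each with a spoken description.
  var regions = [
    { x: 10,  y: 10, w: 40,  h: 40,  desc: "Pipette, currently empty" },
    { x: 60,  y: 10, w: 40,  h: 40,  desc: "Poison supply" },
    { x: 120, y: 10, w: 200, h: 200, desc: "Bio-reactor, outer zone" }
  ];

  // An element elsewhere in the page with aria-live="polite".
  var live = document.getElementById("reactor-status");
  var lastRegion = null;

  document.getElementById("reactor").addEventListener("mousemove", function (e) {
    var rect = e.target.getBoundingClientRect();
    var x = e.clientX - rect.left, y = e.clientY - rect.top;
    var hit = null;
    for (var i = 0; i < regions.length; i++) {
      var r = regions[i];
      if (x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h) {
        hit = r;
        break;
      }
    }
    // Announce only when the pointer crosses into a different region.
    if (hit && hit !== lastRegion) {
      live.textContent = hit.desc;
    }
    lastRegion = hit;
  }, false);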

Even with an AT that is limited to bounding boxes -- and I certainly 
hope VoiceOver does better on the iPad --
I am able to get a good deal of information without vision, by using a 
pointer device.

And odds are, if I wanted this to be easy, I'd be using a touch screen.


Does this address your thought experiment? I am doing my best to understand your parameters, and to engage with your experiment, developing an eyes-free user interface to WCAG standards.

>> Consider this visual application you're speaking of -- how would it look in HTML 3.2? That's where I would start, for accessibility.
> It cannot be done there, at least I can't see how.
>

Simulations can be "done" in HTML forms. They are merely input and output data points. The simulations take in various data types -- ints, strings, floats, etc. -- and those can be represented in HTML 3.2.

They can be done from the console, too. Though that's not quite relevant.

What are the components in your experiment? There are pipettes. I might pick one up; that's a button. There is a collection of substances I can use; that's a select (or checkboxes, if you like, with input amounts).

There is the bio-reactor. That's going to be a table, giving you the sensory data that you have at hand. It may include other information, such as a timestamp and a refresh button to refresh the readings.
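
Sketched as plain HTML in that spirit -- the action URL, substances, zones, and readings below are all placeholders:

  <form action="/reactor" method="post">
    <p>
      Substance:
      <select name="substance">
        <option>Green algae</option>
        <option>Bacteria</option>
        <option>Poison</option>
      </select>
      Amount: <input type="text" name="amount" size="5"> ml
      <input type="submit" name="action" value="Pick up">
      <input type="submit" name="action" value="Drop">
    </p>
  </form>

  <table border="1">
    <caption>Reactor readings, step 42</caption>
    <tr><th>Zone</th><th>Population</th><th>Dominant organism</th></tr>
    <tr><td>Edge</td><td>12,000</td><td>Green algae</td></tr>
    <tr><td>Core</td><td>48,000</td><td>Bacteria</td></tr>
  </table>

  <form action="/reactor" method="get">
    <p><input type="submit" value="Refresh readings"></p>
  </form>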

And sure, there may even be a picture of the reactor. As far as scientific experiments go, the information recorded and reported by sensors is more important than the picture that you are given.


I do hope I'm speaking to your thought experiment. I've never worked 
with a reactor,
but if I were going to build an interface to one, this thread details 
how I would approach doing so.

I popped open this page to help me:
http://compost.css.cornell.edu/soda.html


-Charles

Received on Friday, 29 July 2011 00:28:08 UTC