Re: draft of hit testing on active regions (actually paths) in canvas

> On Thu, Oct 27, 2011 at 7:27 PM, Charles Pritchard <chuck@jumis.com> wrote:
>> On 10/27/11 5:42 PM, Jonas Sicking wrote:
>>> Some feedback:
>>>
>>> Limiting to only allowing paths seems like an unfortunate limitation.
>>> For example it misses calls to drawImage which I think is quite
>>> common. I'd rather prefer a call where you say "all drawing operations
>>> from this point should be considered drawing for element X", then let
>>> the page do arbitrary drawing operations, then have a second call
>>> which says "I'm now drawing for element Y" or "I'm now drawing for no
>>> element".
>> I'm concerned that the setDrawingFor proposal requires that authors run fill
>> or stroke to operate. This would require additional steps for authors
>> setting an interactive region:
>>
>> // Using setDrawingFor without painting
>> ctx.save();
>> ctx.setDrawingFor(element);
>> ctx.fillStyle = 'rgba(0,0,0,0)';
>> ctx.fillRect(0,0,20,20);
>> ctx.setDrawingFor(null);
>> ctx.restore();
> Why are the extra .save() and .restore() calls needed here? What would
> go wrong if they were left out?
>

I'm setting the fillStyle in this example to transparent black,
essentially a no-op.

If I'm doing this within a drawing loop, the current value of the 
fillStyle may be in use and may be used by subsequent code.

So I'm saving my current state (my current fillStyle) prior to marking 
the region as interactive.
I usually abstract the setting of fillStyle early on when patterns and 
gradients are involved.



>> // Using setPathForElement without painting
>> ctx.beginPath();
>> ctx.rect(0,0,20,20);
>> ctx.setPathForElement(element);
>>
>>
>>> What is the use case for the zIndex argument? The actual pixel drawing
>>> operations haven't had a need for that so far and instead rely on the
>>> painter's algorithm. It seems better to me to have a direct mapping
>>> between the drawing operations and the accessibility API.
>> It may be unnecessary. I'll do some deep thinking.
>> As you said, authors can typically manage these things.
> They already are if they are using canvas, no?

Yes, but hit testing is often managed in separate code paths.
For instance, we may have a draw and then a draw_mouse method.
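
That split can be sketched as follows, with illustrative names (scene,
draw, and drawMouse are not from either proposal): the visible pass
paints bottom-up per the painter's algorithm, while the hit-test pass
walks the same list top-down, which is part of why an explicit zIndex
may turn out to be redundant.

```javascript
// The split code paths described above, sketched with illustrative names.
// The visible pass paints in list order per the painter's algorithm; the
// hit-test pass walks the same list in reverse, so the topmost item wins.
var scene = [
  { id: 'background', contains: function (x, y) { return true; } },
  { id: 'button', contains: function (x, y) {
      return x >= 10 && x < 110 && y >= 10 && y < 40;
    } }
];

function draw(ctx) {
  // paint bottom-up: later entries cover earlier ones
  scene.forEach(function (item) {
    // item-specific drawing calls against ctx would go here
  });
}

function drawMouse(x, y) {
  // hit test top-down: iterate in reverse paint order
  for (var i = scene.length - 1; i >= 0; i--) {
    if (scene[i].contains(x, y)) return scene[i].id;
  }
  return null;
}
```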

I'll look into this further and see if there is justification for zIndex.


>>> In general I think I prefer the API I proposed in [1]. Can you
>>> describe what problems you were trying to solve with your changes
>>> compared to that proposal?
>>>
>>> [1]
>>> http://lists.w3.org/Archives/Public/public-canvas-api/2011JulSep/0195.html
>> My primary concern here is path fidelity. Apart from the extra steps I
>> outlined earlier, would a mature setDrawingFor implementation keep all of
>> the paths for accessibility?
>>
>> setPathForElement instructs the implementation to maintain the already
>> constructed path on some kind of stack or other ordered object.
>>
>> setDrawingFor would -possibly- maintain a series of paths as opposed to
>> a series of subpaths.
>>
>> Basic implementations are most likely to just keep bounding box information
>> in either case. More advanced implementations will maintain some kind of
>> additional information about the interactive region.
> I don't think keeping just bounding box information should be
> permitted. My idea is that when the user clicks a <canvas> which has
> used the setDrawingFor API, then we forward the click to the actual
> elements inside the canvas. Similar to how labels work. I.e. any event
> handlers registered on the element would fire. If the element is a

Yes, I like that idea.

> <input type=checkbox> then it would flip its "checkedness" etc. But
> only pixels drawn while setDrawingFor is in effect acts as a
> forwarding area.

This kind of implementation requires keeping a bitmap for each 
interactive element.
It can be a small bitmap, just 1 bit per pixel.

This method works for simple UIs.

It looks something like this, when implemented directly in canvas by
authors: call mouseContext.getImageData(x, y, 1, 1), read the pixel
color, and use that color to look up the appropriate listener in an
event object.

This method is fine for authors, but I don't recommend it for UAs, and
it's not that nice for ATs.
It's actually helpful for sighted authors at times, as they can make
the mouseContext canvas display visibly and see the interactive
regions in color.

A UA, or a more advanced app, will maintain a list of paths and use
various indexing schemes to ensure that list can grow and remain
performant. This requires far less memory and is easily serialized,
with a slight trade-off in CPU usage, of course.

It's far more likely that vendors would re-use their existing 
hit-testing code from SVG implementations.
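
The path-list scheme can be sketched as follows; bounding boxes stand
in for full path data here, and the names (regions, registerPath,
hitTest) are illustrative rather than from either proposal:

```javascript
// Sketch of the path-list scheme: an ordered list of interactive regions,
// tested topmost-first. Bounding boxes stand in for serialized path data.
var regions = [];

function registerPath(element, box) {
  // "send now": snapshot the current path (here, just its bounding box)
  regions.push({ element: element, box: box });
}

function hitTest(x, y) {
  // later registrations are painted on top, so walk the list in reverse
  for (var i = regions.length - 1; i >= 0; i--) {
    var b = regions[i].box;
    if (x >= b.x && x < b.x + b.w && y >= b.y && y < b.y + b.h) {
      return regions[i].element;
    }
  }
  return null;
}
```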

> This way authors have a much greater incentive to use the
> setDrawingFor API. I.e. they would get much simpler event handling for
> all their users. The fact that we'd be able to use the same
> information to drive screen magnifiers etc is "just" a convenient side
> effect from the authors point of view. This is the type of API which
> has worked the best historically. I.e. bolt-on solutions which are
> there only for a11y users has seen much less usage than APIs which add
> enough semantics to help all users. (This is why we prefer if people
> use HTML based UIs, rather than canvas based ones after all, right?)
> (Note, i'm not saying that bolt-on is bad or should be avoided. But
> when we can find better solutions, I think we should use them).

The setPathForElement method is providing the same incentive: simpler 
event handling for users, while also driving screen magnifiers and other 
spatial AT, such as AT running on touch-based devices.

I'm afraid there may be a miscommunication, as I don't believe either of
us has suggested a "bolt-on solution".

We're looking at the same pointer semantics for both proposals: set up
an interactive region to delegate pointer events to an element within
the Canvas sub-tree.

> Similarly, this would act as a way for people to more easily add
> different tooltips for different parts of a canvas, or to get an
> appropriate mouse cursor when the user is hovering areas of a canvas
> which draws pixels corresponding to an <a> anchor.


Yes, the corresponding element may have a wealth of semantic information 
such as the full weight of the ARIA spec. That's why it's so important 
for pointer events to be supported. Keyboard events are well supported 
via drawFocusRing and associated focus management requirements.

>> These are two very different methods, I like them both. setPathForElement is
>> less expensive to implement; fewer lines of code changed. setDrawingFor could
>> trap -more- information, relating more to SVG interop than to a11y.
>>
>> setPathForElement would simply use the current path, add it to a stack, and
>> that stack would be accessible by the UA accessibility APIs. It's feasible
>> that the path would be serialized and sent to a supporting AT. Otherwise,
>> it'd simply be used for its bounding box information. This requires no
>> changes in existing Canvas methods, only the addition of new methods.
>>
>> setDrawingFor would require hooks to be added to most drawing methods. It
>> may require additional logic for strokeWidth. An advanced implementation may
>> collect drawing calls while setDrawingFor is running, serializing them into
>> SVG and adding them into a Component DOM. This would be a lot of extra work
>> on the CPU. At this point super-computers fit in our pockets. It'd be
>> reasonable that toDataURL('image/svg+xml') would return a scene-graph from
>> the Component DOM.
> The fact that it requires hooks in all drawing methods is a feature
> IMHO. It ensures that it's easy for the developer to make *all* pixels
> drawn for an element clickable, no matter if the pixels are drawn
> using a path-based API, or some other API.

I'd imagine that "pixels" in this case includes fully transparent pixels.

That is, when a user runs drawImage, they are filling out a rectangle, 
regardless of the content contained in the image call. Is that correct?


> I think we'll see significantly more usage if all the author needs to
> do is to sprinkle setDrawingFor calls over the code to mark which
> drawing calls are made for which elements. If setPathForElement
> requires authors to create new paths specifically just to mark bounding
> boxes then I think fewer people will use it.
>

setPathForElement is less work in the code bases I work on.

I've shown a few examples of how and when it is more appropriate.

Consider a simple button. In most cases of Canvas-based UI, the author
need only convey the bounding box of the button, or the outer shape of
the button; the rest of the drawing calls are entirely presentational.
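
To make that concrete, here is a sketch of the button case with a
stubbed context (element stands in for a real node in the canvas
sub-tree, and the calls array only records method names for
illustration; just the final beginPath/rect/setPathForElement sequence
is the proposed addition):

```javascript
// The simple-button case with a stubbed context, so the call sequence is
// visible without a real CanvasRenderingContext2D.
var calls = [];
var ctx = {
  fillRect: function (x, y, w, h) { calls.push('fillRect'); },
  fillText: function (t, x, y) { calls.push('fillText'); },
  beginPath: function () { calls.push('beginPath'); },
  rect: function (x, y, w, h) { calls.push('rect'); },
  setPathForElement: function (el) { calls.push('setPathForElement:' + el); }
};
var element = 'button#ok'; // stand-in for a DOM node in the canvas sub-tree

// purely presentational drawing: background, label, highlights, etc.
ctx.fillRect(10, 10, 100, 30);
ctx.fillText('OK', 20, 32);

// the one extra step: convey the button's outer shape for hit testing
ctx.beginPath();
ctx.rect(10, 10, 100, 30);
ctx.setPathForElement(element);
```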

That presentational information is certainly useful when/if serialized 
into an SVG document. I do feel that setDrawingFor has a strong tie to 
the component model being discussed in other lists.


Here is a basic example of supporting both methods in existing clients:

// NOTE: real-world use would include polyfill methods for legacy clients (using CSS overlays).

// setDrawingFor for legacy clients
var hasDraw = Boolean('setDrawingFor' in ctx);
if(hasDraw) ctx.setDrawingFor(element);
ctx.fillRect(0,0,100,100);
if(hasDraw) ctx.setDrawingFor(null);

// setPathForElement for legacy clients
var hasPath = Boolean('setPathForElement' in ctx);
ctx.beginPath();
ctx.rect(0,0,100,100); // rect() leaves the path current; fillRect() would not
ctx.fill();
if(hasPath) ctx.setPathForElement(element);

// end.

I can more easily introduce setPathForElement to existing Canvas 
projects as well as existing Canvas implementations. It's intended to 
operate with the same positive result as setDrawingFor -- pointer events 
are marshalled onto the target element.

setDrawingFor has a "record this" semantic: record the following
drawing calls into an SVG document which is attached to the
component model DOM. I suggest putImageData be excluded for the time being.

setPathForElement is more of a "send now" semantic: send the current
path data along with the element. The format of that path data is
implementation-specific. There is already a precedent set for this with
the drawFocusRing(element) semantics.



-Charles

Received on Friday, 28 October 2011 08:03:25 UTC