Re: draft of hit testing on active regions (actually paths) in canvas

On Fri, Oct 28, 2011 at 1:02 AM, Charles Pritchard <chuck@jumis.com> wrote:
>> On Thu, Oct 27, 2011 at 7:27 PM, Charles Pritchard <chuck@jumis.com>
>> wrote:
>>>
>>> On 10/27/11 5:42 PM, Jonas Sicking wrote:
>>>>
>>>> Some feedback:
>>>>
>>>> Limiting this to only paths seems like an unfortunate limitation.
>>>> For example, it misses calls to drawImage, which I think are quite
>>>> common. I'd prefer a call where you say "all drawing operations
>>>> from this point should be considered drawing for element X", then let
>>>> the page do arbitrary drawing operations, then have a second call
>>>> which says "I'm now drawing for element Y" or "I'm now drawing for no
>>>> element".
>>>
>>> I'm concerned that the setDrawingFor proposal requires authors to run
>>> fill or stroke for it to take effect. This adds extra steps for authors
>>> setting up an interactive region:
>>>
>>> // Using setDrawingFor without painting anything visible
>>> ctx.save();
>>> ctx.setDrawingFor(element);
>>> ctx.fillStyle = 'rgba(0,0,0,0)'; // transparent black: a visual no-op
>>> ctx.fillRect(0,0,20,20);         // still marks the region as interactive
>>> ctx.setDrawingFor(null);
>>> ctx.restore();
>>
>> Why are the extra .save() and .restore() calls needed here? What would
>> go wrong if they were left out?
>>
>
> I'm setting the fillStyle in this example to transparent black. Essentially
> a no-op.
>
> If I'm doing this within a drawing loop, the current value of the fillStyle
> may be in use and may be used by subsequent code.
>
> So I'm saving my current state (my current fillStyle) prior to marking the
> region as interactive.
> I usually abstract the setting of fillStyle early on when patterns and
> gradients are involved.

Ah, so this is only needed because you are issuing these drawing
commands specifically to get a11y coverage.

This is not how I had envisioned people using this new API. I think
people should be able to use the drawing commands they are already
using and just make them accessible. Measured that way, setDrawingFor
seems likely to result in smaller modifications to existing code than
setPathForElement.
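
To make that concrete, a minimal sketch of the intended usage (checkbox
and its sprite are hypothetical page objects; the drawing calls are the
ones the page already makes):

ctx.setDrawingFor(checkbox);             // drawing is now "for" the checkbox
ctx.drawImage(checkboxSprite, 10, 10);   // these pixels forward events to it
ctx.fillText('Remember me', 40, 22);     // and so do these
ctx.setDrawingFor(null);                 // back to drawing for no element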

>>>> In general I think I prefer the API I proposed in [1]. Can you
>>>> describe what problems you were trying to solve with your changes
>>>> compared to that proposal?
>>>>
>>>> [1]
>>>>
>>>> http://lists.w3.org/Archives/Public/public-canvas-api/2011JulSep/0195.html
>>>
>>> My primary concern here is path fidelity. Apart from the extra steps I
>>> outlined earlier, would a mature setDrawingFor implementation keep all of
>>> the paths for accessibility?
>>>
>>> setPathForElement instructs the implementation to maintain the already
>>> constructed path on some kind of stack or other ordered object.
>>>
>>> setDrawingFor would -possibly- maintain a series of paths, as opposed
>>> to a series of subpaths.
>>>
>>> Basic implementations are most likely to just keep bounding box
>>> information in either case. More advanced implementations will maintain
>>> some kind of additional information about the interactive region.
>>
>> I don't think keeping just bounding box information should be
>> permitted. My idea is that when the user clicks a <canvas> which has
>> used the setDrawingFor API, we forward the click to the actual
>> element inside the canvas, similar to how labels work. I.e. any event
>> handlers registered on the element would fire. If the element is an
>
> Yes, I like that idea.
>
>> <input type=checkbox> then it would flip its "checkedness", etc. But
>> only pixels drawn while setDrawingFor is in effect act as a
>> forwarding area.
>
> This kind of implementation requires keeping a bitmap for each
> interactive element. It can be a small bitmap, just 1 bit per pixel.

That would be one way to implement it. Another implementation strategy
would be to just store vector information for each issued drawing
command. This information is then used to do hit testing internally in
the implementation. So for a path-based drawing command the
implementation would remember the path; for a drawImage command the
implementation would remember the coordinates of the destination
rectangle.

This is similar to how we implement image maps in Gecko where each
<area> is parsed into a set of coordinates which are then used to do
hit testing.
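
A minimal sketch of that bookkeeping, in pseudo-JS (internal to the UA,
not proposed API; pointInShape stands in for whatever geometry code the
implementation already has, such as its image map hit testing):

var regions = [];                        // in paint order, topmost last
function record(shape, element) {        // called for each drawing command
  if (element) regions.push({ shape: shape, target: element });
}
function hitTest(x, y) {                 // on click: find the element to forward to
  for (var i = regions.length - 1; i >= 0; i--)
    if (pointInShape(regions[i].shape, x, y))
      return regions[i].target;          // shape is a path or a drawImage rectangle
  return null;
}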

> This method works for simple UIs.

Why wouldn't it work for complex UIs?

> It looks kind of like this when implemented directly in canvas by authors
> (sketched after this quote): call mouseContext.getImageData(x, y, 1, 1),
> read the pixel color, and use that color as the key to look up the
> appropriate listener in an object of event handlers.
>
> This method is fine for authors, but I don't recommend it for UAs, and
> it's not that nice for ATs. It's actually helpful for sighted authors,
> sometimes, as they can set the mouseContext canvas to display visibly and
> see the interactive regions in color.
>
> A UA, or a more advanced app, will maintain a list of paths and use
> various indexing schemes to ensure that the list can grow and remain
> performant. It requires far less memory and is easily serialized. It
> does have a slight trade-off in CPU usage, of course.
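
For reference, a minimal sketch of the author-side color-picking technique
described above (all names here are illustrative; mouseContext is a second,
normally hidden canvas context kept the same size as the visible one, and
each region gets a unique key color in the 'rgb(r,g,b)' form used below):

var listeners = {};                      // pixel color -> event listener
function addRegion(color, listener, draw) {
  listeners[color] = listener;
  mouseContext.fillStyle = color;        // e.g. 'rgb(255,0,0)'
  draw(mouseContext);                    // trace the region in its key color
}
canvas.addEventListener('click', function (e) {
  var box = canvas.getBoundingClientRect();
  var px = mouseContext.getImageData(e.clientX - box.left,
                                     e.clientY - box.top, 1, 1).data;
  var fn = listeners['rgb(' + px[0] + ',' + px[1] + ',' + px[2] + ')'];
  if (fn) fn(e);
});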

Indeed, and a list of paths is exactly the implementation strategy I
would recommend for UAs. However, the API doesn't require any particular
implementation strategy, as long as it satisfies the desired behavior
(exact hit testing) and the a11y requirements (being able to find the
bounding box for screen magnifiers).

> It's far more likely that vendors would re-use their existing hit-testing
> code from SVG implementations.

Yup. Or the hit-testing code from image maps.

>> This way authors have a much greater incentive to use the
>> setDrawingFor API. I.e. they would get much simpler event handling for
>> all their users. The fact that we'd be able to use the same
>> information to drive screen magnifiers etc. is "just" a convenient side
>> effect from the author's point of view. This is the type of API which
>> has worked the best historically. I.e. bolt-on solutions which are
>> there only for a11y users have seen much less usage than APIs which add
>> enough semantics to help all users. (This is why we prefer that people
>> use HTML-based UIs rather than canvas-based ones, after all, right?)
>> (Note, I'm not saying that bolt-on is bad or should be avoided. But
>> when we can find better solutions, I think we should use them.)
>
> The setPathForElement method is providing the same incentive: simpler event
> handling for users, while also driving screen magnifiers and other spatial
> AT, such as AT running on touch-based devices.
>
> I'm afraid there may be a miscommunication, as I don't believe either of us
> have suggested a "bolt-on solution".
>
> We're looking at the same pointer semantics for both proposals: set up an
> interactive region to delegate pointer events to an element within the
> Canvas sub-tree.

Hmm... so is the intent for setPathForElement that it would forward
clicks to the appropriate element inside the canvas too? If so, it
doesn't seem like remembering just a bounding box would be enough for
the implementation.

If setPathForElement is indeed intended to work the same way as
setDrawingFor, then it simply seems like a question of how much
convenience we supply for page authors. I.e. do we request that they
construct paths for all drawn "things" that they want hit testing for,
or do we let them use any drawing command.

My preference would be the latter.
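
The two call patterns side by side, as a sketch (both functions are
proposals under discussion, not shipped API; button and buttonSprite are
hypothetical):

// setPathForElement: trace a path purely for hit testing
ctx.beginPath();
ctx.rect(10, 10, 80, 30);
ctx.setPathForElement(button);           // current path becomes button's region

// setDrawingFor: bracket the drawing code the page already has
ctx.setDrawingFor(button);
ctx.drawImage(buttonSprite, 10, 10);
ctx.setDrawingFor(null);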

>> Similarly, this would act as a way for people to more easily add
>> different tooltips for different parts of a canvas, or to get an
>> appropriate mouse cursor when the user is hovering over areas of a
>> canvas which draw pixels corresponding to an <a> anchor.
>
> Yes, the corresponding element may have a wealth of semantic information
> such as the full weight of the ARIA spec. That's why it's so important for
> pointer events to be supported. Keyboard events are well supported via
> drawFocusRing and associated focus management requirements.

Indeed.

>>> These are two very different methods; I like them both. setPathForElement
>>> is less expensive to implement; fewer lines of code changed. setDrawingFor
>>> could trap -more- information, relating more to SVG interop than to a11y.
>>>
>>> setPathForElement would simply use the current path, add it to a stack,
>>> and that stack would be accessible to the UA's accessibility APIs. It's
>>> feasible that the path would be serialized and sent to a supporting AT.
>>> Otherwise, it'd simply be used for its bounding box information. This
>>> requires no changes to existing Canvas methods, only the addition of
>>> new methods.
>>>
>>> setDrawingFor would require hooks to be added to most drawing methods.
>>> It may require additional logic for lineWidth when strokes are involved.
>>> An advanced implementation may collect drawing calls while setDrawingFor
>>> is active, serializing them into SVG and adding them to a Component DOM.
>>> This would be a lot of extra work on the CPU, but at this point
>>> super-computers fit in our pockets. It'd be reasonable for
>>> toDataURL('image/svg+xml') to return a scene graph built from the
>>> Component DOM.
>>
>> The fact that it requires hooks in all drawing methods is a feature
>> IMHO. It ensures that it's easy for the developer to make *all* pixels
>> drawn for an element clickable, no matter if the pixels are drawn
>> using a path-based API, or some other API.
>
> I'd imagine that "pixels" in this case includes fully transparent pixels.
>
> That is, when a user runs drawImage, they are filling out a rectangle,
> regardless of the content contained in the image call. Is that correct?

Yup.
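
I.e., as a sketch (el and icon are hypothetical), with setDrawingFor in
effect the entire 64x64 destination rectangle forwards events to el, even
if icon is mostly transparent pixels:

ctx.setDrawingFor(el);
ctx.drawImage(icon, 0, 0, 64, 64);       // whole destination rect is the region
ctx.setDrawingFor(null);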

>> I think we'll see significantly more usage if all the author needs to
>> do is sprinkle setDrawingFor calls over the code to mark which
>> drawing calls are made for which elements. If setPathForElement
>> requires authors to create new paths specifically just to mark bounding
>> boxes, then I think fewer people will use it.
>>
>
> setPathForElement is less work in the code bases I work on.

Same in Gecko. But I think it's more work for websites.

> I've shown a few examples of how and when it is more appropriate.
>
> Consider a simple button. In most cases of Canvas-based UI, the author
> need only convey the bounding box or outer shape of the button; the rest
> of the drawing calls are entirely presentational.
>
> That presentational information is certainly useful when/if serialized into
> an SVG document. I do feel that setDrawingFor has a strong tie to the
> component model being discussed in other lists.

In the button UI example you are talking about, why wouldn't the page
simply add a setDrawingFor(element) call before the code it already
has to draw the button, and maybe a setDrawingFor(null) after?

If it's really concerned about only getting the outermost bounding box
covered, it can move the setDrawingFor calls to be just around the
code that draws the button background.
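
E.g., a sketch (the draw* functions stand for the page's existing code):

ctx.setDrawingFor(buttonEl);
drawButtonBackground(ctx);               // only these pixels form the clickable region
ctx.setDrawingFor(null);
drawButtonDecorations(ctx);              // presentational; not recorded for hit testing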

> Here is a basic example of supporting both methods in existing clients:
>
> // NOTE: real-world use would include polyfill methods for legacy
> // clients (using CSS overlays).
>
> // setDrawingFor, guarded for legacy clients
> var hasDraw = 'setDrawingFor' in ctx;
> if (hasDraw) ctx.setDrawingFor(element);
> ctx.fillRect(0, 0, 100, 100);
> if (hasDraw) ctx.setDrawingFor(null);
>
> // setPathForElement, guarded for legacy clients
> var hasPath = 'setPathForElement' in ctx;
> ctx.beginPath();
> ctx.rect(0, 0, 100, 100);  // trace the rect as a path so it can be captured
> ctx.fill();
> if (hasPath) ctx.setPathForElement(element);
>
> // end.

I'm not sure what this example is intended to show.

> I can more easily introduce setPathForElement to existing Canvas projects as
> well as existing Canvas implementations. It's intended to operate with the
> same positive result as setDrawingFor -- pointer events are marshalled onto
> the target element.

Until someone speaks up and says "that proposal is too complex to
implement", I don't think we should worry about implementation
complexity; we should instead look at what's most useful for page
authors.

/ Jonas

Received on Friday, 28 October 2011 20:13:21 UTC