Re: Review of caret and selection proposal in current canvas API draft

On 1/30/2011 10:09 AM, Benjamin Hawkes-Lewis wrote:
> On Sun, Jan 30, 2011 at 4:45 PM, Richard Schwerdtfeger
> <schwer@us.ibm.com>  wrote:
>
>>> + Suggestion 2: Define effects of non-canvas transformations on
>>> setCaretSelectionRect()
> [snip]
>
>>> An example of the subtleties here would be when a canvas element is
>>> subjected to warping using CSS 3D transforms:
>>>
>> This API is targeted at Canvas and not CSS and SVG. It sounds like you want
>> to extract this API and put it elsewhere.
> That would be good, if possible (James's draft suggests a generic
> focusPosition() method for magnifiers), but it's not what this suggestion is
> about. Rather this is about how behaviors external to the canvas element can
> change where canvas content appears on screen, and so change the information
> that needs to be given to magnifiers if they are going to correctly focus the
> current caret or selection.
I do think there's good work to be done there. My focus is on Canvas, but
I'd certainly like to see a11y cooperation across groups. Some of the
lessons and cases we've discussed here could be helpful to other groups.

>>> How must user agents compute a "rectangle" in such a case? For example,
>>> must they compute the rectangle within the canvas context, then compute
>>> the (transformed) pixel coordinates on the screen, then compute a
>>> rectangle that embraces all four corners?
>>>
>> You use the drawing path relative to the canvas's upper left position.
>> You then add the screen coordinates of the upper left corner of the
>> canvas. The coordinates of the rectangle are a best fit:
>>
>> Assuming coordinates are relative to the top left of a device context
>> (only OS/2 used Cartesian coordinates):
>>
>> - minimum top, minimum left = top left
>> - maximum bottom, minimum left = bottom left
>> - maximum bottom, maximum right = bottom right
>> - minimum top, maximum right = top right
>>
>> There's your rectangle.
> You *cannot* assume the screen coordinates of content drawn into a canvas
> context from the screen coordinates of the top left corner of the canvas
> element.
>
> For example, imagine you have a canvas element whose top left corner is
> at 5,5, and a selection rectangle at the top left of the canvas context
> (so also at 5,5) that is 10 wide and 10 tall. If the canvas element is
> not subject to any transforms, then we can compute that its bottom right
> is at screen coordinates 15,15. But now imagine the entire canvas element
> has been rotated 90 degrees clockwise using its top left corner as an
> anchor. If a magnifier draws focus between 5,5 and 15,15, it will miss
> the text it was supposed to be focusing, as on screen that text has been
> rotated out of the magnifier's focus. Things get even funkier if the
> transform applied to the canvas element (or an ancestor element) warps it
> rather than rotating it.
CSS transforms are part of, and defined by, CSS -- I think these are out
of scope for our discussion, and otherwise implied in a broad HTML5
context.

I'll look into changing the wording to better integrate with other
standards (such as CSS transforms).
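
To make the interaction concrete, here's a minimal sketch in TypeScript of
the "best fit" rectangle described above, applied after a canvas-to-screen
transform. It is illustrative only -- the Point and Matrix2D shapes and
the function names are my assumptions, not part of the draft API; the
matrix follows the CSS matrix(a, b, c, d, e, f) convention.

    interface Point { x: number; y: number; }

    // 2D affine transform, as in CSS matrix(a, b, c, d, e, f).
    interface Matrix2D {
      a: number; b: number; c: number; d: number; e: number; f: number;
    }

    function apply(m: Matrix2D, p: Point): Point {
      return { x: m.a * p.x + m.c * p.y + m.e,
               y: m.b * p.x + m.d * p.y + m.f };
    }

    // Transform all four corners of a canvas-space rectangle to screen
    // space, then take the axis-aligned rectangle embracing them.
    function screenBounds(x: number, y: number, w: number, h: number,
                          toScreen: Matrix2D) {
      const corners: Point[] = [
        { x, y }, { x: x + w, y }, { x, y: y + h }, { x: x + w, y: y + h },
      ].map(p => apply(toScreen, p));
      const xs = corners.map(p => p.x);
      const ys = corners.map(p => p.y);
      return { left: Math.min(...xs), top: Math.min(...ys),
               right: Math.max(...xs), bottom: Math.max(...ys) };
    }

    // Benjamin's example: a 10x10 selection at the canvas origin, with
    // the canvas placed at screen 5,5 and rotated 90 degrees clockwise
    // about its top left corner -- i.e. CSS matrix(0, 1, -1, 0, 5, 5).
    const r = screenBounds(0, 0, 10, 10,
                           { a: 0, b: 1, c: -1, d: 0, e: 5, f: 5 });
    // r = { left: -5, top: 5, right: 5, bottom: 15 } -- not the naive
    // 5,5 to 15,15 a magnifier would get by ignoring the transform.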

>>> + Suggestion 3: Define caretBlinkRate() on a canvas-independent interface
> I did not see a reply to this point. Do you agree or disagree this would be
> better?

I think it might be an alternative: in some of these instances, I'm
looking at how the Canvas context works independently of the HTML5 specs.

I'd imagine that if we have success defining a11y APIs within Canvas,
it'll be easier, later on, to get broad adoption in other standards, like
SVG. Remember, our use cases in Canvas can be quite different from those
in other standards -- there's bound to be push-back from groups without
specialized use cases.
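
For discussion's sake, here's a hypothetical TypeScript sketch of what a
canvas-independent home for caretBlinkRate() might look like. Nothing like
this is in the draft; the interface and class names are invented purely to
illustrate the alternative being raised.

    // A generic host interface that any rendering context (canvas, SVG,
    // etc.) could implement, rather than tying the method to
    // CanvasRenderingContext2D.
    interface AccessibleCaretHost {
      // Caret blink interval in milliseconds.
      caretBlinkRate(): number;
    }

    // The canvas context would be just one implementer among several.
    class CanvasCaretHost implements AccessibleCaretHost {
      caretBlinkRate(): number {
        return 530; // stub: a real UA would return the platform setting
      }
    }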


>>> + Suggestion 4: Allow privacy exemption to caretBlinkRate()
>>>
>>> caretBlinkRate() could be used for profiling users, so UAs must be
>>> allowed not to share this data if the user does not want to be
>>> profiled.
>>>
>> I don't see how a blink rate could be used to profile a user. People may just
>> not like a high blink rate.
> Their system defaults and their expressed preferences are
> information that can be used to track them.
>
> I refer you to the EFF's research into user fingerprinting:
>
> http://panopticlick.eff.org/
>
> If people with disabilities are more likely to have an unusual blink rate,
> then in addition to tracking them, their blink rate can be used to infer
> that they have a disability.
There are plenty of ways to fingerprint users beyond blink rate.

An "unusual" blink rate could also be used to do the opposite of what is
intended. That is, a site could "exploit" a blink rate of 1s to flash the
screen quickly, in an attempt to harass the end user. Authors can inflict
the same kind of harassment on their end users by detecting their browser
version (an old version may indicate the user is not computer savvy).

I certainly have concerns about privacy and security; I try to weigh them
against existing standards.

I'm up for adding a note somewhere identifying these a11y APIs as data
points that could be used for fingerprinting, as a hint to vendors.
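
As a minimal sketch of what such a vendor hint could amount to in
practice: if the user opts out of profiling, caretBlinkRate() reports a
common fixed default instead of the real system setting. The helper names
below (userAllowsProfiling, systemCaretBlinkRateMs) are hypothetical
stubs, not anything from the draft.

    const DEFAULT_BLINK_RATE_MS = 530; // a widespread platform default

    // Stubs standing in for UA-internal settings lookups.
    function userAllowsProfiling(): boolean { return false; }
    function systemCaretBlinkRateMs(): number { return 250; }

    function caretBlinkRate(): number {
      // Report the generic default unless the user has opted in to
      // exposing the real value, so an unusual rate can't single them
      // out.
      return userAllowsProfiling() ? systemCaretBlinkRateMs()
                                   : DEFAULT_BLINK_RATE_MS;
    }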

Received on Monday, 31 January 2011 19:22:23 UTC