Re: hit testing and retained graphics

On Tue, Jun 28, 2011 at 12:11 PM, Charles Pritchard <chuck@jumis.com> wrote:
> On 6/28/2011 11:58 AM, Tab Atkins Jr. wrote:
>>
>> On Mon, Jun 27, 2011 at 4:20 AM, Charles McCathieNevile
>> <chaals@opera.com>  wrote:
>>>
>>> On Thu, 23 Jun 2011 22:28:32 +0200, Tab Atkins Jr. <jackalmage@gmail.com>
>>> wrote:
>>>>
>>>> You are attempting to recreate a retained-mode API in an
>>>> immediate-mode API.  Why is "use SVG" not sufficient for this?
>>>
>>> Because people don't - they use canvas instead. If that were not the
>>> case, the whole effort to specify canvas would be solving a
>>> theoretical problem.
>>
>> That's not a useful answer.  <canvas> is used for lots of things, of
>> which only a subset are better done in a retained-mode API, of which
>> only a subset are reasonably handled by mapping clicks into a DOM
>> node.
>>
>> I elaborated my objection to this approach in a later email.  This
>> development thrust seems to be happening without any clear use-cases
>> to address, and with a preference for minimally-invasive edits to the
>> 2d canvas context.  These seem very likely to give a bad result that
>> doesn't solve anything well.
>>
>> I don't think we'll come up with a *good* result until we have clear
>> use-cases that we can then solve.
>
> Please elaborate on what defines a "clear" use-case. As far as I've
> seen, we've put forward many -clear- use cases.

The WHATWG wiki pages for Video Caption and Modal Dialog use cases
exemplify what is meant by compiling clear use-cases:
* <http://wiki.whatwg.org/wiki/Use_cases_for_timed_tracks_rendered_over_video_by_the_UA>
* <http://wiki.whatwg.org/wiki/Dialogs>

They examine existing usage to discover what features are important,
and give several examples of each.  This way we can tell directly
whether the solutions we're crafting are adequate, by attempting to
recreate the examples with the proposed solution.

Note that not all use-cases are necessarily solved.  For example, in
the captioning case, the "watermark" use-case was purposely not solved
by WebVTT.


> The responses I've seen have been rebuttals,
> asserting that future versions of XBL2 and SVG2 would be better matches for
> the use cases.

It's certainly possible that the best solutions to particular
use-cases involve those technologies.


> With all sincerity, I'd like to provide "clear use-cases" for your review.
> At this point, I've put forward Apple's VoiceOver on Mobile Safari and
> ViewPlus' hands-on learning products as use cases for setElementPath
> with element and z-index attributes.

I'm not sure what features of VoiceOver on Mobile Safari are
important, because I don't have a device that can run Mobile Safari.

For ViewPlus, I presume you're referring to the technologies shown in
the video at <http://www.viewplus.com/solutions/touch-audio-learning/>.
I can extract the following use-case from that:

1. A user should be able to indicate a portion of a complex image and
get a caption associated with that portion (possibly not visible in
the image).

Are there any other use-cases you believe are expressed in that video?
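
To make that use-case concrete, here is roughly what an author has to
do today with the existing 2D context (a sketch of my own, not taken
from your proposal): keep a side table of paths and captions, and
hit-test it by hand with isPointInPath.  The element id, the region
list, and the announceCaption helper are all made up for illustration;
presumably a setElementPath-style API would let the UA hold this
association itself.

  // Illustrative only: manual hit-testing for per-region captions.
  var canvas = document.getElementById('diagram');
  var ctx = canvas.getContext('2d');

  // Each region stores how to rebuild its path, plus its caption.
  var regions = [
    { caption: 'Aorta',
      trace: function (c) { c.rect(10, 10, 80, 40); } },
    { caption: 'Left atrium',
      trace: function (c) { c.rect(120, 10, 60, 60); } }
  ];

  canvas.addEventListener('click', function (event) {
    var box = canvas.getBoundingClientRect();
    var x = event.clientX - box.left;
    var y = event.clientY - box.top;

    // Walk regions front-most first, replaying each stored path and
    // hit-testing the click against it.
    for (var i = regions.length - 1; i >= 0; i--) {
      ctx.beginPath();
      regions[i].trace(ctx);
      if (ctx.isPointInPath(x, y)) {
        announceCaption(regions[i].caption);  // hypothetical helper
        return;
      }
    }
  }, false);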


> I've put forward serialization semantics (stringify) for the current
> path data as another means to make development -easier- on web authors
> and to make it easier to send path data between SVG and Canvas elements.

That's a solution to an unnamed problem.  Can you state the problem
you're trying to solve more directly?
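
If it helps, here is the kind of duplication I *guess* you mean (a
sketch of my own, with a made-up element id, not taken from your
messages): the same shape has to be authored twice, once as 2D-context
calls and once as an SVG path string, because neither form can be
serialized into the other.

  // Illustrative only: the same triangle authored twice, once as
  // 2D-context calls and once as an SVG path string.
  var ctx = document.getElementById('c').getContext('2d');
  ctx.beginPath();
  ctx.moveTo(10, 10);
  ctx.lineTo(90, 10);
  ctx.lineTo(50, 80);
  ctx.closePath();
  ctx.stroke();

  var svgPath =
      document.createElementNS('http://www.w3.org/2000/svg', 'path');
  svgPath.setAttribute('d', 'M 10 10 L 90 10 L 50 80 Z');  // same shape

If that is the problem, stating it in those terms, with examples of real
content that hits it, would make it much easier to evaluate solutions.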


> I understand you have a strong objection to having the UA dispatch
> pointer events directly to the shadow DOM. Can we treat that as an
> independent issue, one which also requires clear use cases?

I don't think I've expressed any objection along those lines.
However, talking about where mouse events are dispatched is definitely
on the "solution" side of things, and should be irrelevant at this
point (except as something to keep in mind as a potential avenue for
solutions).
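
Just so we're talking about the same thing, here is my understanding of
the manual version of that today (again only a sketch of mine, with
made-up ids): the author keeps real controls in the <canvas> fallback
content and forwards hits to them by hand.

  // Illustrative only: hand-rolled forwarding of canvas clicks to real
  // controls sitting in the <canvas> fallback content.
  var canvas = document.getElementById('app');
  var hitAreas = [
    { x: 10, y: 10, w: 100, h: 30,
      el: document.getElementById('save-button') }
  ];

  canvas.addEventListener('click', function (event) {
    var box = canvas.getBoundingClientRect();
    var x = event.clientX - box.left;
    var y = event.clientY - box.top;

    hitAreas.forEach(function (area) {
      if (x >= area.x && x <= area.x + area.w &&
          y >= area.y && y <= area.y + area.h) {
        area.el.focus();  // move focus to the fallback control
        area.el.click();  // re-dispatch the activation by hand
      }
    });
  }, false);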

~TJ

Received on Tuesday, 28 June 2011 20:38:24 UTC