Re: Drawing Tablets

On 8/3/2012 10:09 AM, Florian Bösch wrote:
> On Fri, Aug 3, 2012 at 6:54 PM, Charles Pritchard <chuck@jumis.com> wrote:
>
>     As I understand it, the browsers have mature event queues, and
>     everything comes with a timestamp.
>     We've got requestAnimationFrame as our primary loop for processing
>     the queue.
>
>     To clear a queue (so to speak), I believe one simply removes any
>     associated handlers.
>
> Yeah but that's not how it should work.
> - Assuming requestAnimationFrame may be the wrong choice when other 
> events are used to refresh the logic or simulation (such as when 
> redraws only happen upon input)

It's a complex wait mechanism:
onload -> onmousemove -> requestAnimationFrame [until idle] -> onmousemove.
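
Roughly, as a sketch (the idle timeout and the processQueue/draw
helpers here are placeholders, not what we shipped):

var rafId = null;
var lastInputTime = 0;

function processQueue() { /* drain and handle buffered input */ }
function draw() { /* repaint the canvas */ }

function tick() {
  processQueue();
  draw();
  if (Date.now() - lastInputTime < 250) {
    rafId = requestAnimationFrame(tick); // input still arriving
  } else {
    rafId = null; // idle: sleep until the next mousemove
  }
}

window.addEventListener('load', function () {
  document.addEventListener('mousemove', function (e) {
    lastInputTime = Date.now();
    // Wake the loop only if it has gone idle.
    if (rafId === null) rafId = requestAnimationFrame(tick);
  });
});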

Until we worked with high-resolution events, we simply hooked into 
onmousemove. At some point, it became clear that was inefficient,
and we tested out setTimeout loops (rAF was not available at the time).

> - Dispatching events individually makes it difficult to work out 
> correlated events, and since the device landscape changes constantly, 
> it'd be easier for developers to adopt new devices if they could do 
> their own correlation according to their needs. For that they need 
> the buffer of queued events to work through.

What kind of correlated events are you thinking of?

> - Depending on the use, there are differing "granularities" that a 
> developer might want to implement. This is usually done by filtering 
> the queue according to the application's needs; if events are received 
> individually, that just leads to the developer re-implementing the 
> queue so he can filter it at the appropriate time.

Some of this was discussed as part of the Sensor API proposal. It seems 
that work is being shuffled into a web intents mechanism.
I've not yet experimented with high volume/precision data over postMessage.
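
If a developer does end up re-implementing the queue, I'd expect
something minimal like this (a sketch; the distance filter is just one
example of a granularity policy):

var queue = [];

document.addEventListener('mousemove', function (e) {
  // Buffer instead of acting: keep what's needed for later filtering.
  queue.push({ t: e.timeStamp, x: e.clientX, y: e.clientY });
});

function drainQueue(minDelta) {
  // Drop points closer than minDelta pixels to the last kept point.
  var kept = [], last = null;
  for (var i = 0; i < queue.length; i++) {
    var p = queue[i];
    if (!last || Math.abs(p.x - last.x) + Math.abs(p.y - last.y) >= minDelta) {
      kept.push(p);
      last = p;
    }
  }
  queue.length = 0;
  return kept;
}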

>     Are there any times when mouse emulation is not executed with a pen?
>     As much as I've used my pen, it always "moves the mouse".
>     Capture to area is something that's being handled for pointer
>     events. I recently read that support for mouse capture is being
>     contributed to WebKit; it is, at least, being actively authored.
>     The capture model is based on the full screen request model.
>
> The mouse emulation mode is appropriate when: 1) the tablet is used to 
> interact with a larger area containing interface elements AND 2) it is 
> used singularly AND 3) the screen dimensions match the tablet dimensions.
>
> It is not appropriate when 1) the tablet is used to interact with a 
> limited area (such as a drawing surface) exclusively OR 2) the tablet 
> is used ambidextrously in conjunction with other pointing devices 
> and/or multiple pens OR 3) the screen dimensions do not match the 
> tablet dimensions.

Item 3 seems a matter of configuration; I don't think we have anything 
to do on that one.

Item 2 is fun stuff, but at present, only the touch API has touched on 
the concept of multiple pointers.

Item 1 we can do with pointer lock.
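
Something like this sketch (unprefixed pointer lock API; the
'drawing-surface' element and the app-maintained coordinates are
assumptions here):

var canvas = document.getElementById('drawing-surface');
var x = 0, y = 0;

canvas.addEventListener('click', function () {
  canvas.requestPointerLock(); // confine input to the drawing surface
});

document.addEventListener('mousemove', function (e) {
  if (document.pointerLockElement !== canvas) return;
  // While locked we get deltas, not absolute positions, so the app
  // maintains its own coordinates within the surface.
  x += e.movementX;
  y += e.movementY;
});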

You do bring up a good point: if the web platform did support 
concurrent/multiple pointer devices, it'd be nice if the pointer lock 
API were aware of that situation.
As I understand it, the new release of Windows does have mature support 
for multiple pointers. Support has been available for some time.
The web platform is falling a bit behind in this area. Of course, 
browsers haven't caught up with pen events yet, and those have been 
around for decades.

>
> Regarding #1, this situation arises naturally when drawing is engaged 
> within a limited area on screen that is considerably smaller than the 
> screen itself, which would result in the artist having to rely on a 
> tiny area of his large tablet to draw, while most of the tablet's 
> surface lies unused. GIMP implements capture modes for the drawing 
> area, although the implementation is lacking and cannot be switched 
> seamlessly. Photoshop implements capture to area which can be switched 
> seamlessly between mouse control and drawing-utensil-on-surface 
> control.
>
> The analogy to pointer lock is fitting; however, the conflation of 
> fullscreen+pointerlock is not appropriate, and it lacks the affine 
> transform aspect of a capture-to-area mode.

I suspect the affine transform is something the author ought to apply 
to nice raw data themselves.
They can use something like the CSSMatrix() object or other maths to do 
the transforms.

With a complex Canvas drawing surface, I've had to do about 3 levels of 
transforms anyway.

function onMightyPenEvent(e) {
  // myMatrix: whatever transform helper the app maintains
  coordsForNextStep = myMatrix.transform(e.arbitraryX, e.arbitraryY);
}

Received on Friday, 3 August 2012 17:21:49 UTC