Re: Guidance for how best to implement smooth multi-touch behavior

From: Patrick H. Lauke <redux@splintered.co.uk>
Date: Mon, 09 Mar 2015 16:00:59 +0000
Message-ID: <54FDC3BB.7050806@splintered.co.uk>
To: public-pointer-events@w3.org
On 09/03/2015 15:12, Rick Byers wrote:
> Hi,
> In working on my InputDevice proposal
> <https://docs.google.com/a/chromium.org/document/d/1WLadG2dn4vlCewOmUtUEoRsThiptC7Ox28CRmYUn8Uw/edit#>,
> I was digging into Windows lower-level pointer API and was surprised to
> find that it bundles movements of multiple pointers into a single event
> (like touch events does, but unlike W3C Pointer events).

Is this related to Gesture Events 
(which obviously aren't part of Pointer Events, and have so far not been 
submitted for W3C standardisation, if they ever will be)?

> This got me thinking about the differences between these two models.
> How would we recommend developers implement smooth multi-touch handling
> with the pointer events API?  Eg. say I want to smoothly handle rotation
> and zooming by tracking two fingers.  Should I do my calculations on
> every pointermove event, or (for example) in a RAF callback each frame?

FWIW, I've been advising that any heavy calculations of this kind be 
done in a requestAnimationFrame (rAF) callback: only store the 
coordinates on each move event, then do the actual calculations from 
those stored coordinates in the scheduled rAF call. Sometimes I've even 
suggested some form of throttling/debouncing on top of that (including 
for touchmove-related handling), just to avoid swamping lower-end 
devices. I'd be interested to hear whether this is indeed a good 
approach, and what the potential drawbacks would be.
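For concreteness, here is a rough sketch of that pattern as I understand it (the element id, variable names, and helper function are all hypothetical, not from any spec): pointermove handlers only record the latest coordinates, and at most one rAF callback per frame does the rotation/zoom math.

```javascript
// Pure helper: scale factor and rotation (radians) between two
// two-finger snapshots, each an array of two {x, y} points.
function gestureDelta(start, current) {
  const dx0 = start[1].x - start[0].x;
  const dy0 = start[1].y - start[0].y;
  const dx1 = current[1].x - current[0].x;
  const dy1 = current[1].y - current[0].y;
  return {
    scale: Math.hypot(dx1, dy1) / Math.hypot(dx0, dy0),
    rotation: Math.atan2(dy1, dx1) - Math.atan2(dy0, dx0),
  };
}

// Browser-only wiring, guarded so the helper above stays usable
// outside a browser as well.
if (typeof document !== 'undefined') {
  const pointers = new Map(); // pointerId -> latest {x, y}
  let startSnapshot = null;   // positions when the second finger landed
  let rafPending = false;

  function onFrame() {
    rafPending = false;
    if (pointers.size === 2 && startSnapshot) {
      const { scale, rotation } =
        gestureDelta(startSnapshot, [...pointers.values()]);
      // ...apply scale/rotation here, e.g. via a CSS transform...
    }
  }

  const target = document.getElementById('zoomable'); // hypothetical element
  target.addEventListener('pointerdown', (e) => {
    pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
    if (pointers.size === 2) startSnapshot = [...pointers.values()];
  });
  target.addEventListener('pointermove', (e) => {
    if (!pointers.has(e.pointerId)) return;
    // Cheap per-event work: just record the coordinates...
    pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
    // ...and coalesce the heavy work into one callback per frame.
    if (!rafPending) {
      rafPending = true;
      requestAnimationFrame(onFrame);
    }
  });
  // pointerup/pointercancel cleanup omitted for brevity
}
```

The rafPending flag is what keeps several pointermove events in the same frame from scheduling redundant callbacks, which is the point of the pattern.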


Patrick H. Lauke

www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke
Received on Monday, 9 March 2015 16:01:23 UTC