- From: Patrick H. Lauke <redux@splintered.co.uk>
- Date: Mon, 09 Mar 2015 16:00:59 +0000
- To: public-pointer-events@w3.org
On 09/03/2015 15:12, Rick Byers wrote:
> Hi,
> In working on my InputDevice proposal
> <https://docs.google.com/a/chromium.org/document/d/1WLadG2dn4vlCewOmUtUEoRsThiptC7Ox28CRmYUn8Uw/edit#>,
> I was digging into Windows' lower-level pointer API and was surprised to
> find that it bundles movements of multiple pointers into a single event
> (like touch events do, but unlike W3C Pointer Events).

Is this related to Gesture Events
https://msdn.microsoft.com/en-gb/library/ie/dn433243%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
(which obviously aren't part of Pointer Events, and have so far not - if
ever - been submitted for W3C standardisation)?

> This got me thinking about the differences between these two models.
> How would we recommend developers implement smooth multi-touch handling
> with the pointer events API? E.g. say I want to smoothly handle rotation
> and zooming by tracking two fingers. Should I do my calculations on
> every pointermove event, or (for example) in a RAF callback each frame?

FWIW, I've been giving advice to do any heavy calculations of this kind
using RAF (only storing coordinates on each movement, and then actually
doing the calculations based on those coordinates in the scheduled RAF
call), and sometimes even using some form of throttling/debouncing (even
for handling touchmove-related stuff), just to avoid swamping lower-end
devices. I'd be interested to hear if this is indeed a good approach, and
what the potential drawbacks would be.

http://patrickhlauke.github.io/getting-touchy-presentation/?full#135

P
--
Patrick H. Lauke
www.splintered.co.uk | https://github.com/patrickhlauke
http://flickr.com/photos/redux/ | http://redux.deviantart.com
twitter: @patrick_h_lauke | skype: patrick_h_lauke
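The store-coordinates-then-compute-on-RAF pattern described in the message above might be sketched as follows. This is only an illustrative sketch: the `makeFrameThrottler` helper, the `schedule` parameter (standing in for `requestAnimationFrame` in a browser), and all other names are invented for this example, not taken from the post.

```javascript
// Sketch of the advice in the post: on each pointermove, only store the
// latest coordinates cheaply; run the heavy calculation at most once per
// animation frame. All names here are illustrative assumptions.

function makeFrameThrottler(schedule, compute) {
  let latest = null;      // most recently stored coordinates
  let scheduled = false;  // is a frame callback already pending?
  return function onMove(coords) {
    latest = coords;      // cheap work on every event
    if (!scheduled) {
      scheduled = true;
      schedule(() => {
        scheduled = false;
        compute(latest);  // heavy work, once per scheduled frame
      });
    }
  };
}

// In a browser, wiring it up might look roughly like this
// (element id "stage" is an assumption for the example):
//   const onMove = makeFrameThrottler(
//     cb => requestAnimationFrame(cb),
//     ({ x, y }) => { /* rotation/zoom calculations here */ }
//   );
//   document.getElementById('stage').addEventListener('pointermove',
//     e => onMove({ x: e.clientX, y: e.clientY }));
```

Because only the latest coordinates survive until the frame callback runs, many pointermove events within one frame collapse into a single calculation, which is the "avoid swamping lower-end devices" effect the post aims for.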
Received on Monday, 9 March 2015 16:01:23 UTC