- From: Rick Byers <rbyers@chromium.org>
- Date: Mon, 9 Mar 2015 11:12:20 -0400
- To: "public-pointer-events@w3.org" <public-pointer-events@w3.org>
- Message-ID: <CAFUtAY-pHczruebfGkYoO3R0ZWBBRcYaur+XC127h5Jgq2jMnw@mail.gmail.com>
Hi,

In working on my InputDevice proposal <https://docs.google.com/a/chromium.org/document/d/1WLadG2dn4vlCewOmUtUEoRsThiptC7Ox28CRmYUn8Uw/edit#>, I was digging into Windows' lower-level pointer API and was surprised to find that it bundles movements of multiple pointers into a single event (as touch events do, but unlike W3C Pointer Events). This got me thinking about the differences between these two models.

How would we recommend developers implement smooth multi-touch handling with the Pointer Events API? E.g. say I want to smoothly handle rotation and zooming by tracking two fingers. Should I do my calculations on every pointermove event, or (for example) in a RAF callback each frame?

If I do my calculations on every pointermove, I'm probably doing double the work necessary. All the work to compute the rotation/scale/position from the first pointermove in a pair is wasted, as I'll immediately get the second pointermove. Perhaps there are pathological examples where this would matter more (e.g. some many-touch painting app)?

If I do my calculations in a RAF callback, then I may be adding an extra frame of latency, depending on how the browser schedules pointer events relative to RAF and frame production. Perhaps the spec should define the ordering of RAF and pointer events so it's clear whether this is a good design choice or not?

Thanks,
Rick
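For concreteness, here is a minimal sketch of the RAF-coalescing variant being asked about: pointermove handlers only record the latest position per pointerId, and the rotation/scale math runs at most once per frame. This is not code from the proposal; the element id `gesture-area` and the `applyTransform` callback are illustrative assumptions.

```typescript
// Latest known position per active pointer, keyed by pointerId.
const positions = new Map<number, { x: number; y: number }>();
let rafPending = false;

// Assumed target element; any element with touch-action: none would do.
const target = document.getElementById("gesture-area")!;

target.addEventListener("pointerdown", (e: PointerEvent) => {
  target.setPointerCapture(e.pointerId);
  positions.set(e.pointerId, { x: e.clientX, y: e.clientY });
});

target.addEventListener("pointermove", (e: PointerEvent) => {
  if (!positions.has(e.pointerId)) return;
  // Just record the new position; defer the math to the next frame.
  positions.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (!rafPending) {
    rafPending = true;
    requestAnimationFrame(updateGesture);
  }
});

function endPointer(e: PointerEvent) {
  positions.delete(e.pointerId);
}
target.addEventListener("pointerup", endPointer);
target.addEventListener("pointercancel", endPointer);

// Runs at most once per frame, regardless of how many pointermove
// events arrived in between.
function updateGesture() {
  rafPending = false;
  if (positions.size < 2) return;
  const [a, b] = [...positions.values()];
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const distance = Math.hypot(dx, dy); // drives the zoom factor
  const angle = Math.atan2(dy, dx);    // drives the rotation
  applyTransform(distance, angle);     // hypothetical app-specific hook
}

function applyTransform(distance: number, angle: number) {
  // Placeholder: a real app would compare against the values captured at
  // gesture start and update its element's transform accordingly.
  console.log(`distance=${distance.toFixed(1)} angle=${angle.toFixed(3)}`);
}
```

Whether this actually avoids an extra frame of latency depends on exactly the scheduling question raised above: if pointer events are delivered just before the RAF callbacks for the same frame, the coalesced computation is both cheaper and equally fresh; if they are delivered after, the gesture lags by one frame.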
Received on Monday, 9 March 2015 15:13:13 UTC