
Guidance for how best to implement smooth multi-touch behavior

From: Rick Byers <rbyers@chromium.org>
Date: Mon, 9 Mar 2015 11:12:20 -0400
Message-ID: <CAFUtAY-pHczruebfGkYoO3R0ZWBBRcYaur+XC127h5Jgq2jMnw@mail.gmail.com>
To: "public-pointer-events@w3.org" <public-pointer-events@w3.org>
In working on my InputDevice proposal, I was digging into the low-level
Windows pointer API and was surprised to find that it bundles the movements
of multiple pointers into a single event (as Touch Events do, but unlike
W3C Pointer Events).

This got me thinking about the differences between these two models.  How
would we recommend developers implement smooth multi-touch handling with
the Pointer Events API?  E.g., say I want to smoothly handle rotation and
zooming by tracking two fingers.  Should I do my calculations on every
pointermove event, or (for example) in a rAF callback each frame?
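To make the two options concrete, here's a minimal sketch of the rAF-coalescing approach (the element wiring, handler names, and the `computeTransform` helper are all hypothetical, not from any spec): pointermove handlers only record the latest positions, and the rotation/scale math runs at most once per frame.

```javascript
// Sketch: coalesce pointermove updates and compute the transform once per frame.
const points = new Map();   // pointerId -> latest {x, y}
let startPoints = null;     // snapshot of the two starting positions (set on gesture start)
let rafPending = false;

function onPointerMove(e) {
  if (!points.has(e.pointerId)) return;
  points.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (!rafPending && points.size === 2) {
    rafPending = true;
    requestAnimationFrame(updateTransform);
  }
}

function updateTransform() {
  rafPending = false;
  const [a, b] = [...points.values()];
  // Rotation and scale relative to the gesture's starting finger positions.
  const { angle, scale } = computeTransform(startPoints, [a, b]);
  // ...apply angle/scale to the target element here...
}

function computeTransform([a0, b0], [a1, b1]) {
  // Vector between the two fingers, before and after the movement.
  const v0 = { x: b0.x - a0.x, y: b0.y - a0.y };
  const v1 = { x: b1.x - a1.x, y: b1.y - a1.y };
  const angle = Math.atan2(v1.y, v1.x) - Math.atan2(v0.y, v0.x);
  const scale = Math.hypot(v1.x, v1.y) / Math.hypot(v0.x, v0.y);
  return { angle, scale };
}
```

With the per-event alternative, `updateTransform` would instead be called directly from `onPointerMove`, once per pointer per frame.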

If I do my calculations on every pointermove, I'm probably doing double the
work necessary.  All the work to compute the rotation/scale/position from
the first pointermove of a pair is wasted, as I'll immediately get the
second pointermove and have to recompute.  Perhaps there are pathological
cases where this matters even more (e.g. a many-touch painting app)?
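The "double the work" intuition is easy to check with a tiny simulation (synthetic event counts, not real DOM events): assuming each of two fingers fires one pointermove per frame, the per-event strategy computes the transform twice per frame, and the first result of each pair is immediately overwritten.

```javascript
// Count transform computations over N frames for the two strategies,
// assuming both of two fingers report one pointermove per frame.
function countComputations(frames, pointerCount, strategy) {
  let computations = 0;
  for (let frame = 0; frame < frames; frame++) {
    if (strategy === 'per-event') {
      computations += pointerCount; // recompute on every pointermove
    } else {
      computations += 1;            // compute once, in the rAF callback
    }
  }
  return computations;
}
```

At 60 frames with two fingers, per-event does 120 computations to per-frame's 60; a many-touch painting app with ten active pointers would do ten times the necessary work.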

If I do my calculations in a rAF callback, then I may be adding an extra
frame of latency, depending on how the browser schedules pointer events
relative to rAF and frame production.  Perhaps the spec should define the
ordering of rAF callbacks and pointer events so it's clear whether this is
a good design choice or not?

Received on Monday, 9 March 2015 15:13:13 UTC
