
Architecture for Touch and Sensor events

From: dave penkler <dpenkler@gmail.com>
Date: Wed, 3 Mar 2010 17:18:53 +0100
Message-ID: <c5402fb41003030818h5f71b3b4h600267fe31a09e82@mail.gmail.com>
To: www-dom@w3.org
Repost from the WHATWG mailing list (updated).

I would like to solicit thoughts on the ideal level of abstraction for
event APIs for touch and sensors. Ideally, the architecture should allow
client applications to be written to an API that abstracts away specific
device input and sensor configurations as far as possible. One might
envisage touch emulation by mouse, or mouse emulation by touch, etc.

The chosen level of abstraction will have a significant impact on whether
user-interaction scripts function correctly across a broad range of devices.

Some random examples to illustrate different levels of API.

Low level: onTouchDown(), onTouchMove(), onTouchUp(), with each event
reporting and tracking the pressure and position of multiple fingers so that
"gesture recognition" can be done in the script. For each finger, the
position, pressure, and radius of contact are reported.
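To make the low-level option concrete, here is a minimal sketch of script-side gesture recognition. The handler names (onTouchDown etc.) and the per-finger fields (id, x, y, pressure, radius) are assumptions taken from the description above, not an existing API:

```javascript
// Hypothetical low-level handlers: each call reports the active fingers as
// { id, x, y, pressure, radius }. The script tracks fingers by id and does
// its own gesture recognition.
const fingers = new Map();

function onTouchDown(points) {
  for (const p of points) fingers.set(p.id, p);
}

function onTouchMove(points) {
  for (const p of points) fingers.set(p.id, p);
}

function onTouchUp(points) {
  for (const p of points) fingers.delete(p.id);
}

// Raw ingredient of a script-side pinch detector: the distance between
// two tracked fingers (null unless exactly the first two exist).
function fingerSpread() {
  const [a, b] = [...fingers.values()];
  if (!a || !b) return null;
  return Math.hypot(a.x - b.x, a.y - b.y);
}
```

A pinch gesture would then be recognised in the script by watching fingerSpread() shrink or grow across onTouchMove() calls.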

Mid level: onTap(), onDoubleTap(), onLongTouch(), onDrag(),
onPinch()... position reported, but no pressure.
Configuration for the time intervals used to recognise Tap, LongTouch, etc.
might be held in the system information [1] along with the device
capabilities.
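As a sketch of how such configurable intervals might work, the classifier below decides between Tap and LongTouch by duration, with the thresholds held in a configuration object. All names and values here are illustrative assumptions, not a proposed spec:

```javascript
// Illustrative recognition intervals; the post suggests these might be
// exposed via the system-info service [1].
const touchConfig = {
  tapMaxMs: 200,       // durations at or under this count as a Tap
  longTouchMinMs: 500, // durations at or over this count as a LongTouch
};

// Classify a completed touch by how long the finger stayed down.
function classifyTouch(durationMs, cfg = touchConfig) {
  if (durationMs <= cfg.tapMaxMs) return "Tap";
  if (durationMs >= cfg.longTouchMinMs) return "LongTouch";
  return "Press"; // in between: neither gesture fires
}
```

Keeping the thresholds in configuration rather than hard-coding them lets the same script behave sensibly on devices with different touch characteristics.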

High level: onRotate(), onScroll(), onResize()... only the recognised
gesture is reported.
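To illustrate what a high-level event such as onRotate() would hide from the script, here is the underlying computation: the change in angle of the line between two fingers across frames. The function name and point format are invented for illustration; a real high-level API would deliver the angle directly:

```javascript
// Rotation delta (radians) between two frames, each frame being the
// positions of the same two fingers as [{x, y}, {x, y}].
function rotationDelta(before, after) {
  const angle = ([a, b]) => Math.atan2(b.y - a.y, b.x - a.x);
  return angle(after) - angle(before);
}
```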

The complexity, and the number of problems to overcome to produce a clean
API, are much greater at higher levels of abstraction. An example of such an
API can be found at [2].

On a related note, it would be useful for tablets and smartphones to also
handle other sensors such as accelerometers: onShake(), etc.
I believe Palm has done some work to support this. At a lower level you can
also get raw accelerometer data. [3]
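As a sketch of how an onShake() event could be synthesised from the raw accelerometer data mentioned above: look for repeated spikes in acceleration magnitude beyond gravity. The sample format, threshold, and spike count are assumptions chosen for illustration:

```javascript
// Decide whether a batch of raw accelerometer samples (m/s^2, as
// {x, y, z}) looks like a shake: at least two samples whose magnitude
// deviates from gravity (~9.8 m/s^2) by more than the threshold.
function detectShake(samples, threshold = 15) {
  let spikes = 0;
  for (const s of samples) {
    const mag = Math.hypot(s.x, s.y, s.z);
    if (Math.abs(mag - 9.8) > threshold) spikes++;
  }
  return spikes >= 2;
}
```

A mid-level onShake() would hide this thresholding from the script, while the low-level API hands over the samples and leaves the policy to the application.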


[1] http://dev.w3.org/2009/dap/system-info/
Received on Thursday, 4 March 2010 09:47:19 UTC
