Architecture for Touch and Sensor events

Repost from WHATWG mailing list (updated).
I would like to solicit thoughts on the ideal level of abstraction for
event APIs for touch and sensors. Ideally, the architecture should allow
client applications to be written against an API that abstracts specific
device input and sensor configurations as far as possible. One might
envisage touch emulation by mouse, or mouse emulation by touch, etc.

The level of the API will have a significant impact on whether
user-interaction scripts function correctly across a broad range of devices.

Some random examples to illustrate different levels of API.

Low level: onTouchDown(), onTouchMove(), onTouchUp(), with each reporting and
tracking the pressure and position of multiple fingers so that "gesture
recognition" can be done in the script. For each finger, position, pressure
and radius of contact are reported.
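
For illustration, here is a minimal sketch (in TypeScript) of how a script
might track fingers on top of such a low-level API, loosely modelled on the
touchstart/touchmove/touchend events WebKit exposes; the force and radiusX
fields are assumptions about what a spec might report, not settled names:

    // Track each active finger by its identifier so that "gesture
    // recognition" can be layered on the raw stream in script.
    interface FingerState { x: number; y: number; force: number; radius: number; }

    const fingers = new Map<number, FingerState>();

    function record(e: TouchEvent): void {
      for (const t of Array.from(e.changedTouches)) {
        fingers.set(t.identifier, {
          x: t.pageX,
          y: t.pageY,
          force: t.force,      // pressure, 0..1 where the hardware supports it
          radius: t.radiusX,   // approximate contact radius in pixels
        });
      }
    }

    const surface = document.getElementById('surface')!;
    surface.addEventListener('touchstart', record);
    surface.addEventListener('touchmove', record);
    surface.addEventListener('touchend', (e) => {
      for (const t of Array.from(e.changedTouches)) {
        fingers.delete(t.identifier);
      }
    });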

Mid level: onTap(), onDoubleTap(), onLongTouch(), onDrag(),
onPinch()... position but no pressure reported.
Configuration for the time intervals used to recognise Tap, LongTouch etc.
might be held in the system information [1] along with the device capabilities.
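
As a sketch of the mid level, a tap / long-touch recogniser could be layered
on the low-level events; the thresholds below are the sort of values the
system-information API [1] could supply, but here they are hard-coded
assumptions:

    // Mid-level recogniser built on the low-level touch stream.
    // tapMs, longTouchMs and moveTolerancePx are assumed values; ideally
    // they would come from system information / device capabilities [1].
    const config = { tapMs: 200, longTouchMs: 800, moveTolerancePx: 10 };

    function attachTapRecogniser(el: HTMLElement,
                                 onTap: (x: number, y: number) => void,
                                 onLongTouch: (x: number, y: number) => void): void {
      let startX = 0, startY = 0, startTime = 0, moved = false;

      el.addEventListener('touchstart', (e: TouchEvent) => {
        const t = e.changedTouches[0];
        startX = t.pageX;
        startY = t.pageY;
        startTime = Date.now();
        moved = false;
      });

      el.addEventListener('touchmove', (e: TouchEvent) => {
        const t = e.changedTouches[0];
        if (Math.abs(t.pageX - startX) > config.moveTolerancePx ||
            Math.abs(t.pageY - startY) > config.moveTolerancePx) {
          moved = true;   // too much movement to count as a tap or long touch
        }
      });

      el.addEventListener('touchend', () => {
        if (moved) return;
        const held = Date.now() - startTime;
        if (held <= config.tapMs) onTap(startX, startY);
        else if (held >= config.longTouchMs) onLongTouch(startX, startY);
      });
    }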

High level: onRotate(), onScroll(), onResize()... only the recognised gesture is reported.
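
WebKit's proprietary gesture events, described in [2], sit roughly at this
level: the browser does the recognition and reports rotation and scale
directly. A rough sketch (GestureEvent is non-standard, so it is typed
loosely here):

    // High level: the recognised gesture arrives ready-made from the browser.
    const box = document.getElementById('box')!;
    box.addEventListener('gesturechange', (e: any) => {
      // e.rotation is in degrees, e.scale is a ratio relative to gesture start
      box.style.transform = `rotate(${e.rotation}deg) scale(${e.scale})`;
    });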

The complexity, and the number of problems to overcome to produce a clean
API, are much greater at higher levels of abstraction. An example of such an
API can be found at [2].

On a related note, it would be useful for tablets and smartphones to also
handle other sensors, such as accelerometers: onShake(),
onOrientationChange().
I believe Palm has done some work to support this; at a lower level you can
get raw accelerometer data too. [3]
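
As a sketch, an onShake() could be derived from the raw readings; the
devicemotion event is used here as one possible source, and the threshold
and debounce values are arbitrary assumptions:

    // Naive shake detection layered on raw accelerometer data.
    const SHAKE_THRESHOLD = 15;   // m/s^2, chosen arbitrarily (gravity alone is ~9.8)
    let lastShake = 0;

    function onShake(): void {
      console.log('shake detected');
    }

    window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
      const a = e.accelerationIncludingGravity;
      if (!a || a.x == null || a.y == null || a.z == null) return;
      const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
      if (magnitude > SHAKE_THRESHOLD && Date.now() - lastShake > 1000) {
        lastShake = Date.now();
        onShake();
      }
    });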

-dave

[1] http://dev.w3.org/2009/dap/system-info/
[2] http://developer.apple.com/safari/library/documentation/AppleApplications/Reference/SafariWebContent/HandlingEvents/HandlingEvents.html
[3] http://developer.palm.com/index.php?option=com_content&view=article&id=1554
