
Re: Touch and gestures events

From: João Eiras <joaoe@opera.com>
Date: Mon, 19 Oct 2009 19:56:36 +0200
To: public-webapps@w3.org
Message-ID: <op.u116smj62q99of@id-c0981>

> We seem to come from different angles, and our objective may not be the
> same as yours. This is not an official statement, but I could formulate our
> objective like this:
> "How do I enable richer web applications in a touch-aware browser while
> still retaining the possibility to interact with existing (mouse-aware)
> web applications?"

We all agree on that, but we disagree on how to get there. A touch events
API would be something completely new that would not have any kind of
backwards compatibility with existing content, meaning one would need to
code for both the new API and the older ones, and it would carry a touch
device bias. This means duplicated effort for everybody: spec writers,
implementors and web authors.

> The most important thing is the possibility to track multiple touches
> individually, and above I have been trying to communicate the problems in
> just adding that to the existing mouse events.
>
> Second, touch-specific parameters are missing. These include the pressure
> and bounding box of the touch.
>

But do we need a new API and event model for this? Can't this be solved
within the existing mouse events? Can't mouse events carry a streamId
(which would reference a pointing device), a pressure attribute and the
geometry of the pointing device? Currently, the mouse events API only
supports single-pixel pointing devices, but adding finger support would
just require the coordinate properties of the event object to be means
(centroids) of the contact area, with the geometry accessible somewhere
globally. Again, we don't need a completely new API and event model for
this.
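To make the idea concrete, here is a minimal sketch. The attribute names
(streamId, pressure, boundingBox) are illustrative assumptions, not part of
any spec; it only shows how a finger contact could be reduced to ordinary
mouse-event coordinates by taking the mean of its contact area:

```javascript
// Hypothetical extension of a mouse event for a finger contact.
// streamId, pressure and boundingBox are illustrative names only.
function toMouseEventCoords(contact) {
  const { left, top, right, bottom } = contact.boundingBox;
  return {
    streamId: contact.streamId,   // references the pointing device/finger
    pressure: contact.pressure,   // touch-specific parameter
    clientX: (left + right) / 2,  // coordinates become means over the contact area
    clientY: (top + bottom) / 2,
  };
}

// Example: a finger covering a 40x40 px area centered on (100, 200).
const ev = toMouseEventCoords({
  streamId: 1,
  pressure: 0.7,
  boundingBox: { left: 80, top: 180, right: 120, bottom: 220 },
});
console.log(ev.clientX, ev.clientY); // 100 200
```

Existing mouse-aware pages would keep seeing a single coordinate pair,
while touch-aware pages could additionally read the stream id, pressure
and geometry.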

> Third, an input-device-independent way to do basic manipulation
> (pan/scale/rotate) of objects. It is entirely possible to implement just
> raw touch events and do the gesture recognition in JavaScript, but then
> the actual gestures would follow the web site's style instead of the style
> introduced by the operating system, and if your input device doesn't
> support multi-touch, it simply doesn't work on web content, no matter how
> clever a way to manipulate objects your device/OS has.

Pan is scrolling, for which browsers already fire events. The behavior of
the scroll event would need to change, though, so that it is fired before
the scrolling happens and is cancelable.
Scale is the zooming feature, which is also supported in many desktop and
mobile browsers, but lacks events.
Rotation of the entire viewport is a UI feature, like when you tilt the
device.
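To illustrate what recognizing a gesture in script actually involves (a
sketch only, not a proposed API), the scale and rotation of a pinch can be
derived from two tracked touch points:

```javascript
// Sketch: deriving scale and rotation from two tracked touch points,
// as a page would have to do if given only raw touch events.
function recognizeGesture(start, current) {
  // start/current: arrays of two {x, y} positions for the two touches
  const dx0 = start[1].x - start[0].x, dy0 = start[1].y - start[0].y;
  const dx1 = current[1].x - current[0].x, dy1 = current[1].y - current[0].y;
  return {
    scale: Math.hypot(dx1, dy1) / Math.hypot(dx0, dy0),
    rotation: Math.atan2(dy1, dx1) - Math.atan2(dy0, dx0), // radians
  };
}

// Two fingers moving apart, doubling their distance: scale 2, no rotation.
const g = recognizeGesture(
  [{ x: 0, y: 0 }, { x: 100, y: 0 }],
  [{ x: -50, y: 0 }, { x: 150, y: 0 }],
);
console.log(g.scale, g.rotation); // 2 0
```

Every site doing this by hand is exactly how gestures end up following the
web site's style rather than the operating system's.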

These are all UI events, like focus and blur, and none of them is tied to
gestures or mouse events. Therefore they should be kept completely separate
from any kind of mouse event feature. Obviously, the scroll and zoom events
would need a target, which would be the element with focus, but an element
can gain focus by means other than a pointing device.
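A minimal simulation of the cancelable, fired-before-the-fact scroll event
described above ("beforescroll" is a hypothetical name used here for
illustration; today's scroll event fires after the fact and cannot be
canceled):

```javascript
// Minimal simulation of a cancelable "beforescroll" event model.
// "beforescroll" is a hypothetical event name, not an existing one.
class CancelableEvent {
  constructor(type) { this.type = type; this.defaultPrevented = false; }
  preventDefault() { this.defaultPrevented = true; }
}

function dispatchBeforeScroll(listeners, doScroll) {
  const e = new CancelableEvent('beforescroll');
  for (const fn of listeners) fn(e);   // fired before any scrolling happens
  if (!e.defaultPrevented) doScroll(); // default action only if not canceled
  return !e.defaultPrevented;
}

// A page that pans its own canvas cancels the default scroll:
let scrolled = false;
const performed = dispatchBeforeScroll(
  [(e) => e.preventDefault()],
  () => { scrolled = true; },
);
console.log(performed, scrolled); // false false
```

This is the same pattern focus, blur and other UI events already follow,
which is why none of it needs to be coupled to mouse or touch events.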


-- 

João Eiras
Core Developer, Opera Software ASA, http://www.opera.com/
Received on Monday, 19 October 2009 17:57:11 GMT
