Re: Touch and gestures events

On 10/19/09 20:56, "ext João Eiras" <> wrote:

>> We seem to come from different angles, and our objective may not be the
>> same as yours. This is not an official statement, but I could formulate
>> our objective like this:
>> "How do I enable richer web applications in a touch-aware browser while
>> still retaining the possibility to interact with existing (mouse-aware)
>> web applications."
> We all agree on that, but we disagree about how to get there. A touch
> events API would be something completely new that would not have any kind
> of backwards compatibility with existing content, meaning one would need
> to code for both the new API and the older ones, and it would have a
> touch-device bias. This means duplicated effort for everybody: spec
> writers, implementors, and web authors.

I get the feeling that you and I disagree only about the way to get there,
but I thought that Garrett's and my objectives were actually quite
different.

>> The most important thing is the ability to track multiple touches
>> individually, and above I have been trying to communicate the problems
>> with just adding that to the existing mouse events.
>> Second, touch-specific parameters are missing, such as the pressure and
>> bounding box of the touch.
> But do we need a new API and event model for this? Can't this be solved
> within the existing mouse events? Couldn't mouse events gain a streamId
> (which would reference a pointing device), a pressure attribute, and the
> geometry of the pointing device? Currently, the mouse events API only
> supports single-pixel pointing devices, but adding finger support would
> just require all the coordinate properties of the event object to be mean
> values, with the geometry accessible somewhere globally. Again, we don't
> need a completely new API and event model for this.

In my previous e-mail I tried to highlight the problems caused to existing
web applications if we just add a stream ID to mouse events. I'd also like
to hear your opinion on whether I'm completely mistaken in my worries.
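To make the worry concrete, here is a rough sketch (not from the original mail; streamId is the hypothetical mouse-event extension under discussion) of how a legacy drag handler misreads two fingers multiplexed into the single mousemove stream:

```javascript
// Illustrative sketch: why routing several fingers through the single
// mousemove stream confuses legacy handlers. A typical pre-multi-touch
// drag handler keeps exactly one "last position":
const dragState = { lastX: 0, lastY: 0 };
let maxJump = 0; // largest apparent pointer movement between two events

function onMouseMove(ev) {
  // Legacy code assumes every mousemove continues ONE pointer's path.
  const jump =
    Math.abs(ev.clientX - dragState.lastX) +
    Math.abs(ev.clientY - dragState.lastY);
  maxJump = Math.max(maxJump, jump);
  dragState.lastX = ev.clientX;
  dragState.lastY = ev.clientY;
}

// Two fingers reported through the same stream; the streamId field is
// the hypothetical extension, which legacy code simply ignores:
const events = [
  { streamId: 1, clientX: 10,  clientY: 10  },
  { streamId: 2, clientX: 200, clientY: 200 }, // second finger lands
  { streamId: 1, clientX: 12,  clientY: 10  }, // first finger moves 2px
];

dragState.lastX = events[0].clientX;
dragState.lastY = events[0].clientY;
events.slice(1).forEach(onMouseMove);

// The handler saw the pointer "teleport" 380px when the second finger
// arrived, even though neither finger actually moved that far.
console.log(maxJump); // 380
```

A handler that filtered on streamId would behave correctly, but that is exactly the new code existing pages do not have.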

>> Third, an input-device-independent way to do basic manipulation
>> (pan/scale/rotate) of objects. It is quite possible to implement just raw
>> touch events and do the gesture recognition in JavaScript, but then the
>> actual gestures would follow the web site's style instead of the style
>> introduced by the operating system, and if your input device doesn't
>> support multi-touch, it simply doesn't work on web content, no matter how
>> clever a way to manipulate objects your device/OS provides.
> Pan is scrolling, for which browsers already fire events. The behavior of
> the scroll event would need to change, though, so that it fires before
> the scroll happens and is cancelable.
> Scale is the zooming feature, which is also supported in many desktop and
> mobile browsers but lacks events.
> Rotation of the entire viewport is a UI feature, like when you tilt the
> device.

Our manipulate event has default handlers which do zooming and scrolling.
Rotation is ignored at least for now. You can also prevent the default
behavior rather easily.
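A minimal sketch of how such a manipulate event could behave; the event name, fields (panX, panY, scale, rotation), and default actions here are assumptions based on the description above, not a published API:

```javascript
// Hypothetical "manipulate" event carrying pan/scale/rotate deltas,
// with a cancelable default action (pan -> scroll, scale -> zoom,
// rotation ignored), as described in the discussion above.
class ManipulateEvent {
  constructor(panX, panY, scale, rotation) {
    Object.assign(this, { panX, panY, scale, rotation });
    this.defaultPrevented = false;
  }
  preventDefault() { this.defaultPrevented = true; }
}

const page = { scrollX: 0, scrollY: 0, zoom: 1, listeners: [] };

function dispatchManipulate(ev) {
  page.listeners.forEach(fn => fn(ev));
  if (!ev.defaultPrevented) {
    // Default handlers: pan maps to scrolling, scale to zooming.
    page.scrollX += ev.panX;
    page.scrollY += ev.panY;
    page.zoom *= ev.scale;
    // ev.rotation is ignored by default, per the proposal.
  }
}

// Default behavior: the page scrolls and zooms.
dispatchManipulate(new ManipulateEvent(10, 20, 1.5, 0));
console.log(page.scrollX, page.scrollY, page.zoom); // 10 20 1.5

// An application can take over by cancelling the default:
page.listeners.push(ev => ev.preventDefault());
dispatchManipulate(new ManipulateEvent(5, 5, 2, 0));
console.log(page.scrollX, page.scrollY, page.zoom); // still 10 20 1.5
```

The point of the sketch is only that the same event works whether the input was a two-finger pinch, a scroll wheel, or some other device-specific gesture the OS recognizes.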

> These are all UI events, like focus and blur, and none of them are tied
> to gestures or mouse events. Therefore they should be completely separate
> from any kind of mouse event feature. Obviously, the scroll and zoom
> events would need a target, which would be the element with focus, but
> then an element can gain focus by means other than a pointing device.

Sorry for sounding like a broken record and for trying to sell our
manipulation event too much. But it's not tied to mouse events or to any
specific gestures, and I think the intent of pan/scale/rotate is quite
universal and maps well to scrolling and scaling of the page as well, at
least on touch and tablet devices.

 - Kari Hiitola

> --
> João Eiras
> Core Developer, Opera Software ASA,

Received on Tuesday, 20 October 2009 12:44:23 UTC