Re: Touch and gestures events

> Hi,
>
> I suppose that the interest Olli mentioned was ours (Nokia).  
> Unfortunately there was a long delay before we were able to participate  
> in the discussion and release our implementation, but yes, we have  
> previously discussed touch events with Olli. We would be interested in  
> participating in the standardization of touch and gesture events.

Hi.

My personal opinion is that such an API is an extremely bad idea.

First, it is semantically biased towards devices with touch input, and  
therefore not applicable to devices with other pointing peripherals, such  
as multiple mice or joysticks. Differentiating too much between input  
devices has proven very bad for cross-device compatibility and  
accessibility. Look, for instance, at what happens if you have a button  
with an onclick event handler and press it with the keyboard instead, or  
a textarea with a keypress event handler and type into it with an IME.
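To make that concrete, here is a minimal sketch (assuming a page with a  
single button element; activate() is a hypothetical application function)  
of how a device-neutral event keeps working for keyboard users while a  
device-specific one does not:

    var button = document.getElementsByTagName('button')[0];
    function activate() { /* perform the action */ }

    // Device neutral: fires for mouse clicks and also for keyboard
    // activation (Enter/Space while the button has focus).
    button.addEventListener('click', activate, false);

    // Device biased: keyboard users can never trigger this handler.
    button.addEventListener('mousedown', activate, false);

A touch-only event vocabulary would repeat the mousedown mistake for  
every device that is not a touch screen.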

Second, it reinvents the wheel. Most of what such an API covers is  
already available in the mouse events model: touchdown, touchmove,  
touchup, touchover and touchout are just duplicates of the corresponding  
mouse events.
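As a rough illustration of the overlap, every one of those events could  
be mechanically forwarded as its mouse counterpart with a few lines of  
script (the touch event names are the ones above; that they carry the  
usual coordinate properties is an assumption; the synthesis uses the  
long-standing DOM 2 MouseEvent interface):

    var map = { touchdown: 'mousedown', touchmove: 'mousemove',
                touchup: 'mouseup', touchover: 'mouseover',
                touchout: 'mouseout' };
    for (var type in map) {
      document.addEventListener(type, function (e) {
        // Re-dispatch the touch event as the equivalent mouse event.
        var clone = document.createEvent('MouseEvents');
        clone.initMouseEvent(map[e.type], true, true, window, 1,
                             e.screenX, e.screenY, e.clientX, e.clientY,
                             false, false, false, false, 0, null);
        e.target.dispatchEvent(clone);
      }, false);
    }

An API whose events can be rewritten into an existing one this simply  
adds surface area without adding expressive power.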

Third, the gaps in the current mouse events API can easily be filled by  
extending it while remaining backwards compatible. The MouseEvent object  
can be extended with the extra properties that would be found on touch  
events, like streamId, pressure and so on. As a side note, the proposed  
API lacks an event for variations in pressure while the finger does not  
move.
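A sketch of what that extension could look like from a page's point of  
view (the property names streamId and pressure are only illustrative, and  
trackPointer() is a hypothetical application function):

    document.addEventListener('mousemove', function (e) {
      if (typeof e.streamId !== 'undefined') {
        // Extended implementation: one logical stream per finger or
        // mouse, with a pressure reading (assumed range 0.0 to 1.0).
        trackPointer(e.streamId, e.clientX, e.clientY, e.pressure);
      } else {
        // Legacy implementation: a single pointer, full pressure.
        trackPointer(0, e.clientX, e.clientY, 1.0);
      }
    }, false);

Old pages keep working unmodified, and new pages can feature-test for  
the extra fields.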

Fourth, someone hinted at a possible patent violation. Regardless of  
whether the patent applies, it might be necessary to work around it.

Fifth, gestures themselves are not touch or mouse events. Gestures are  
complex input events, comparable to keyboard shortcuts: on a keyboard,  
you can press any set of keys simultaneously or sequentially to trigger  
a shortcut. With gesture events, one would move the pointing device, be  
it a mouse, a finger, or anything else, along a specific path, like a  
line from left to right, a circle, or the letter M. Trying to specify  
something with an infinite number of combinations is therefore an  
extreme undertaking. Eventually, gestures will most likely be  
implemented in libraries anyway. The API first needs to solve the low  
level matters, namely individual events for multiple pointing devices.
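As a toy example of that layering, a left-to-right swipe can be  
recognized with nothing more than the existing low level events  
(onSwipeRight() is a hypothetical application callback, and the  
thresholds are arbitrary):

    var start = null;
    document.addEventListener('mousedown', function (e) {
      start = { x: e.clientX, y: e.clientY };
    }, false);
    document.addEventListener('mouseup', function (e) {
      if (!start) return;
      var dx = e.clientX - start.x, dy = e.clientY - start.y;
      // Mostly horizontal and moving right: treat it as a swipe.
      if (dx > 100 && Math.abs(dy) < dx / 4)
        onSwipeRight();
      start = null;
    }, false);

Everything a gesture library needs is the per-pointer down/move/up  
stream; the recognition logic belongs above the platform, not in it.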

Sixth, tapping on two spots without intermediate mousemoves is not an  
issue. This already happens on desktop computers if you tab away from a  
webpage and back, having moved the mouse in between. Tapping in two  
spots can also simply be treated as a mousemove with a bigger delta.  
This limitation of touch input devices needs to be solved at the  
implementation level, in a way that maps onto the mouse events API. The  
problem touch-enabled devices currently face is not the lack of an API  
that detects a finger moving around, but webpages that are biased  
towards mouse users and expect mousemove/over/out events to happen,  
which means they lack much of the accessibility they should have for  
users with other input devices, like a keyboard. If they relied on  
mousedown/up alone, or on click, they would be much more foolproof.
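The contrast is easy to show (menu, openMenu() and toggleMenu() are  
hypothetical page-specific names):

    // Fragile: assumes a continuously tracked pointer, so it is
    // unreachable from a keyboard and awkward on a touch screen.
    menu.addEventListener('mouseover', openMenu, false);

    // Robust: fires however the activation happened, whether by
    // mouse, keyboard or tap.
    menu.addEventListener('click', toggleMenu, false);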

To conclude: we should focus on extending the mouse events API to  
provide the information that is currently missing. This puts much less  
burden on spec writers, implementors, and web developers.

Cheers.

PS: this is my personal opinion, not an official Opera position.

-- 

João Eiras
Core Developer, Opera Software ASA, http://www.opera.com/

Received on Thursday, 15 October 2009 18:04:37 UTC