Re: Touch and gestures events

On 10/15/09 21:04, "ext João Eiras" <joaoe@opera.com> wrote:

> 
> 
>> Hi,
>> 
>> I suppose that the interest Olli mentioned was ours (Nokia).
>> Unfortunately there was a long delay before we were able to participate
>> in the discussion and release our implementation, but yes, we have
>> previously discussed touch events with Olli. We would be interested in
>> participating in the standardization of touch and gesture events.
> 
> Hi.
> 
> My personal opinion is that such an API is an extremely bad idea.
> 
> First, it's semantically biased towards devices with a touch input device,
> and therefore not applicable to other devices with multiple mice or
> joystick peripherals. Differentiating too much between input devices has
> proven to be very bad for cross-device compatibility and accessibility.
> Look, for instance, at what happens if you have a button with an onclick
> event handler and use the keyboard instead to press it, or if you have a
> textarea with a keypress event handler and use an IME.

I think the mistake made in the past was to bias the mouse events towards,
well, the mouse. I'm all for the idea that everything that has a semantic
meaning (click, context menu, etc.) should have a separate event type,
which can be produced by any means, be it joystick, mouse or touch. I like
to think that Manipulate events are on the same continuum, as they capture
what the user wants to do regardless of the medium used for expressing
that will.

> Second, it's a reinvention of the wheel. Most of the stuff covered by such
> an API is already available in the mouse events model. touchdown,
> touchmove, touchup, touchover and touchout are just duplications of the
> corresponding mouse events.

This comment actually made me think that maybe we should decouple touch
from the event types and make them pointerxxx instead, for cases where you
don't specifically need to know whether the event came from a touch screen
or from a tablet.
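
Something along the following lines is what I have in mind. The
pointerdown/pointermove names and the pointerType/pressure fields are only
placeholders for whatever we would end up specifying:

  // Device-neutral handling: the same code serves mouse, touch and pen.
  const canvas = document.querySelector('canvas')!;

  canvas.addEventListener('pointerdown', (e: PointerEvent) => {
    startStroke(e.clientX, e.clientY);
    if (e.pointerType === 'pen') {   // consult the source only when the
      usePressure(e.pressure);       // page genuinely needs to know it
    }
  });

  canvas.addEventListener('pointermove', (e: PointerEvent) => {
    continueStroke(e.clientX, e.clientY);
  });

  function startStroke(x: number, y: number) { /* placeholder */ }
  function continueStroke(x: number, y: number) { /* placeholder */ }
  function usePressure(p: number) { /* placeholder */ }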

> Third, to solve the gaps the current mouse events API has, we can easily
> extend it while remaining backwards compatible. The MouseEvent object can
> be overloaded with the extra properties that would be found on touch
> events, like streamId, pressure and so on. As a side note, the API lacks
> an event for variations in pressure while the finger does not move.

Pressure and bounding box are things that could easily be added, but I
think that adding a stream id would break backwards compatibility too
badly. The events would jump wildly between the touch points if someone
put multiple fingers on the screen. Accidental touching of the screen, or
some ghost event, would stop whatever the user is doing, e.g. a drag,
since the second touch also causes a mouseup event.
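
Consider the very typical drag handler below, which has no notion of
streams (the element and moveElementTo are placeholders). If every touch
point fed this one stream, a stray second finger would both teleport the
element and end the drag:

  // Existing pages look roughly like this and assume a single pointer.
  const element = document.getElementById('draggable') as HTMLElement;
  let dragging = false;

  element.addEventListener('mousedown', () => { dragging = true; });

  element.addEventListener('mousemove', (e: MouseEvent) => {
    if (dragging) {
      moveElementTo(e.clientX, e.clientY);  // jumps between fingers if
    }                                       // several points report here
  });

  document.addEventListener('mouseup', () => { dragging = false; });
  // ... and the second finger lifting would fire the mouseup that ends
  // the drag prematurely.

  function moveElementTo(x: number, y: number) { /* placeholder */ }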

> Fourth, someone hinted at the possible violation of a patent. Regardless
> of it being applicable or not, it might be necessary to work around it.
> 
> Fifth, gestures themselves are not touch or mouse events. Gestures are
> complex input events, comparable to what you get with keyboard shortcuts
> on a keyboard. On a keyboard, you can press any set of keys consecutively
> or sequentially to trigger a keyboard shortcut. With gesture events one
> would move the pointing device, be it a mouse, finger, or whatever,
> following a specific path, like a line from left to right, a circle, or
> the letter M. Therefore trying to specify something that has an infinite
> number of combinations is an extreme undertaking. Eventually, gestures
> will most likely be implemented using libraries anyway. The API first
> needs to solve the low-level matters, which are singular events for
> multiple pointing devices.

Your views actually seem to me in line with my point about naming the
event covering pan/zoom/rotate Manipulate instead of Gesture. If there is
later a will to introduce more complex gestures, an appropriate name would
still be available for them.

> Sixth, the tap on two spots without intermediate mousemoves is not an
> issue. This already happens on desktop computers if you tab back and forth
> to a webpage and in between you change the mouse position. Also, tapping
> in two spots can just be considered a mousemove with a bigger delta. This
> limitation of touch input devices is something that needs to be solved at
> the implementation level in a way that can be mapped to the mouse events
> API. The problems touch-enabled devices currently face are not about a
> lack of an API that detects the finger moving around, but about webpages
> that are biased towards mouse users and expect mousemove/over/out events
> to happen, which means they lack much of the accessibility they should
> have for users with other input devices, like a keyboard. If they relied
> on mousedown/up alone, or click, they would be much more fool-proof.

Tapping/clicking in particular is not an issue, as only one event is
produced, and I don't think there is any point in changing anything there.
Tapping the touch screen would simply produce a click.

I think it would be wise for browsers to offer a compatibility mode where,
e.g., the movement of the first finger also causes mouse events. Our
trivial implementation was that mouse events stop when a second finger
touches the screen, but we noticed that this is not ideal, since you very
often touch the screen accidentally with another finger, and there are a
lot of ghost events with current multi-touch computers.
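
To spell out the direction I would take instead: pin the emulated mouse
events to the first finger and simply ignore the rest. A sketch, with the
low-level events named touchstart/touchmove/touchend here (the exact names
are not the point) and the actual dispatching left as comments:

  // Mouse emulation follows only the first finger; an accidental second
  // touch neither ends a drag nor makes the pointer jump.
  let primaryId: number | null = null;

  document.addEventListener('touchstart', (e: TouchEvent) => {
    if (primaryId === null) {
      primaryId = e.changedTouches[0].identifier;
      // ... synthesize mousedown at this touch's coordinates
    }
    // additional fingers are ignored for mouse emulation
  });

  document.addEventListener('touchmove', (e: TouchEvent) => {
    for (const t of Array.from(e.changedTouches)) {
      if (t.identifier === primaryId) {
        // ... synthesize mousemove for the primary finger only
      }
    }
  });

  document.addEventListener('touchend', (e: TouchEvent) => {
    for (const t of Array.from(e.changedTouches)) {
      if (t.identifier === primaryId) {
        primaryId = null;
        // ... synthesize mouseup
      }
    }
  });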

> To conclude: we should be focusing on extending the mouse events API to
> provide the information that is lacking currently. This puts much less
> burden on spec writers, implementors, and web developers.

Thanks for the input. Naturally, I'm not so easily convinced :), and would
like to hear comments from others as well.

 Best regards,

 - Kari Hiitola




> Cheers.
> 
> PS: this is my personal opinion, not an official Opera position.
> 
> --
> 
> João Eiras
> Core Developer, Opera Software ASA, http://www.opera.com/
> 
> 

Received on Friday, 16 October 2009 17:47:02 UTC