Re: Touch and gestures events

On Fri, Oct 16, 2009 at 10:46 AM,  <kari.hiitola@nokia.com> wrote:
>
> On 10/15/09 21:04, "ext João Eiras" <joaoe@opera.com> wrote:
>
>>
>>
>>> Hi,
>>>
>>> I suppose that the interest Olli mentioned was ours (Nokia).
>>> Unfortunately there was a long delay before we were able to participate
>>> the discussion and release our implementation, but yes, we have
>>> previously discussed touch events with Olli. We would be interested in
>>> participating standardization of touch and gesture events.
>>
>> Hi.
>>
>> My personal opinion is that such API is an extremely bad idea.
>>

I agree.

From the standpoint of web compatibility, with the objective "how do I
get my document to work compatibly, in as many browsers as possible",
I have noticed that a separate API causes considerably more work
than seems necessary.

>> First, it's semantically biased towards devices with a touch input device,
>> and therefore not applicable to other devices with many mice or joystick
>> peripherals. Differentiating too much between input devices has been shown
>> to be very bad for cross-device compatibility and accessibility.
>> Look, for instance, at what happens if you have a button with an onclick event
>> handler and use the keyboard instead to press it, or if you have a
>> textarea with a keypress event handler and use an IME.
>
> I think that the mistake has been made in the past to make the mouse events
> biased towards, well, the mouse. I'm all for the idea that everything that
> has a semantic meaning, click, context menu, etc. should have separate event
> types, which can be produced by any means, be it joystick, mouse or touch. I
> like to think that Manipulate events are in the same continuum, as they have
> the mindset of what the user wants to do regardless of the medium used for
> expressing that will.
>

Which input device is "contextmenu" related to?
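
To illustrate: the same handler runs no matter how the menu was
requested, be it a right mouse click, the keyboard menu key / Shift+F10,
or a touch long-press on devices that support it. A minimal sketch:

document.oncontextmenu = function(ev) {
  // Nothing in here needs to know which input device asked for the menu.
  document.title = "contextmenu requested";
  return false; // suppress the native menu, just for the demo
};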

>> Second, it's a reinvention of the wheel. Most of the stuff covered in such
>> API is already available in the mouse events model. touchdown, touchmove,
>> touchup, touchover, touchout are just duplications of the corresponding
>> mouse events.
>
> This comment actually made me think that we should maybe decouple the touch
> from the event types, and make them pointerxxx instead in cases where you
> don't specifically need to know whether it was a touch event, or whether it
> was coming from a tablet.
>

That would work in exactly 0 browsers, and, since it doesn't work,
nobody would use it.

>> Thirdly, to solve the gaps the current mouse events API has, we can easily
>> extend it while remaining backwards compatible. The MouseEvent object can
>> be overloaded with the extra properties that would be found on touch
>> events, like streamId, pressure and so on. As a side note, the API lacks
>> an event for variations in pressure while the finger does not move.
>
> Pressure and bounding box are things that could be easily added, but I think
> that adding a stream id would break backwards compatibility too badly. The
> events would jump wildly between the touch points if someone chooses to put
> multiple fingers on the screen. Accidental touching of the screen or some
> ghost event would stop anything the user is doing, e.g. a drag, as the second
> touch also causes a mouseup event.
>

Please provide a reduced testcase, mentioning browser, version, and platform.

Just a few days ago we saw a claim by a major contributor to the
W3C being shown false, and it appears that this is the case again.

In working with simulated events over the last few years, I have never
witnessed the behavior you are describing.

To replicate the scenario of two mousedowns, I dispatched two
simulated events on a target. I tested in the following environments:

Blackberry9500 Simulator, Palm Pre Emulator 1.2, on VirtualBox 3.3,
Chrome 2, Safari 4, Firefox 3.5, Spidermonkey. The results were the
same in every browser.

Results:
-
mousedown
mousedown

Example:
<!doctype html>
<html lang="en">
<head>
<title>test mouseup</title>
</head>
<body>
<h1>Mousedown Twice Does Not Trigger Mouseup</h1>
<div id="testNode1">SSS</div>
<div id="testNode2">SSS</div>
<pre id="m">-</pre>
<script type="text/javascript">
onload = function(ev) {
    // Dispatch two consecutive mousedowns, with no mouseup in between.
    simulateEvent(document.getElementById("testNode1"), "mousedown");
    simulateEvent(document.getElementById("testNode2"), "mousedown");
};


document.onmouseup = dUp;
document.onmousedown = dDown;
var m = document.getElementById("m");
function dUp(ev) {
  m.firstChild.data += "\n" + ev.type;
}

function dDown(ev) {
  m.firstChild.data += "\n" + ev.type;
}

function simulateEvent(target, type) {
  // Create and dispatch a synthetic mouse event with zeroed coordinates
  // and no modifier keys pressed.
  var ev = document.createEvent("MouseEvents");
  ev.initMouseEvent(type, true, true, document.defaultView,
    0, 0, 0, 0, 0, false, false, false, false, 0, null);
  target.dispatchEvent(ev);
}
</script>
</body>
</html>

>> Fourth, someone hinted at the possible violation of a patent. Regardless of
>> it being applicable or not, it might be necessary to work around it.
>>

I didn't see any hinting.

I actually learned about the patent while researching to try and
figure out how to simulate touch events, so that I could unit test my
code.

Apple made the documentation a lower priority than patenting, so a
Google search for - createTouchList javascript - brings up the
link Olli posted (thank you for posting that).

http://www.google.com/search?q=createTouchList+JavaScript

Thanks for the documentation, Apple!

:-D

>> Fifth, gestures themselves are not touch or mouse events. Gestures are
>> complex input events, comparable to what you get with keyboard shortcuts
>> on a keyboard. On a keyboard, you can press any set of keys consecutively
>> or sequentially to trigger a keyboard shortcut. With gesture events one
>> would move the pointing device, be it a mouse, finger, or whatever,
>> following a specific path, like a line from left to right, a circle, or
>> the letter M. Therefore trying to specify something that has an infinite
>> number of combinations is an extreme undertaking. Eventually, gestures
>> will most likely be implemented using libraries anyway. The API first
>> needs to solve the low-level matters, which are singular events for
>> multiple pointing devices.
>
> Your views actually seem to me in line with my point about naming the
> event including pan/zoom/rotate as Manipulate instead of Gesture. If there
> is later will to introduce more complex gestures, there would still be an
> appropriate name available for that.
>
>> Sixth, the tap on two spots without intermediate mousemoves is not an
>> issue. This already happens on desktop computers if you tab back and forth
>> to a webpage and in between you change the mouse position. Also, tapping in
>> two spots can just be considered a mousemove with a bigger delta. This
>> limitation of touch input devices is something that needs to be solved at
>> the implementation level in a way that can be mapped to the mouse events
>> API. The problem touch-enabled devices currently face is not a lack
>> of an API that detects the finger moving around, but webpages that
>> are biased towards mouse users and expect mousemove/over/out events to
>> happen, which means they lack much of the accessibility they should have
>> for users with other input devices, like a keyboard. If they relied on
>> mousedown/up alone, or click, they would be much more fool-proof.
>
> Especially tapping/clicking is not an issue, as only one event is
> produced. And there I don't think there is any point in changing anything.
> Tapping the touch screen would produce a click.
>
> I think that it would be wise for browsers to work in a compatibility mode,
> where e.g. the movement of the first finger also causes mouse events. Our
> trivial implementation was that mouse events stop when the second finger
> touches the screen, but we noticed that it's not ideal, since you very often
> accidentally touch the screen with another finger, and there are a lot of
> ghost events with current multi-touch computers.
>

I do not understand this proposal.

What is lacking from mouse events?

I have used Apple's Touch API for:
 * retrofitting existing drag 'n drop code (APE); a sketch of this follows below
 * modifying existing event simulation framework code (YUI2)
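
Here is a rough sketch of the drag 'n drop retrofit (not the actual APE
code; "draggable" is a placeholder element id, and the element is assumed
to be absolutely positioned). The same three handlers drive both the
mouse and the touch event models:

var dragEl = document.getElementById("draggable");
var dragging = false;

function pointOf(ev) {
  // Touch events carry coordinates on Touch objects, not on the event itself.
  var t = ev.touches && (ev.touches[0] || ev.changedTouches[0]);
  return t ? { x: t.clientX, y: t.clientY } : { x: ev.clientX, y: ev.clientY };
}

function onDown(ev) { dragging = true; ev.preventDefault(); }

function onMove(ev) {
  if (!dragging) { return; }
  var p = pointOf(ev);
  dragEl.style.left = p.x + "px";
  dragEl.style.top = p.y + "px";
  ev.preventDefault(); // stop the viewport from panning while dragging
}

function onUp(ev) { dragging = false; }

dragEl.addEventListener("mousedown", onDown, false);
dragEl.addEventListener("touchstart", onDown, false);
document.addEventListener("mousemove", onMove, false);
document.addEventListener("touchmove", onMove, false);
document.addEventListener("mouseup", onUp, false);
document.addEventListener("touchend", onUp, false);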

The APIs for simulation and for handling event callbacks are both
complicated. Simulation takes 18 parameters, plus all of
the key modifiers, like ctrlKey and metaKey (how the user is supposed to
do that is beyond me). It includes pageX (nonstandard). It includes
screenX (I can't explain the rationale for that one). It could easily
work in a compatible way so that the event properties of the actual
event would "work" for most touch events.

I am aware that Apple has rotation and scaling. Must these necessarily
be coupled to the input device ("touch")? Could the rotation/scale be
independent of that, much like a contextmenu event is independent of
the input device or of the method by which that device triggers it
(recall old Mac IE, where click-and-hold caused a context menu to appear)?
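
For what it's worth, the handler side already reads that way: a listener
for Apple's (WebKit-only) gesturechange event only looks at the scale and
rotation values, and nothing in it depends on the input device. A minimal
sketch, where "box" is a placeholder element id:

var box = document.getElementById("box");
box.addEventListener("gesturechange", function(ev) {
  // ev.scale and ev.rotation are cumulative relative to gesturestart.
  box.style.webkitTransform =
      "scale(" + ev.scale + ") rotate(" + ev.rotation + "deg)";
  ev.preventDefault();
}, false);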

Is it possible to, as RIM does, map mouse events to touch events?

If so, I would like to propose that, where finger movements are mapped
to scrolling the document, that scrolling be treated as the default
action of a "mousemove" event. For example:

function mousemoveHandler(ev) {
  // Cancel the default action; on a touch-screen device this would
  // prevent the viewport from scrolling while the finger moves.
  ev.preventDefault();
}
document.onmousemove = mousemoveHandler;

Garrett

Received on Friday, 16 October 2009 22:10:12 UTC