W3C home > Mailing lists > Public > public-webevents@w3.org > October to December 2010

Use cases and requirements for touch events

From: <Cathy.Chan@nokia.com>
Date: Thu, 18 Nov 2010 20:00:08 +0100
To: <public-webevents@w3.org>
Message-ID: <5B175DBBD7E76D4EBEF90DEFFD5140D0321A442FFB@NOK-EUMSG-02.mgdnok.nokia.com>
Hi all,

Here comes a first contribution from Nokia on the use cases and requirements for touch events. We'd be interested in any comments and discussions. Thanks.

Regards, Cathy.
---------------
Cathy Chan
Nokia

=================================

Web extensions for touch devices:

Abstract:

Key use cases and requirements for the HTML/JavaScript/DOM API that enables the development of applications with direct user interaction on a device with a touch screen.

Introduction:

Most websites and applications today use the metaphor of a mouse for user interaction. On a modern mobile device, the user interacts with the application via one or more fingers. This interaction paradigm requires application developers to be able to react to multiple simultaneous touch events.

For touch input, two levels of information may be required:

- Real-time tracking of the position of the finger(s), the pressure and/or size of the fingertip touching the screen, and the velocity and direction of movement.

- The action the user wants to perform on an element on the screen, for example rotate, zoom or select.

The select action is conventionally performed by intercepting the
onclick() event inherited from mouse interaction, but there is no standard way for the end user to ask the web application to perform an action on a single web element.
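As a sketch of the first, low-level kind of information, the helper below derives per-finger tracking data from touch-like points. The field names (clientX, clientY, force, identifier) are an assumption roughly following WebKit's touch implementation of the time, not part of this proposal; force may be absent on devices that do not report pressure.

```javascript
// Derive the low-level tracking data described above from a touch-like
// point. Field names (clientX/clientY/force) are assumed, not proposed;
// velocity is computed from the delta to the previous sample.
function describeTouch(touch, previous, dtMs) {
  var info = {
    x: touch.clientX,
    y: touch.clientY,
    force: touch.force !== undefined ? touch.force : null,
    vx: 0,
    vy: 0
  };
  if (previous && dtMs > 0) {
    info.vx = (info.x - previous.x) / dtMs;
    info.vy = (info.y - previous.y) / dtMs;
  }
  return info;
}

// DOM wiring (browser only): track every active finger on each move.
if (typeof document !== 'undefined') {
  var last = {};
  document.addEventListener('touchmove', function (e) {
    for (var i = 0; i < e.touches.length; i++) {
      var t = e.touches[i];
      last[t.identifier] = describeTouch(t, last[t.identifier], 16);
    }
  });
}
```

Direction of movement follows from the sign and ratio of vx and vy, so a separate field is not strictly needed.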

Use Cases:

High-level API for element manipulation:

On mobile devices the web browser provides a way for the end user to manipulate the web page, e.g. panning and zooming. However, rich clients often try to lay out the view exactly full screen and control the view
directly:

Use Case #2.1

An image application wants to present several images together in a stack and let the end user move them around, rotate them and zoom them.
Additionally, the resolution of the images is controlled by the rich client: initially low-resolution images are provided, and as the end user zooms in, a higher-resolution image is loaded. Example:
http://scripty2.com/demos/touch
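The zoom and rotation in this use case reduce to comparing the positions of the two fingers over time. A minimal sketch of that math (function and parameter names are illustrative, not a proposed API):

```javascript
// Distance and angle of the line between two {x, y} points.
function dist(a, b) {
  var dx = b.x - a.x, dy = b.y - a.y;
  return Math.sqrt(dx * dx + dy * dy);
}
function angle(a, b) {
  return Math.atan2(b.y - a.y, b.x - a.x);
}

// Scale is the ratio of finger distances since gesture start; rotation
// is the change in the angle of the line between the fingers (radians).
function gestureTransform(startA, startB, curA, curB) {
  return {
    scale: dist(curA, curB) / dist(startA, startB),
    rotation: angle(curA, curB) - angle(startA, startB)
  };
}
```

With this, the application can also detect when the effective zoom crosses a threshold and swap in a higher-resolution image, as described above.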

Use Case #2.2

A children's puzzle game provides a stack of puzzle pieces and lets the child freely move and rotate the pieces to find where they fit.
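With a stack of freely movable pieces, the application must decide which piece a touch lands on; by convention the topmost piece wins. A minimal hit-test sketch (the rectangle representation of a piece is an assumption for illustration):

```javascript
// Pieces are assumed to be {x, y, width, height} rectangles, ordered
// bottom-to-top in the array. Return the topmost piece under the touch
// point, or null if the touch hits none.
function topPieceAt(pieces, x, y) {
  for (var i = pieces.length - 1; i >= 0; i--) {
    var p = pieces[i];
    if (x >= p.x && x < p.x + p.width && y >= p.y && y < p.y + p.height) {
      return p;
    }
  }
  return null;
}
```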

Use Case #2.3

A mapping application provides a map pane that is constructed by tiling data from the map server. The user pans and zooms the web element containing the tiled map while the rest of the screen stays intact.
Zooming the map first zooms the image and initiates a fetch from the server for higher-resolution map tiles; once the tiles are received, the image is replaced. Panning the map draws adjacent tiles that have already been cached and initiates a fetch from the server for additional tiles to cache.
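The tiling in this use case comes down to mapping the visible map region to a range of tile indices, drawing the cached ones immediately and fetching the rest. A sketch of that mapping (the tile size and pixel-coordinate convention are assumptions):

```javascript
// Compute the tile indices covering a viewport. The viewport is given
// in map-pixel coordinates at the current zoom level; tiles are assumed
// square (e.g. 256 px). Returns an array of [col, row] pairs.
function visibleTiles(viewport, tileSize) {
  var firstCol = Math.floor(viewport.x / tileSize);
  var lastCol = Math.floor((viewport.x + viewport.width - 1) / tileSize);
  var firstRow = Math.floor(viewport.y / tileSize);
  var lastRow = Math.floor((viewport.y + viewport.height - 1) / tileSize);
  var tiles = [];
  for (var row = firstRow; row <= lastRow; row++) {
    for (var col = firstCol; col <= lastCol; col++) {
      tiles.push([col, row]);
    }
  }
  return tiles;
}
```

On every pan the same function is re-run with the shifted viewport; the difference from the previous result is the set of tiles to fetch.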

Known nonfunctional requirements:

- The physical gesture a user performs on a screen element to trigger an action (zoom, pan, rotate,
select) can be implemented differently by different vendors. To mitigate this, the high-level API needs to be agnostic to the physical action and tied instead to the action the end user wants to perform

- The tactile feedback resulting from the user touching the screen SHOULD be fired within 30 ms, MUST always be fired with the same delay, and MUST be fired with the same delay as the system response
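One way to read the first requirement above is that low-level touch data is translated into a semantic action before application code sees it, so the application binds to the intended action rather than to a vendor-specific gesture. A hedged sketch of such a classifier (names and thresholds are purely illustrative, not proposed values):

```javascript
// Classify accumulated gesture data into the semantic action the user
// most likely intends. The input shape {scale, rotation, dx, dy} and
// the thresholds below are illustrative assumptions only.
function classifyGesture(delta) {
  if (Math.abs(delta.scale - 1) > 0.1) return 'zoom';
  if (Math.abs(delta.rotation) > 0.2) return 'rotate';
  if (Math.abs(delta.dx) > 10 || Math.abs(delta.dy) > 10) return 'pan';
  return 'none';
}
```

A vendor could map any physical gesture onto these semantic actions without applications having to change.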
Received on Thursday, 18 November 2010 21:47:40 GMT
