RE: How do people interact with applications?

May I add that the "read all on page" and "read from current location"
functionality on mobile, offered by both iOS and Android, is a simple and
powerful way to hear everything a mobile page displays.
Hope this is of value. Alan
On Jun 18, 2015 8:06 AM, "Léonie Watson" <lwatson@paciellogroup.com> wrote:

> > From: chaals@yandex-team.ru [mailto:chaals@yandex-team.ru]
> > Sent: 18 June 2015 11:36
> > Hi,
> >
> > In looking at SVG accessibility, we need to understand how users actually
> > interact with applications - when using a tablet, or a watch with a
> > screenreader, or a headstick, or a keyboard and mouse on a wide screen
> > monitor, what do people actually do.
> >
> > This isn't a new question, so I am really hoping people can point me to
> > literature they consider good, either because it provides a readable and
> > clear explanation of well-covered ground, such as keyboard users working
> > with text-based documents, or because it describes research in an area
> > that is not widely understood, such as using sonification and motion on
> > a tablet device…
>
> I don't know of a document that directly describes the following (although
> I assume such documents must be out there). Hope it's useful in any case.
>
> With respect to screen readers, two things are important: the gesture set
> changes and different modes of exploration are necessary/possible.
>
> When a screen reader is enabled on a touch device, the gesture set changes.
> For example, a sighted person will single-tap an object to activate it,
> whilst a screen reader user will single-tap to identify it (hear its
> name/label announced) and double-tap to activate it.
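>
> To make that concrete, here is a rough Swift sketch using UIKit's
> accessibility API (my own illustration, not from any shipping app). The
> single-tap announcement comes from the accessibility label, and VoiceOver
> maps the user's double tap onto the same action a sighted user triggers
> with a single tap:
>
>     import UIKit
>
>     class SaveViewController: UIViewController {
>         override func viewDidLoad() {
>             super.viewDidLoad()
>             let saveButton = UIButton(type: .system)
>             saveButton.setTitle("Save", for: .normal)
>             // Announced when a VoiceOver user single-taps the button.
>             saveButton.accessibilityLabel = "Save document"
>             // A VoiceOver user's double tap fires the same target/action
>             // that a sighted user's single tap fires.
>             saveButton.addTarget(self, action: #selector(save),
>                                  for: .touchUpInside)
>             view.addSubview(saveButton)
>         }
>
>         @objc func save() { /* save the document */ }
>     }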
>
> Another example is the flick left/right gesture. By default, this moves
> focus to the previous/next screen/view, but when a screen reader is enabled
> it moves focus to the previous/next object on the current screen.
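>
> On iOS, an app can control the order this flick gesture follows. A
> one-line sketch, assuming the same UIKit context as above (the element
> names are illustrative):
>
>     // VoiceOver's left/right flick moves through elements in this array
>     // order, overriding the default roughly top-left-to-bottom-right order.
>     containerView.accessibilityElements = [titleLabel, priceLabel, buyButton]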
>
> Exploration modes change when a screen reader is enabled. The left/right
> flick gestures are one way of exploring objects on the screen. This
> approach is good for ensuring you discover all objects (or at least those
> that are available to screen readers). It's also possible to move your
> finger around the screen in a more arbitrary way, where objects are
> announced as they're alighted upon. There are slight differences between
> platforms in terms of which modes work best.
>
> When you get to the content level, there are other ways of interacting
> with an application. The method of choosing the type of thing you want to
> explore by (character, word, heading, link, etc.) varies depending on the
> platform, but it's possible to make these kinds of choices whenever a
> screen reader is enabled.
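>
> For navigation by heading to find anything, elements have to be marked as
> headings. In UIKit that's an accessibility trait (again a sketch; the
> label name is illustrative):
>
>     // Lets screen reader users jump to this label when navigating by
>     // heading, e.g. with the iOS rotor set to "Headings".
>     sectionTitleLabel.accessibilityTraits = .header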
>
> For example, iOS has the rotor. You spin two fingers in a circular gesture
> to rotate between the possible forms of navigation. When you arrive at the
> one you want, perhaps navigation by word, you use an up/down flick gesture
> to move to the previous/next word in the currently focused area.
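>
> Apps can also add their own categories to the rotor. A rough sketch using
> UIKit's UIAccessibilityCustomRotor API (the "Figures" category and
> figureViews are illustrative):
>
>     import UIKit
>
>     // Builds a "Figures" rotor category. With it selected, an up/down
>     // flick asks this block for the previous/next figure view.
>     func makeFiguresRotor(figureViews: [UIView]) -> UIAccessibilityCustomRotor {
>         return UIAccessibilityCustomRotor(name: "Figures") { predicate in
>             let current = predicate.currentItem.targetElement as? UIView
>             let index = figureViews.firstIndex { $0 === current } ?? -1
>             let next = predicate.searchDirection == .next ? index + 1 : index - 1
>             guard figureViews.indices.contains(next) else { return nil }
>             return UIAccessibilityCustomRotorItemResult(
>                 targetElement: figureViews[next], targetRange: nil)
>         }
>     }
>
>     // e.g. view.accessibilityCustomRotors = [makeFiguresRotor(figureViews: figures)]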
>
>
> Léonie.
>
> --
> Léonie Watson - Senior accessibility engineer
> @LeonieWatson @PacielloGroup PacielloGroup.com
>

Received on Thursday, 18 June 2015 12:15:27 UTC