Re: Collecting requirements for User Contexts

Sorry, I'm a little late responding to this thread (pressure of other 
work) but will work through the mails and points in order now.

In order to comment, I've pasted the draft in as reply text, with 
comments edited in below.

> == Mechanism ==
> A user agent creates and maintains a profile of the user's needs and
> preferences. This profile can be interrogated by Web content using a
> specified API, subject to constraints intended to preserve the user's
> privacy (see requirements below).
>
> The needs and preferences are represented in the API as key/value pairs.
>
> The user agent populates the profile by obtaining information from
> one or more sources. No restriction is placed on the sources that may
> be used, but depending on the implementation these may include:
> * The configuration of the user agent.
> * The configuration of the operating system.
> * Assistive technologies (via available API calls).

I'm not sure about this, but that may be because I'm not interpreting 
your words in the way you intend.  My understanding is that it's not a 
good idea to identify the assistive technologies people use in 
preferences.  The reasons for this as I understand them (I'm happy to 
have the errors in my understanding pointed out) are:

1. On desktops this is well sorted out with the accessibility APIs; 
it's only mobiles and the current proliferation of new hybrid devices 
where those APIs don't exist, or don't exist in the same public way.  
What kinds of devices does our scope cover?  Is it desktops?

2. There are dangers of breaching privacy here - whatever privacy 
solution we adopt, there is still scope for that information about a 
user to be leaked.

3. In the proposal that Rich and I put in, this wasn't an issue.  The 
point of the substitution properties - "this for that": textForVisual, 
auditoryForText, etc. - is that assistive technologies can sit beneath 
that level; it's between the OS and the user how the eventual modality 
is delivered.  What does it mean to talk of a screen reader on a mobile 
phone?  Does it mean text is rendered as audio?  Surely it depends on 
what is available on the device and in the API - will external agencies 
be providing screen-reader technology for mobile devices?  (I have the 
feeling they won't be.)

By using an abstract representation of the preference in terms of 
modalities we also hide the reason the adaptation is required, and 
extend the use cases to much broader scenarios.  A user might have a 
preference for captions (which I'm considering a modality or media 
aspect, not an AT), but that might be because they are in a noisy 
environment, not because they are deaf.  A user might want 
auditoryForVisual, or textForVisual plus auditoryForText, which could 
be the equivalent of the classic screen-reader use case on a desktop - 
but it could be because they are driving, or for any number of other 
reasons they cannot consume visual content at that time.  It might be 
an auditory warning to a pilot using an iPad containing flight manuals 
in an ebook (not yet, but soon).  Using an abstract data model gives a 
great deal more power to the device OS to make the decisions it is best 
placed to make (because it knows what the device can do) AND extends 
the use cases to more general scenarios.
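To make that concrete, here is the sort of shape I have in mind - a 
minimal sketch only, with every identifier invented for illustration:

   // Hypothetical substitution keys: each names a modality mapping,
   // never an assistive technology.  All identifiers are illustrative.
   const prefs: Record<string, boolean> = {
     textForVisual: true,    // replace/augment visual content with text
     auditoryForText: true,  // render text as audio; the OS decides how
     captions: true,         // could be deafness OR a noisy environment
   };

   // Content reacts to the requested modality, never to an AT name.
   if (prefs.auditoryForText) {
     console.log("offer an audio rendering of the text content");
   }

The point is that nothing in those pairs says why the user wants the 
substitution, or which technology will deliver it.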

I may have misunderstood what you meant here - my apologies if so - but 
for me it's very important that "this modality for that one" is part of 
this, because it's the fundamental idea.

> * The location and physical environment of the hardware, via sensors,
> positioning technology and other mechanisms.
> * Need/preference profiles retrieved from the Web in any format that
> the user agent supports (GPII, for example).

Yep. GPII is in my personal view too big right now and not ready, but 
it will be in the future.

> * Inferences drawn from any of the above.
>
> [Editorial note: should there be a hierarchy of sources, e.g., the
> user's explicit preferences override information gathered from the
> environment, or should this be left completely unspecified? It is
> undecided whether needs/preferences not readily available from the
> first two items above (and possibly also the assistive technology
> item) should be excluded from the first version of the specification.]

This is interesting and difficult, imho.  Different devices/vendors 
might want to implement different internal algorithms (as unique 
selling points), but I don't think either source takes precedence - 
it's more a case of matching the two together.  So a particular 
delivery presentation might be what is best for a particular context 
combined with a particular preference (consider 
contrast/font-size/brightness on a phone while walking along in 
changing light conditions - there are different ways a device might 
optimise its delivery).  This *is* new ground, but the expectation in 
my view should be that we model environmental information and 
preferences in the API/data model in a way that lets a device both 
choose and report what adaptations it makes.  A web app would need to 
know what environmental information is available, what preference 
settings are known, and what has already been done.  A device might 
decide not to report some of the things it considers it has already 
taken account of - this needs verbal discussion to capture everything 
carefully.
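The sort of shape I mean - a sketch only, with every name invented - is 
something like:

   // Sketch of what a web app might need to see.  All names hypothetical.
   interface UserContext {
     environment: { ambientLight?: number; noiseLevel?: number };
     preferences: { minFontSize?: number; contrast?: "high" | "normal" };
     applied: string[];  // adaptations the device has already made itself
   }

   function adapt(ctx: UserContext): void {
     // Only act on preferences the device has not already handled,
     // so the app and the OS don't apply the same adaptation twice.
     if (ctx.preferences.contrast === "high" &&
         !ctx.applied.includes("contrast")) {
       console.log("switch the page to a high-contrast stylesheet");
     }
   }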

> The user agent may update the profile in response to changes in any of
> the information sources that it supports.
>
> [Editorial note: should there be an event to notify Web content of
> changes that occur in the profile?]

Yes. Very important that there is.
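For instance (the event name is invented - nothing like this is 
specified yet):

   // Hypothetical: listen for profile changes so content can re-adapt
   // when, say, the user walks from indoors into bright sunlight.
   window.addEventListener("usercontextchange", () => {
     console.log("profile changed; re-query the keys we care about");
   });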

>
> == Access Control ==
> There is a basic set of keys in the need/preference profile that may
> be queried by all Web content using the API. The remaining keys may
> not be queried unless the user grants permission to do so. The user
> agent provides in its user interface a mechanism whereby the user can
> grant or withhold permission. (This is a user agent conformance
> requirement.) Once granted or denied, the permission applies to the
> browsing context as defined in HTML 5.
>
> Grants and denials of permission may be maintained in persistent
> storage by the user agent and retrieved in subsequent interactions.
>
> [Editorial note: It is undecided at what level of granularity the
> permission is granted or denied, e.g., for individual keys, for
> defined categories of keys or for all keys.]
>
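This matches my expectation.  For the record, the call pattern I 
imagine is something like the following - purely a sketch, with the 
entry point and key names invented:

   // Hypothetical async API: basic keys answer immediately; restricted
   // keys resolve only after the user grants permission via the UA,
   // and that grant/denial then persists for the browsing context.
   async function readPrefs(ctx: { get(key: string): Promise<unknown> }) {
     const fontSize = await ctx.get("minFontSize");            // basic key
     const screenReader = await ctx.get("screenReaderActive"); // restricted
     return { fontSize, screenReader };
   }
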
> == Extension Mechanism ==
> The API allows values to be retrieved for keys not defined in the
> specification. Such implementation-defined keys are distinguished (for
> example, by a namespace mechanism) from keys defined in the
> specification.
>
> [Editorial note: the extension mechanism is distinct from the question
> of whether the specification itself should define a core of common
> keys and one or more modules that not every implementation is required
> to support.]
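i.e., something along these lines (the reverse-DNS convention is 
invented for illustration):

   // Invented convention: bare names for keys the spec defines,
   // a reverse-DNS prefix for implementation-defined extensions.
   function isExtensionKey(key: string): boolean {
     return key.includes(".");  // e.g. "com.example.readingGuide"
   }

   console.log(isExtensionKey("lineHeight"));               // false: spec key
   console.log(isExtensionKey("com.example.readingGuide")); // true: vendor key
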
>
> == Specifiable Components of a User's Profile ==
> This section identifies contextual items (needs, preferences
> etc.), which have been proposed for inclusion in the specification.
>
> [Editorial note: use cases need to be added to the subsections below.]
>
> === General ===
> * The user's current point-of-regard. [Note: this isn't really a need
> or preference but it is part of the "context".]
> * Whether the user's keyboard settings allow all interactive elements
> to receive focus.
> * Whether the display colors are currently inverted by the operating
> system or user agent.
>
> === Type Settings ===
> * The user's default font size.
> * The user's minimum font size limit.
> * The user's preferred letter spacing.
> * The user's preferred line height.
>
> === Display Settings ===
> * The user's required display contrast.
> * Whether the user's display requires grayscale or supports full
> color.
> * Whether a lightly colored foreground text on a dark background, or
> dark text on a light background, is preferred.
>
> === Media Alternative Settings ===
> * Whether captions/subtitles should be presented.
> * Which languages are preferred for captions/subtitles (giving an order of preference).
> * Whether captions/subtitles for the deaf and hard of hearing, or
> spoken-language subtitles only, should be provided.
> * Whether closed captions should be used. [Editorial note: this item
> may be redundant.]
> * Whether a text transcript of audio or video is preferred.
> * Whether audio or video media should be presented simultaneously with the transcript (implies that a transcript is required).
> * Whether a video of sign language (i.e., a sign language translation)
> is desired.
> * Which sign languages are preferred (in order of preference).
> * Whether an audio description of video is desired.
> * Whether visual resources should be replaced or augmented by textual alternatives
> (e.g., images by descriptions).
> * Whether visual resources should be replaced or augmented by long descriptions.
> * Whether replacement or augmentation is preferred, i.e., simultaneous presentation of the visual content and the description, or substitution of the description for the visual material.
> * Whether auditory resources should be replaced by visual alternatives
> (e.g., sounds by visual notifications).
> * Whether text, visual content, or both should be replaced by spoken
> content (e.g., recorded or synthetic speech delivered as audio).
> * Whether synthetic speech or human speech is preferred.
> * Whether speech should always commence at the beginning of a
> recording or from the point at which it was last interrupted.
> * Whether spoken alternatives should be substituted only for directive
> content. [Editorial note: definition required.]
> * Whether tactile content should be augmented or replaced by textual alternatives, and whether augmentation or replacement is preferred.
> * Whether tactile content should be augmented or replaced by visual content, and whether to augment or replace.
> * Whether tactile content should be augmented or replaced by auditory content, and whether to augment or replace.
> * Whether auditory content should be replaced or augmented by tactile content, and whether to augment or replace.
> * Whether visual content should be augmented or replaced by tactile content, and whether to augment or replace.
> * Whether visual content that flashes more than three times per second should be suppressed.
> * Whether content that simulates motion should be suppressed.
> * Whether sounds that can cause seizures should be suppressed.
>
> === Presentational Modality Settings ===
> * Whether textual content should be augmented or replaced by visual content, and whether augmentation or replacement is preferred.
> * Whether textual content should be augmented or replaced by audio content, and whether to augment or replace.
> * Whether textual content should be augmented or replaced by tactile content, and whether to augment or replace.
>
> === Screen Reader and Assistive Technology Settings ===
> * Whether a screen reader is active.
> * The name of the active screen reader.
> * The version of the active screen reader.
> * Whether content is required to be compatible with screen readers and
> other assistive technologies.
> [Editorial note: this can be inferred from the above, but perhaps may
> be subject to a separate privacy setting?]
> * With which accessibility APIs or versions thereof the content is
> required to be compatible (e.g., Web or operating system-specific APIs).
>
> [Editorial note: the desirability of some of the requirements in this
> section has been questioned.]
>
> === User Interface Organization and Complexity Settings ===
> * Whether a simple user interface is required.
> * Whether the number of user interface elements presented
> simultaneously should be limited. [Note: this is one dimension of
> simplification; further definition is needed; see other dimensions below.]
> * Whether the text included in the content should use simple
> language/be suitable for a given reading level.
> * Whether the options and functionality available to the user should
> be restricted to those essential to the primary purpose of the interaction.
> * Whether symbols (in symbol systems used by persons with cognitive
> disabilities) should be substituted for text.

I need to check this for completeness, but it seems to capture my 
earlier argument.  I'll leave that argument above because it may help 
us sing from the same hymn-sheet.  The only serious issue here for me 
is how far we support modelling of specific AT (such as screen readers) 
in preferences, tied in with what kinds of devices are in our scope.
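To keep the discussion concrete, the items above might surface as a 
profile fragment along these lines (every key name purely illustrative):

   // Illustrative only: a fragment of the draft's items as key/value pairs.
   const profile = {
     defaultFontSize: 18,              // Type Settings
     minFontSize: 14,
     highContrast: true,               // Display Settings
     captions: true,                   // Media Alternative Settings
     captionLanguages: ["en", "cy"],
     suppressFlashing: true,
     simpleUI: false,                  // UI Organization and Complexity
   };
   console.log(profile.captionLanguages[0]);  // "en"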

andy
andyheath@axelrod.plus.com
-- 
__________________
Andy Heath
http://axelafa.com
