Re: User Contexts: identifying assistive technologies

Andy Heath <andyheath@axelrod.plus.com> wrote:
 
> If User Contexts *only* had applicability to "disabled" people
> (whatever that means) then this would be a valid argument but they
> don't.  An indication that a user requires textForVisual could be
> because they are in a noisy environment rather than because they are
> deaf.  A request for auditoryForVisual could be because they are
> driving, and so on.  The consequence is that adaptations to
> those preferences can improve the experience for many more people -
> as we all know is the case with many "accessibility" adaptations.
> But we lose the capability to associate the adaptation *only* with
> those specific human characteristics, which is a good thing
> (provided we can still deliver appropriate adaptations, which we
> can).

With screen readers now ubiquitous in several mobile operating systems (indeed
provided by default and needing only to be switched on), the use of one
doesn't necessarily imply any particular human characteristic either. In fact,
drivers who insist on using mobile devices on the road should be encouraged to
turn their screen reader on temporarily.

What really suggests a disability, though, is an expensive proprietary screen
reader running on certain desktop operating systems: software that only someone
who truly needed it would be likely to own.

More broadly, as we attempt to predict the future, I expect those in
challenging environments who need accessibility support to rely on the same
built-in tools (perhaps with a different configuration) as people with
disabilities. If that's the case, then I'm not sure how much of a give-away the
proposed properties will turn out to be: at the moment, they're highly
indicative, but in a world of seamlessly accessible mobile devices, which we
expect to emerge during the time-frame of the User Contexts deployment, perhaps
less so.

Received on Thursday, 6 June 2013 08:10:43 UTC