Re: Schema.org - identifying accessible documents

Hi Matthew,
Thank you for your contribution.

One of the goals of the larger effort is to avoid dividing services
and processes into those for people with disabilities and those for
people without disabilities. If we have segregated or special
processes for people with disabilities, those processes are more
vulnerable to losing support. The intention is that anyone may have a
functional requirement/need/preference (e.g., audio when I don't have
a display, captions when the kids are screaming, clear language in an
emergency and under stress).

Regarding the longevity of descriptions of people vs. devices (i.e.,
devices change faster than people), I agree. However, it does not
follow that we need to list all our incapacities, or the things that
are "wrong with us", to find a useful resource. The functional
requirements (or needs and preferences) can be described in a way that
doesn't tie the preference to a device. For example, "I prefer
information in audio form" is not device dependent and can be
satisfied in a variety of ways (including emerging 3D audio devices or
as-yet-unanticipated devices/services). From a functional perspective
it really doesn't matter whether this is because I am blind, working
in the dark, in an eyes-busy work situation, have an eye infection, or
am in an environment where my screen is illegible. Similarly, stating
that I'm blind or can't see is functionally ambiguous: it does not
specify whether I prefer Braille, text-to-speech, and/or audio
descriptions. (BTW, if we wanted a way to list our disabilities or the
capacities we lack, I agree the ICF would be a great place to find
terms.)
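The device-independent framing above can be sketched as data: a
preference names a functional outcome only, and a resolver matches it
against whatever modalities the current context happens to offer. All
names below are invented for illustration; they are not drawn from
schema.org, GPII, or any other real vocabulary.

```python
# Illustrative sketch of a device-independent functional preference.
# The vocabulary here is made up; it is not a real schema.org or
# GPII term set.

preference = {"need": "information-as-audio"}  # the "why" is deliberately absent

# What the current device/context happens to offer.
available_modalities = {
    "screen-text": False,       # e.g. no display, dark room, eyes busy
    "text-to-speech": True,
    "recorded-audio": True,
}

def satisfy(preference, modalities):
    """Return any available modality that meets the functional need."""
    audio_like = ["recorded-audio", "text-to-speech", "3d-audio"]
    if preference["need"] == "information-as-audio":
        for modality in audio_like:
            if modalities.get(modality):
                return modality
    return None

print(satisfy(preference, available_modalities))  # -> recorded-audio
```

The point of the sketch is that the same preference record is
satisfiable by today's devices and by as-yet-unanticipated ones:
adding "3d-audio" to a future context requires no change to the
stored preference.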

Jutta



Quoting Matthew Tylee Atkinson <matkinson@paciellogroup.com>:

> This is my first post; good to virtually meet you all.  I'm an
> accessibility engineer from The Paciello Group.  Previously I was a
> university researcher on topics involving semi-automated adaptations
> for accessibility.
>
> I feel that, as an industry, we should be talking about the   
> capabilities (and therefore access needs) of the _person_, in human   
> terms, rather than the technical characteristics of the _device_ or   
> content.  This is because the gamut of capabilities and resultant   
> accessibility needs of _people_ is going to change extremely slowly   
> over time, whereas _devices_ and content come and go rapidly.  Here   
> are two examples:
>
>  * The minimum font size (in pixels, or even centimetres) that   
> someone requires on one device will not be the same on all devices,   
> due to the form-factor of the device and viewing distance — but if   
> we had at least a rough idea of the user's visual acuity, then this   
> would enable us to get to a reasonable 'out-of-the-box' font size   
> for a user when they come to use a new device to access content.
>
>  * Recording that someone doesn't use a mouse might inform us of   
> required accessibility adaptations on the desktop, but is not   
> portable to thinking about how they might interact with tablets.    
> However, saying something about their dexterity could help us make   
> some judgements about which gestures they may be able/unable to   
> perform on the tablet and thus we could adapt the UI accordingly   
> (e.g. by sacrificing screen space for larger, more easily-usable   
> widgets).
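The first bullet above can be made concrete with a rough geometric
sketch. The 5-arcminute figure comes from the Snellen definition of
20/20 vision (a 20/20 letter subtends 5 arcminutes); the comfort
multiplier is an assumed value, since comfortable reading sizes are
typically a few times the acuity threshold.

```python
import math

def comfortable_font_size_mm(acuity_denominator, viewing_distance_mm,
                             comfort_factor=3.0):
    """Rough starting letter height (mm) from Snellen acuity
    (20/denominator) and viewing distance.

    A 20/20 viewer resolves a letter subtending 5 arcminutes; worse
    acuity scales that angle linearly.  comfort_factor (an assumption,
    not a standard) scales the bare threshold up to a comfortable size.
    """
    threshold_arcmin = 5.0 * (acuity_denominator / 20.0)
    threshold_rad = math.radians(threshold_arcmin / 60.0)
    threshold_mm = 2 * viewing_distance_mm * math.tan(threshold_rad / 2)
    return comfort_factor * threshold_mm

# A 20/40 user at 40 cm needs roughly twice the letter height of a
# 20/20 user at the same distance; move the device closer or further
# away and the same acuity figure still yields a usable default.
```

Because the stored fact is the acuity, not a pixel size, the same
profile yields a sensible default on a phone at 30 cm and a TV at 3 m.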
>
> There is an internationally-recognised and ratified document that   
> classifies the way people function that could be used as a solid   
> basis for this.  It’s edited by the WHO and is called the   
> International Classification of Functioning, Disability and Health   
> (ICF) [1].  I also dug out a couple of papers that were written as   
> part of the research project I was involved in, in this area, which   
> may be of interest [2, 3].
>
> It seems to me that a human-capability-based approach addresses
> concerns raised in this thread, such as concentrating on what people
> can (or possibly can't) do, and the issue of who gets to decide on
> the taxonomy (it's already been done). Mapping human capabilities to
> specific preferences/settings or adaptations available within
> apps/OSes/sites/content, or to GPII-style more technical
> preferences, would then of course be necessary, but that's not
> impossible and feels like a smaller problem IMHO.
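The mapping step described above could start life as a plain lookup
from capability statements to candidate adaptations. Every term below
is invented for this sketch; none are real ICF codes or GPII
preference keys.

```python
# Illustrative mapping from simplified capability statements to
# candidate adaptations.  The terms are made up for this sketch and
# are not ICF codes or real GPII preference keys.

CAPABILITY_TO_ADAPTATIONS = {
    "reduced-visual-acuity": ["larger-default-font", "high-contrast-theme"],
    "reduced-fine-dexterity": ["larger-touch-targets",
                               "no-multi-finger-gestures"],
    "no-hearing": ["captions", "visual-alerts"],
}

def suggest_adaptations(profile):
    """Collect the adaptations suggested by each capability in the
    profile, preserving order and skipping duplicates."""
    suggested = []
    for capability in profile:
        for adaptation in CAPABILITY_TO_ADAPTATIONS.get(capability, []):
            if adaptation not in suggested:
                suggested.append(adaptation)
    return suggested
```

A real mapping would be per-platform (the same dexterity statement
implies different gestures on desktop vs. tablet), which is exactly
why the capability, not the adaptation, is the right thing to store.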
>
> With respect to the point about non-linear capabilities, that is a
> little tougher. It would be great if we could get the WHO to produce
> a semantic(-web) version of the ICF, though so far that hasn't
> happened. I'm not sure whether the reasoning systems can cope with
> it, but I have seen a lot of clever things done with them, so I am
> hopeful.
>
> Regarding the barrier of encouraging developers to mark things up:
> this is a key question. We can perhaps glean quite a bit
> automatically: inherently, from structured formats/standards; by
> using 'sniffing' techniques; or by authoring tools imparting
> semantic information automatically (e.g. auto-layout in Xcode/iOS
> allows UIs to adapt more easily to different screen sizes).
>
> Finally, I think the ideas of context ("when I'm busy", "when I'm
> tired", "when I'm in a noisy environment") raised earlier in the
> thread are very important, and it's easy to think this is a huge,
> possibly intractable problem. However, if we make our goal (in
> terms of finding accessible content, or making suitable adaptations
> to the UI) "get to a good-enough solution that the user can then
> tweak if need be", then I think it's definitely achievable. We
> don't need the machine to reason about _everything_; rather, to
> solve the problem, we need collaboration between the machine and
> the user(s). So it's partly about giving users the tools they can
> use themselves to make further appropriate adaptations (and the
> awareness that such tools exist; now _that's_ a complex problem!)
>
> best regards,
>
>
> Matthew
>
> [1] http://www.who.int/classifications/icf/
>
> [2] "The potential of adaptive interfaces as an accessibility aid   
> for older web users", W4A 2010, https://dspace.lboro.ac.uk/2134/6262
>
> [3] "Towards ubiquitous accessibility: capability-based profiles and  
>  adaptations, delivered via the semantic web", W4A 2012,   
> https://dspace.lboro.ac.uk/2134/9789
> --
> Matthew Tylee Atkinson - @matatk
> The Paciello Group

Received on Wednesday, 24 June 2015 15:05:09 UTC