Re: User Contexts: identifying assistive technologies

I mostly agree with Rich and Raman here. We should expose the feature
support the user wants, not the specific assistive technology that's running.

If for no other reason, consider this: on most platforms, there's no
standard way that assistive technology registers itself with a name,
version, and type. That information simply isn't available to the user
agent. At best, we could try to detect the presence of known assistive
technology, but that would raise the barrier to entry for new or less
popular assistive technology, and there wouldn't be any reasonable path to
support all of the automation and testing tools that use accessibility
APIs. Even if we proposed new platform APIs, it's not clear what we'd do in
the meantime.

I'm not sure that tying this to standards conformance, like ARIA support,
is exactly the right idea, but something along those lines could work. There
are a lot of things an app might want to do to be more accessible that have
nothing to do with ARIA.

Instead, I think the right granularity is more along the lines of
accessibility features that should be supported (rough sketch below):
* Text alternatives: the app should provide alt text, ARIA labels, etc.
* Focus/caret: the app should expose correct on-screen focus rectangles,
because they might be used by a magnifier, highlighter, etc.
* Automation: the app should allow itself to be controlled by assistive
technology - perhaps it should expose extra commands, or not make
assumptions about mouse events, etc.
* Captions
...and so on.
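
To make that concrete, here's one possible shape for it, following the
WebIDL dictionary style from James's draft. Every name below is made up
purely for illustration; I'm not proposing these exact identifiers:

dictionary AccessibilityFeatureSettings {
    // Each flag means "the page should support this feature",
    // not "a particular product is running".
    boolean? textAlternativesNeeded = null;  // alt text, ARIA labels, etc.
    boolean? focusHighlightNeeded   = null;  // accurate on-screen focus/caret rects
    boolean? automationNeeded       = null;  // controllable by AT, extra commands
    boolean? captionsNeeded         = null;  // captions for media
};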

If the user agent detects some client is using accessibility APIs but it
can't tell what sort of tool is running, it might just set all of these
flags to true. However, in some cases the user agent might be able to
detect that a particular tool only cares about a subset of accessibility
APIs.
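
Again purely as a hypothetical sketch, reusing the hasFeature()/valueForKey()
shape from James's example (none of these names are in any draft), a page
would then only branch on the flags that actually affect it:

if (window.settings.hasFeature('AccessibilityFeatureSettings')) {
    // Only adapt the parts of the app that the reported features affect.
    if (window.settings.valueForKey('captionsNeeded')) {
        enableCaptions();            // app-defined
    }
    if (window.settings.valueForKey('automationNeeded')) {
        exposeKeyboardCommands();    // app-defined
    }
}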

- Dominic

On Fri, Jun 21, 2013 at 3:33 PM, James Craig <jcraig@apple.com> wrote:

> Every page request is sent with browser and OS version. This is no
> different, and no one is forcing a user to expose anything. This will be
> triggered in rare circumstances, with user-configurable settings and
> confirmation dialogs similar to location sharing dialogs.
>
> On Jun 21, 2013, at 3:30 PM, raman@google.com (T.V Raman) wrote:
>
> > 1+.  In general, these interfaces should never expose details of
> > the runtime to this degree --
> >
> > Richard Schwerdtfeger writes:
> >> We are really stepping on privacy issues by forcing the user to have to
> >> expose the fact they are using a screen reader to be able to use a site.
> >>
> >> Sent from my iPad
> >>
> >> On Jun 4, 2013, at 1:00 PM, "James Craig" <jcraig@apple.com> wrote:
> >>
> >>> On Jun 4, 2013, at 1:43 AM, Jason White <jason@jasonjgw.net> wrote:
> >>>
> >>>> James Craig <jcraig@apple.com> wrote:
> >>>>> What about adding type tokens, such as "screenreader", "magnifier",
> >>>>> etc.
> >>>>
> >>>> Excellent. An AT can support more than one function.
> >>>
> >>> On second thought, I don't think that will work. Part of the reason
> >>> for splitting these up into separate WebIDL dictionaries is to support a
> >>> hasFeature() detection method similar to DOMImplementation.hasFeature().
> >>>
> >>> Just to make sure you're seeing the entire WebIDL blocks, you should
> >>> be viewing the editor's drafts in a JavaScript enabled browser.
> >>>
> >>> So for this existing WebIDL dictionary…
> >>>
> >>> dictionary ScreenReaderSettings {
> >>>    boolean?   screenReaderActive = null;
> >>>    DOMString? screenReaderName = null;
> >>>    DOMString? screenReaderVersion = null;
> >>> };
> >>>
> >>> …an author could do something like this: (Specific syntax TBD)
> >>>
> >>> if (window.settings.hasFeature('ScreenReaderSettings')) {
> >>>     var isScreenReaderActive = window.settings.valueForKey('screenReaderActive');
> >>> }
> >>>
> >>> I think we'd want to set up a different *feature* altogether for
> >>> 'MagnifierSettings' to standardize properties like zoom level, zoom window
> >>> size, center point, etc.

Received on Friday, 21 June 2013 22:53:44 UTC