
RE: XR Accessibility Primer Draft

From: White, Jason J <jjwhite@ets.org>
Date: Tue, 8 Oct 2019 13:24:26 +0000
To: Joshue O Connor <joconnor@w3.org>, RQTF <public-rqtf@w3.org>, Matthew Tylee Atkinson <matkinson@paciellogroup.com>
Message-ID: <BN7PR07MB4849C042A6F590000A822EFAAB9A0@BN7PR07MB4849.namprd07.prod.outlook.com>
Here are my comments, to which Josh referred (excluding the comments that Josh has indicated have already been addressed).

In discussing accessibility APIs, it would be useful to clarify that they are implemented at the operating system level, and that each is specific to its own platform. Giving examples from more platforms would also be useful; as the draft is currently written, both examples are for Microsoft operating systems.

I would also suggest clarifying that not all accessibility considerations are addressed by accessibility APIs, and indicating the assistive technologies in which they are deployed (e.g., screen readers, speech input, etc.).

There is an architectural question here: to what extent should existing assistive technologies and their underlying APIs be extended to cover XR environments, and to what extent should the necessary accessibility features be implemented directly in the libraries and components used to build XR applications? You seem to allude to this in mentioning braille and haptic devices, but the point isn't developed directly.

Canvas example: this is rather unclear. I would suggest adding several lines of code and explanatory text to show the application of a hit region to a path, so that it is clear how this works to those who haven't encountered it before. For instance, draw a square or a circle, and define a hit region; then add an ARIA role. I would also appreciate further discussion of the advantages/shortcomings of extending this approach to 3D graphics.
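To make the suggestion concrete, something along these lines might work (a hypothetical, self-contained sketch of the underlying idea rather than the actual addHitRegion() API, which was experimental and has since been removed from browsers; the essential mechanism is a mapping from a drawn shape's pixel area back to an accessible control with a role and name):

```javascript
// Hypothetical sketch: emulate a hit region for a drawn circle by
// recording its geometry alongside ARIA-style metadata, then mapping
// pointer coordinates back to the accessible control.
const hitRegions = [];

// "Draw" a circle and register a hit region with an ARIA role and name.
function addCircleHitRegion(cx, cy, r, role, label) {
  hitRegions.push({ cx, cy, r, role, label });
}

// Given a pointer event's coordinates, find the region (if any) it hits.
function hitTest(x, y) {
  return hitRegions.find(
    (reg) => (x - reg.cx) ** 2 + (y - reg.cy) ** 2 <= reg.r ** 2
  ) || null;
}

addCircleHitRegion(75, 75, 50, 'button', 'Play');

console.log(hitTest(80, 80));  // inside the circle: the 'Play' button region
console.log(hitTest(200, 10)); // outside: null
```

The same pattern - geometry plus accessible name/role, resolved at interaction time - is what would need to scale up to 3D, which is where the shortcomings for XR start to show.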

In discussing the relationship between WebGL and the WebXR API: clarify that WebXR uses WebGL as its rendering mechanism, but that alternative mechanisms may be supported in the future (i.e., this is an intentional point of extensibility).

In describing the rendering support, perhaps a sentence or two introducing the hardware possibilities would be useful. What range of hardware is WebXR designed for?

Also, the WebXR support for input devices (e.g., via the Gamepad API) should be better explained. Perhaps an overview section describing the hardware (including input and rendering) would be valuable prior to entering into the details. Currently, the input section is located uneasily within the scene graph discussion (i.e., between "scene graphs" and "semantic scene graphs"). Some of the issues related to support for various input devices should be discussed, or a reference should be provided to a suitable treatment of the topic elsewhere.
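For instance, the draft might illustrate the per-frame polling pattern that Gamepad-style input uses, along these lines (navigator is mocked here so the sketch runs standalone; in WebXR the gamepad object comes from an XRInputSource rather than navigator.getGamepads(), and the controller id and values below are invented for illustration):

```javascript
// Hypothetical sketch: Gamepad-style input is polled, not event-driven.
// Each animation frame, the application snapshots controller state.
// navigatorMock stands in for the browser's navigator object.
const navigatorMock = {
  getGamepads: () => [
    { id: 'xr-controller', buttons: [{ pressed: true }], axes: [0.2, -0.8] },
  ],
};

// Snapshot the state of all connected controllers for this frame.
function pollInput(nav) {
  const pads = nav.getGamepads().filter(Boolean);
  return pads.map((pad) => ({
    id: pad.id,
    select: pad.buttons[0].pressed,   // primary "select" button
    thumbstick: pad.axes.slice(0, 2), // first two axes: thumbstick x/y
  }));
}

console.log(pollInput(navigatorMock));
```

Spelling out this polling model would also make it easier to discuss which input devices (and which assistive input methods) map cleanly onto it and which do not.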

An example of what the nodes and edges of a scene graph - or of a semantic scene graph - would represent would be useful.
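As an illustrative sketch (not drawn from any particular engine - the node names, roles, and actions are invented): nodes are scene objects, edges are parent-to-child containment, and the "semantic" fields are the kind of annotation a semantic scene graph might add on top of the purely spatial one.

```javascript
// Illustrative sketch: nodes are scene objects, edges are parent→child
// containment. The role/action fields are hypothetical semantic
// annotations of the kind an AT could consume.
const scene = {
  name: 'room', role: 'region',
  children: [
    { name: 'table', role: 'group',
      children: [
        { name: 'lamp', role: 'button', action: 'toggle light', children: [] },
      ] },
    { name: 'door', role: 'button', action: 'exit room', children: [] },
  ],
};

// Walk the graph depth-first and collect the interactive nodes
// that an assistive technology could expose to the user.
function interactiveNodes(node, out = []) {
  if (node.action) out.push(`${node.role} "${node.name}": ${node.action}`);
  for (const child of node.children) interactiveNodes(child, out);
  return out;
}

console.log(interactiveNodes(scene));
// → [ 'button "lamp": toggle light', 'button "door": exit room' ]
```

Even a toy example like this would let the draft show the difference between a plain scene graph (geometry and transforms only) and a semantic one (roles, names, and available actions).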

In discussing the Accessibility Object Model: as I understand it, AOM is currently confined to ARIA roles, states, and properties, which may not be sufficiently expressive - even in ARIA 1.2 - to convey the structure and relationships inherent in a 3D XR scene well enough to create a quality interaction for the AT user. This issue merits further analysis.



-----Original Message-----
From: Joshue O Connor <joconnor@w3.org>
Sent: Tuesday, October 8, 2019 5:51 AM
To: RQTF <public-rqtf@w3.org>; Matthew Tylee Atkinson <matkinson@paciellogroup.com>
Subject: XR Accessibility Primer Draft

Hi all,

At TPAC we had a very interesting discussion in APA about how 3D and Immersive Web content is generated. We discussed the various processes involved, as well as the technologies and their roles in different parts of the stack.
Special thanks are due to Nell Waliczek (Amazon), who took the time to walk the group through the rendering process and facilitated a very useful talk. I've made an effort to capture the gist of that discussion.

I've also tweaked its purpose a little: it strikes me that a doc that explains some accessibility fundamentals to Immersive Web folks, as well as Immersive Web fundamentals to accessibility folks, would be useful. So rather than just being my 'notes', I've developed this into a draft 'XR Accessibility Primer' document. [1]

There have been some really useful comments by Jason already, and I will iterate the doc accordingly.

We are all trying to get to grips with the mechanics of this space, so anything you can share to improve this doc, and our collective understanding, is much appreciated (I'm looking forward to input from Matthew especially!).

All comments and thoughts welcome, especially on glaring omissions/gotchas etc.

[1] https://www.w3.org/WAI/APA/wiki/XR_Accessibility_Primer

Thanks

Josh

--
Emerging Web Technology Specialist/Accessibility (WAI/W3C)



Received on Tuesday, 8 October 2019 13:24:52 UTC

This archive was generated by hypermail 2.4.0 : Tuesday, 17 January 2023 20:26:46 UTC