Re: Request for joint meeting during TPAC

Hi Janina, all,

I’m not an XR expert, but I've been looking into this and have some background info and a couple of suggestions for discussion points. I don’t have much to add to the insightful comments included in and linked from your invite to the Accessibility Object Model (AOM) team [0] and the XR review discussion [1], but hope that the below may be of use.

1. I agree that providing some sort of semantic/structural info is of paramount importance. I can think of three ways this may help people explore a scene, ranging from...

  a. more “traditional” exploration of the accessibility tree with “classical” ATs like screen-readers, to

  b. in-world exploration such as line-of-sight (think “explore by touch” as provided on mobile devices, but with an extra dimension). This is similar to how we gave players information in AudioQuake, and it was Luke Wagner, of WASM fame, who suggested during a natter at the W3C Workshop on Web Games that it might be a nice way to explore in XR; there's a rough sketch of the idea after this list. But there's also…

  c. using the accessibility information, along with known details about the user’s capabilities, in order to select appropriate rendering methods and effects to enhance the user’s understanding of—and immersion in—the scenes.
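
To make (b) above a touch more concrete, here's a very rough sketch (plain TypeScript, deliberately not tied to any particular XR engine or API) of what line-of-sight exploration over a labelled scene might boil down to: cast a ray from the user's head pose and report the semantic label of the nearest labelled object it hits. All of the names here (SceneObject, describeGazeTarget and so on) are made up for illustration.

    // Hypothetical sketch: "explore by line of sight" over a labelled scene.
    // None of these types come from a real XR API; they only illustrate how
    // semantic labels attached to world objects could drive a spoken/textual report.

    interface Vec3 { x: number; y: number; z: number; }

    interface SceneObject {
      label: string;   // human-readable semantic label, e.g. "wooden door"
      centre: Vec3;    // world-space position
      radius: number;  // crude bounding sphere, for the sake of the demo
    }

    function sub(a: Vec3, b: Vec3): Vec3 { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
    function dot(a: Vec3, b: Vec3): number { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Distance along the (unit-length) ray at which it first hits the object's
    // bounding sphere, or null if it misses. Standard ray/sphere intersection.
    function hitDistance(origin: Vec3, dir: Vec3, obj: SceneObject): number | null {
      const oc = sub(origin, obj.centre);
      const b = 2 * dot(oc, dir);
      const c = dot(oc, oc) - obj.radius * obj.radius;
      const disc = b * b - 4 * c;
      if (disc < 0) return null;
      const t = (-b - Math.sqrt(disc)) / 2;
      return t >= 0 ? t : null;
    }

    // Report the nearest labelled object along the user's gaze direction.
    function describeGazeTarget(head: Vec3, gaze: Vec3, scene: SceneObject[]): string {
      let best: { obj: SceneObject; t: number } | null = null;
      for (const obj of scene) {
        const t = hitDistance(head, gaze, obj);
        if (t !== null && (best === null || t < best.t)) best = { obj, t };
      }
      return best
        ? `${best.obj.label}, about ${best.t.toFixed(1)} metres ahead`
        : "nothing in that direction";
    }

    // Example: looking straight ahead (+z) from the origin.
    const scene: SceneObject[] = [
      { label: "wooden door", centre: { x: 0, y: 0, z: 5 }, radius: 1 },
      { label: "stone wall",  centre: { x: 0, y: 0, z: 12 }, radius: 3 },
    ];
    console.log(describeGazeTarget({ x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: 1 }, scene));
    // -> "wooden door, about 4.0 metres ahead"

The interesting part isn't the geometry, of course; it's that the label (and potentially richer semantics) has to come from somewhere, which is the point of 1 above.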

2. Microsoft’s SeeingVR tools provide a great example of several of the things we've been discussing. These are plug-ins for Unity that provide post-processing filters. Some require no intervention from the content author; others are more parameterised and thus do require some input. I found an article about SeeingVR [2], which links to a CHI 2019 paper (PDF) [3]. Also, the idea of implementing ATs within the XR authoring tool/generated world reminds me of the person who implemented a screen-reader as a Unity plug-in [4]. They're effectively embedded ATs, so there's no AOM connection, but this may be useful background info.
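
As an aside on the "embedded AT" idea: on the web side, the equivalent could be sketched very roughly as below, with the content itself doing the talking via the standard Web Speech API. This is purely to illustrate the concept; it is not how the Unity plug-in in [4] works, and announce() is just a made-up helper name.

    // Rough sketch of an "embedded AT" for web-based XR content: the content
    // itself speaks, via the standard Web Speech API, rather than going through
    // the platform accessibility tree (hence no AOM connection).

    function announce(text: string): void {
      const utterance = new SpeechSynthesisUtterance(text);
      speechSynthesis.cancel();   // interrupt any previous announcement
      speechSynthesis.speak(utterance);
    }

    // e.g. called whenever the user's "virtual focus" moves to a new object:
    announce("Wooden door, 4 metres ahead. Press Enter to open.");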

3. Scope of possible discussion with the AOM team:

  a. I think there could be a lot of parties involved in accessibility/semantics and XR: AOM, XR, ARIA, browser, OS and AT developers. My simplistic understanding is that AOM provides a means to introspect and modify the accessibility tree (the fundamentally awesome concept being to treat the tree as potentially distinct from the DOM, though it's most powerful when combined with it). Anyway, if new types of accessibility (or other semantic) data are to be attributed to parts of an XR scene, that seems to me to be out of scope for AOM. The semantics would need to be specified on a larger scale, with all of those groups taking part.

  b. Of course some of the AOM team are browser developers, and I am sure they'll have very useful input on the semantics. I'm still thinking about what could be discussed specifically with the AOM team to start exploring how feasible this is; maybe it's as simple as asking them whether they've thought about it? I did find one sort-of-relevant issue on their GitHub repo [5], but it's about UI semantics within VR, not semantics /of/ the VR.

  c. Inviting such a potentially huge specification effort sounds like scary scope creep. It probably isn't feasible right now to specify semantics that could describe a virtual world in detail; again, I imagine accessible rendering techniques could be better suited to much of that. A more focused goal of describing things like where the objects are (as has been discussed in APA before, regarding coordinate systems) and possibly how they're related (I'm imagining a tree, but that's just my default setting) could be good, though; there's a rough sketch of what I mean after this list. That's not to say all of that info would be exposed to the user/visitor/player all the time. The reason for focusing on these things is that people need to be able to navigate, which this info could help with, and to understand, which the creative content/rendering choices could afford. Maybe asking the AOM team their thoughts about this specifically could bear fruit?

  d. Various 3D audio-games have done things like invent the notion of a guide-like character with whom you're working or exploring, and whom you can follow, in order to aid navigation. This, especially in a game, is far more immersive and fun than following an abstract and utilitarian AT, but both approaches could benefit from some underlying semantic navigation info. (In fact, the "bots" that feature in many games for players to co-operate or compete with already make use of similar info, so /something/ like this exists in games!)
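
To put a rough shape on the "focused goal" in (c) above, below is a small, entirely hypothetical sketch of the kind of data I mean: a tree of labelled, positioned nodes, a simple "what's near me?" query over it, and a crude bridge to today's ATs using plain ARIA attributes on hidden DOM elements. None of the names (SemanticNode, nearby, mirrorToDOM) come from any existing or proposed API; whether and how something like this should really be exposed is exactly the question for the groups listed in (a).

    // Hypothetical "where things are and how they relate" data for an XR scene:
    // each node has a label, a position and children. Nothing here is a proposed
    // API; it just shows how little structure is needed to support navigation
    // queries and a bridge to classical ATs.

    interface SemanticNode {
      label: string;                       // e.g. "kitchen", "table", "mug"
      position: [number, number, number];  // world-space coordinates (x, y, z)
      children: SemanticNode[];
    }

    const world: SemanticNode = {
      label: "cottage",
      position: [0, 0, 0],
      children: [
        {
          label: "kitchen",
          position: [3, 0, 0],
          children: [
            {
              label: "table",
              position: [4, 0, 1],
              children: [{ label: "mug", position: [4, 0.8, 1], children: [] }],
            },
          ],
        },
      ],
    };

    // Navigation aid: list everything within `range` metres of the user, with
    // its path in the tree (e.g. "cottage > kitchen > table > mug").
    function nearby(node: SemanticNode, user: [number, number, number],
                    range: number, path: string[] = []): string[] {
      const here = [...path, node.label];
      const d = Math.hypot(node.position[0] - user[0],
                           node.position[1] - user[1],
                           node.position[2] - user[2]);
      const hits = d <= range ? [`${here.join(" > ")} (${d.toFixed(1)} m away)`] : [];
      return hits.concat(...node.children.map(c => nearby(c, user, range, here)));
    }

    console.log(nearby(world, [3, 0, 0], 2));
    // -> ["cottage > kitchen (0.0 m away)",
    //     "cottage > kitchen > table (1.4 m away)",
    //     "cottage > kitchen > table > mug (1.6 m away)"]

    // Crude bridge to classical ATs: mirror each node into a hidden DOM element
    // carrying plain ARIA attributes, so an existing screen-reader could at
    // least list and walk the structure, e.g. mirrorToDOM(world, document.body).
    function mirrorToDOM(node: SemanticNode, parent: HTMLElement): void {
      const el = document.createElement("div");
      el.setAttribute("role", "group");
      el.setAttribute("aria-label", node.label);
      parent.appendChild(el);
      node.children.forEach(c => mirrorToDOM(c, el));
    }

Something this small wouldn't describe a world in detail, and it isn't meant to; the point is that even this much would support "where am I and what's around me?" style navigation, with rendering choices (as in 1c and 2) doing the heavier lifting for understanding and immersion.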

I have also been thinking quite a bit about the idea from last week regarding the possibilities of two-way communication between content and ATs, but my research and thoughts on that are not XR-specific, so I will close here and leave it as a separate matter for now.

best regards,


Matthew

[0] https://lists.w3.org/Archives/Public/public-apa/2019Aug/0084.html
[1] https://www.w3.org/2019/08/28-apa-minutes.html#item04
[2] https://www.microsoft.com/en-us/research/blog/advancing-accessibility-on-the-web-in-virtual-reality-and-in-the-classroom/
[3] https://www.microsoft.com/en-us/research/uploads/prod/2019/01/SeeingVRchi2019.pdf
[4] https://assetstore.unity.com/packages/tools/gui/ui-accessibility-plugin-uap-87935
[5] https://github.com/WICG/aom/issues/139
-- 
Matthew Tylee Atkinson
--
Senior Accessibility Engineer
The Paciello Group
https://www.paciellogroup.com
A Vispero Company
https://www.vispero.com/
