Re: Accessibility at the W3C Workshop on Web Games - feedback and questions

Hi Joshue, Jason,

Sorry for my laggy reply; we've been on holiday :-). Thanks Jason for the link in your email to the WebAssembly WebIDL bindings proposal, though it seems they've re-organised the repo since, and the URL now appears to be <https://github.com/WebAssembly/interface-types/blob/master/proposals/interface-types/Explainer.md>.

Thanks for your comment on the Level Description Language (LDL) paper; glad you enjoyed it. I'm working on making it run in a more user-friendly manner on contemporary systems [1]. Would love to see someone with better geometry skills apply the same approach to newer engines :-).

You had a few questions; I'll see what I can do to start answering them...

> How can we know which runtime environment, rendering or VM machine environment - when used as a platform for gaming or XR applications, provides the best architecture for accessibility and is sympatico with existing AT?

There are a few different levels of environment we might consider:

 * Underlying platform (native app, native mobile app, console, browser).
 * Engine/middleware (e.g. Unity, Unreal, ...).
 * Game-specific code (such as UI code).

In terms of platforms where there /is/ an accessibility layer provided by the system (e.g. UI Automation on Windows, or ARIA in browsers), I think those accessibility layers generally have similar semantics, so it seems to me that the goal is just making sure the bridge from them to the game itself is there.

Existing accessibility layers are geared towards access to the UI, so they could help with CVAA (and similar) compliance and with getting into a game. Once in the game, it's much more game-dependent whether the use of different rendering types/effects might be enough to convey the game to players, or whether further semantic info may be needed.

But (and I think this may be the central part of what you're asking) what about the games themselves: are they coded in such a way that this model of object-oriented UI accessibility we're used to actually fits?

It's my expectation that UI code in games will have become more object-oriented over time, and that much of it is probably even provided as library code by the engines/middleware out there. Assuming the UI code is relatively object-oriented, it could match up quite nicely with the DOM/ARIA approach. This is something I'm looking forward to exploring now that I'm back at the keyboard.

How close that match is will determine how much work the accessibility code (whether added as a library to the game's codebase before compilation to WebAssembly, or provided as a JS library in the browser) will need to do. It doesn't feel like there's anything insurmountable here; I'll report back when I've gained some experience with this.
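
To make that a little more concrete, here's a very rough sketch (in TypeScript, with the GameMenuItem shape entirely made up; a real engine would expose something richer) of mirroring a game's in-memory menu into DOM elements with ARIA roles, so that existing AT can reach it through the browser's accessibility tree:

    // Hypothetical shape of the game's own UI object.
    interface GameMenuItem {
        label: string;
        activate: () => void;
    }

    // Mirror the game menu into the DOM so AT can perceive and operate it.
    function mirrorMenu(items: GameMenuItem[], container: HTMLElement): void {
        const menu = document.createElement('ul');
        menu.setAttribute('role', 'menu');
        for (const item of items) {
            const el = document.createElement('li');
            el.setAttribute('role', 'menuitem');
            el.tabIndex = -1;
            el.textContent = item.label;
            el.addEventListener('click', () => item.activate());
            menu.appendChild(el);
        }
        container.appendChild(menu);
    }

The interesting question is how much of that mapping could be generated automatically from the engine's own UI objects, rather than hand-written per game.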

> Which has the best potential for semantic support and communication with platform and browser APIs?

I think I've strayed into this topic above; let me know if I've not answered it clearly.

> You mention glTF, and I'm not totally sure how that fits into the stack? It doesn't seem to be a full browser runtime environment like WebAssembly, but enables the loading of 3D scenes and models. So my question is around your references to the benefits of 'machine-readable applications' and how this could be good for accessibility? Do glTF files have inherent support for object description or other meta-data can provide an accessibility architecture when loaded? I just don't know much about this.

I believe you're right in that glTF prescribes a minimal environment, enough to display the model/scene, in a similar way to how the HTML <video> or <audio> elements work: the browser provides the UI and does the rendering.

My understanding is that there are optional extra payloads that could be sent along with the textures and the model. These may include e.g. a JSON file that provides some semantic information on the model.
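
If memory serves (and I'd need to double-check against the spec), glTF objects can carry a free-form "extras" property for application-specific data, so a semantic payload might be read out of the parsed JSON along the lines of the sketch below; the "accessibleLabel" key is an invented convention, not anything standardised:

    // Minimal, assumed shape of a parsed glTF node.
    interface GltfNode {
        name?: string;
        children?: number[];
        extras?: { accessibleLabel?: string };  // hypothetical payload
    }

    // Gather whatever human-readable labels the file happens to provide.
    function collectLabels(nodes: GltfNode[]): string[] {
        return nodes
            .map(node => node.extras?.accessibleLabel ?? node.name)
            .filter((label): label is string => Boolean(label));
    }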

I think that discussion is, or will be, ongoing between WebXR and Khronos as to what information that might be.

(I'm a little tight for time at the moment, so haven't looked up the spec, but if I can find any further info/clarification on this, I'll follow up later.)

> Any finally, ..any more info you have on 'semantic-scene graph' modelling would be really helpful *grin.

In essence, the idea of a semantic scene graph is to include semantic info behind the rendered scene (of course, convention would dictate the meaning, so I can't give any concrete examples yet), so that the scene could be explored in a more mechanical manner. E.g. if a series of shapes together were intended to form a teapot, these could be grouped within the scene and labelled as such. A-Frame takes a similar approach, using the DOM to create the scene graph rather than having it as an opaque structure in memory.
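
As a sketch of that teapot example in A-Frame terms (the "data-semantic-label" attribute is an invented convention for this email, not part of A-Frame; the primitives are just stand-ins for whatever geometry the author actually used):

    // Group the shapes that make up a "teapot" under one labelled parent
    // entity in A-Frame's DOM-based scene graph.
    function buildTeapot(scene: Element): Element {
        const teapot = document.createElement('a-entity');
        teapot.setAttribute('data-semantic-label', 'teapot');

        const body = document.createElement('a-sphere');
        const spout = document.createElement('a-cone');
        const handle = document.createElement('a-torus');
        teapot.append(body, spout, handle);

        scene.appendChild(teapot);
        return teapot;
    }

Because the grouping and label live in the DOM, a script (or, in time, AT) could walk the scene and report "teapot" rather than three anonymous meshes.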

Luke Wagner also mentioned the idea of projecting imaginary lines from the player's perspective into the scene; should the scene contain sufficient semantic info, whatever those lines hit could be presented to the player as they explore. We actually did something very similar to give AudioQuake players information on their environment (there we used sounds to indicate certain environmental features).
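
A sketch of how that might look, where castRayFromCamera() and the semanticLabel field are hypothetical stand-ins for whatever the engine/scene graph would actually provide; here the result is pushed to an ARIA live region so a screen reader would announce it:

    // Hypothetical: an object in the scene that carries a semantic label.
    interface LabelledObject {
        semanticLabel?: string;
    }

    // Hypothetical: the engine casts a ray from the player's viewpoint and
    // returns the first labelled object it intersects, if any.
    declare function castRayFromCamera(): LabelledObject | null;

    // Announce what lies directly ahead of the player.
    function describeWhatIsAhead(liveRegion: HTMLElement): void {
        const hit = castRayFromCamera();
        if (hit?.semanticLabel) {
            liveRegion.textContent = `Ahead: ${hit.semanticLabel}`;
        }
    }

In AudioQuake we triggered sounds rather than text, but the underlying idea (query the world along a line and surface what's there) is the same.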

Hope this helps—do let me know if I can provide any further clarification or info.

best regards,


Matthew

[1] https://github.com/matatk/agrip
-- 
Matthew Tylee Atkinson
--
Senior Accessibility Engineer
The Paciello Group
https://www.paciellogroup.com
A Vispero Company
https://www.vispero.com/
--
