Re: W3C Workshop on Web Games - Position Paper

Thank you Steve, Joshue and Léonie for your constructive feedback, questions and suggestions. Here's an update for everyone...


  1. Ian Hamilton has suggested some great edits to address comments and questions from the last W3C APA group meeting; I have incorporated these into the text on the APA wiki [1].


  2. Steve: good question about HID mappings - I'm doing a bit of research to see how they're currently handled.


  3. Joshue: I'm not aware of any existing engine that has direct support for exposing information to AT (this is now clarified by Ian's amendments to the position paper). The natural first step, to me, would be to use existing ARIA roles to make the game's UI accessible, and to try to get those exposed in a uniform way. Whilst the game is essentially just a 2D canvas, I imagine that, should the Accessibility Object Model become supported, it could provide a bridge between the semantics-free canvas and the ARIA roles that describe the nature of the UI. There's a rough sketch of the existing approach below.
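
  To make that concrete, here's a rough sketch of the existing canvas fallback-content technique: DOM elements placed inside a <canvas> are not painted, but they are exposed to AT, so an engine could mirror its UI state there using standard ARIA roles. All names here are invented for illustration, and a real implementation would also need focus and keyboard handling.

      // Illustrative sketch only: mirror a canvas-rendered menu into
      // the canvas element's fallback content, where AT can reach it.
      const canvas = document.querySelector('canvas')!;

      const menu = document.createElement('ul');
      menu.setAttribute('role', 'menu');
      menu.setAttribute('aria-label', 'Main menu');

      for (const label of ['New game', 'Load game', 'Settings']) {
        const item = document.createElement('li');
        item.setAttribute('role', 'menuitem');
        item.tabIndex = -1;  // the engine would manage focus as the player navigates
        item.textContent = label;
        menu.appendChild(item);
      }

      // Children of <canvas> are fallback content: invisible on
      // screen, but part of the accessibility tree.
      canvas.appendChild(menu);

  Should AOM arrive, the same semantics could presumably be expressed directly on the canvas's accessibility node, without maintaining a mirror DOM.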

  There are some OS-level assistive technologies that sort of work with games (like zoom on a mobile device), but only because they completely bypass the game (i.e. zoom magnifies the rendered screen just before it reaches the display), so the game isn't aware of them.

  I have some thoughts on exposing the semantics of the game, below.

  I'm hoping that the time is now right, and that the Active Game Accessibility proposals can contribute to the discussion.

  I tried to get AudioQuake working again on the latest macOS, mainly for the "level description language" tools, but OpenGL is now deprecated there and isn't running properly for me, so it's very much audio-only (which isn't bad, but makes it less accessible to some). I might try it on Windows.


  4. Léonie: the task of creating a taxonomy for the in-game universe sounds like an interesting challenge, but I agree the scope creep could essentially be infinite [though I am wondering if there's a more fundamental level on which it could be expressed, where that wouldn't be the case]. It seems to me that the best way to present in-game environments and objects would be to find ways to render them that'd be accessible to as many people as possible. I imagine that if such a taxonomy existed, then we (or machines) could do a great deal with that information, but I'm not sure how practicable it would be, both in terms of arriving at a taxonomy, and the risk that providing all of that information could diminish the gaming experience. Are these similar to the concerns you have?
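
  Purely to make the taxonomy idea tangible (every name and field here is invented for illustration), one entry might look something like:

      // Entirely hypothetical sketch of a taxonomy entry for an
      // in-game object; the vocabulary is made up for this example.
      interface GameObjectSemantics {
        kind: 'door' | 'container' | 'creature' | 'terrain';
        traversable: boolean;    // can the player pass through/over it?
        interactions: string[];  // e.g. ['examine', 'open', 'lock']
        salience: 'critical' | 'useful' | 'ambient';  // how much it matters
      }

      const portcullis: GameObjectSemantics = {
        kind: 'door',
        traversable: false,
        interactions: ['examine', 'raise'],
        salience: 'critical',
      };

  Even this tiny example hints at the scope problem: the vocabulary for "kind" alone could grow without bound.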

  An example of using rendering to impart information would be spatial sound to indicate where something's coming from, or an environmental echo/reverb effect that lets the player know they've wandered into a cavern. Whilst it wouldn't be possible to provide all information to all people all of the time, the key thing would be to evoke the feelings that the game's setting is trying to convey, and provide enough cues to allow it to be navigated, so it's still fun.
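
  As a rough sketch of how that might be done with the Web Audio API (the function and the impulse-response path are placeholders of mine, not real assets):

      // Sketch: position a sound relative to the listener, and add a
      // convolution reverb to suggest a cavern-like space.
      const ctx = new AudioContext();

      async function playInCavern(buffer: AudioBuffer,
                                  x: number, y: number, z: number) {
        const source = ctx.createBufferSource();
        source.buffer = buffer;

        // PannerNode provides the directional/distance cue
        // ("where is it coming from?").
        const panner = new PannerNode(ctx, {
          panningModel: 'HRTF',
          positionX: x, positionY: y, positionZ: z,
        });

        // ConvolverNode applies an environmental echo/reverb recorded
        // as an impulse response; a cavern IR evokes a cavern.
        const reverb = ctx.createConvolver();
        const ir = await fetch('/assets/cavern-ir.wav');  // placeholder path
        reverb.buffer = await ctx.decodeAudioData(await ir.arrayBuffer());

        source.connect(panner).connect(reverb).connect(ctx.destination);
        source.start();
      }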

  In a complex game (or VR) world, there may be objects the user can interact with that are naturally expressed using the user's own assistive technology. I'm thinking of a computer within a computer: say the player is hacking into a machine in the game; its console or GUI could easily be expressed in terms of existing ARIA semantics. A book within a game could be expressed similarly.
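
  A quick, entirely hypothetical sketch of that: the in-game machine's console could be backed by a real DOM live region, using the standard ARIA "log" role, so the player's own screen reader announces its output.

      // Hypothetical sketch: an in-game terminal backed by a DOM
      // live region, read by the player's own AT.
      const terminal = document.createElement('div');
      terminal.setAttribute('role', 'log');  // role "log" implies a polite live region
      terminal.setAttribute('aria-label', 'In-game terminal');
      document.body.appendChild(terminal);

      // Called by the (hypothetical) game script whenever the in-game
      // machine prints a line; AT announces it automatically.
      function terminalPrint(line: string) {
        const p = document.createElement('p');
        p.textContent = line;
        terminal.appendChild(p);
      }

      terminalPrint('ACCESS GRANTED - welcome, operator.');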

  Precise information on in-game objects, should it be needed, could perhaps be expressed using constructs like trees or lists. In fact, even navigation around a building could be expressed as a tree, if the object of the game, or the VR experience, is not for the user to take time enjoying the journey but simply to be somewhere (there's a sketch of this idea below).

  However, to be clear: if an experience is meant to be fun for some people, it should be fun for everyone. I'm not advocating a separate, potentially less fun, "accessible" version - rather, I'm just starting to wonder about potential non-game applications of VR/AR, and whether we already have the infrastructure to make some of them accessible, as long as that's in line with the experience they're trying to convey to everyone. Once the user arrives at the VR destination, even if it is by navigating a tree, we could still provide suitable environmental rendering effects to make the experience immersive. I'd be interested in hearing more about your work on this, too. This all sounds tremendously interesting and exciting.
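
  Coming back to that tree idea, here's a minimal sketch using the standard ARIA tree pattern (the locations are invented, and a real version would also need keyboard and focus handling):

      // Sketch: expressing "being somewhere" as a tree of locations.
      interface Place { name: string; children?: Place[] }

      const building: Place = {
        name: 'Visitor centre',
        children: [
          { name: 'Ground floor',
            children: [{ name: 'Lobby' }, { name: 'Café' }] },
          { name: 'First floor', children: [{ name: 'Gallery' }] },
        ],
      };

      function renderTree(place: Place): HTMLLIElement {
        const item = document.createElement('li');
        item.setAttribute('role', 'treeitem');
        const label = document.createElement('span');
        label.textContent = place.name;
        item.appendChild(label);
        if (place.children?.length) {
          item.setAttribute('aria-expanded', 'false');
          const group = document.createElement('ul');
          group.setAttribute('role', 'group');
          for (const child of place.children) {
            group.appendChild(renderTree(child));
          }
          item.appendChild(group);
        }
        return item;
      }

      const tree = document.createElement('ul');
      tree.setAttribute('role', 'tree');
      tree.setAttribute('aria-label', 'Places you can go');
      tree.appendChild(renderTree(building));
      document.body.appendChild(tree);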

  I think ARIA, or similarly spirited standards, could be very useful for the UI of any game, and may well be helpful for providing more detailed information in games that are more turn-based or analytical in nature (though good rendering/theming would be appropriate there too).


Best regards,


Matthew

[1] https://www.w3.org/WAI/APA/wiki/Web_Games_Workshop_Position_Paper
-- 
Matthew Tylee Atkinson
--
Senior Accessibility Engineer
The Paciello Group
https://www.paciellogroup.com
A Vispero Company
https://www.vispero.com/
--
