RE: Comments: accessible real-time communication - use cases

Thank you, Josh. The draft to which I am referring is the one that Dom cited recently in his comments: https://tools.ietf.org/id/draft-holmberg-mmusic-t140-usage-data-channel-00.html
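In rough terms, that draft dedicates a WebRTC data channel to T.140 real-time text, with the text sent as UTF-8 at short buffering intervals rather than one message per keystroke. A minimal browser-side sketch of the idea (assumptions: the channel is created in-band here purely for brevity, whereas the draft negotiates it out of band in the SDP; onCharacterTyped is an invented hook, not part of any API):

    // Illustrative only: a T.140-style real-time text channel in a browser.
    const pc = new RTCPeerConnection();
    const t140 = pc.createDataChannel("t140", { ordered: true, protocol: "t140" });
    t140.binaryType = "arraybuffer";

    const encoder = new TextEncoder();
    let pending = "";

    // T.140 practice: buffer keystrokes briefly (roughly 300 ms) instead of
    // sending one message per character.
    setInterval(() => {
      if (pending.length > 0 && t140.readyState === "open") {
        t140.send(encoder.encode(pending));
        pending = "";
      }
    }, 300);

    // Invented helper: queue each typed character for the next interval.
    function onCharacterTyped(ch: string): void {
      pending += ch;
    }

    // Receiving side: decode the UTF-8 text and append it to the conversation log.
    t140.onmessage = (ev: MessageEvent) => {
      const text = new TextDecoder("utf-8").decode(ev.data as ArrayBuffer);
      console.log("remote RTT:", text);
    };

The point for the use-case document is simply that real-time text becomes a character-by-character channel carried alongside the audio and video.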


-----Original Message-----
From: Joshue O Connor <joconnor@w3.org>
Sent: Wednesday, September 11, 2019 8:11 AM
To: White, Jason J <jjwhite@ets.org>; public-rqtf@w3.org
Subject: Re: Comments: accessible real-time communication - use cases

Super useful feedback, Jason! I've implemented many of your suggestions and updated the document accordingly.

I've actually merged the 'use cases' section so it all now sits under 'user needs'. It was originally separate in order to cover technical use cases that were not directly a 'user need' but would support one.

In this document I think it's fine that they are together, and you are right that it reads better.

One thing I'm not sure of, so please clarify:

> Support for Real Time Text: the notes should be updated to reflect the IETF draft on real-time text in WebRTC. (This is in addition to the possibility of moving the notes into a separate table or section).
Which draft are you referring to here? Is it RFC 5194? [1]

That is heavily referenced already, so let me know.

Thanks, and your excellent input is appreciated.

Josh

[1] http://www.rfc-editor.org/rfc/rfc5194.txt



On 26/08/2019 18:53, White, Jason J wrote:
> Responding to the draft at
> https://www.w3.org/WAI/APA/wiki/Accessible_RTC_Use_Cases
>
> In the User Needs and Scenarios section: there are notes here regarding which issues may be covered by which specifications. I suggest moving this material out into a table or into a separate section. Some of these notes were apparently written when the document was focused on WebRTC, and should be revised now that the document has a broader ambit.
>
> Incoming calls: it would be useful to clarify the accessibility issues (e.g., alerting of assistive technologies via relevant APIs). Wouldn’t this be entirely solved by implementing WCAG 2.1? Should we remove scenarios that are straightforward applications of WCAG – or, at least, clarify why they aren’t?
>
> Accessible call set up: I’m not sure whether there’s an issue that extends beyond ordinary user interface accessibility/WCAG implementation here. Is the claim that users need the option of choosing call recipients from a list (as opposed to entering names/addresses/telephone numbers directly)? This can only be done if the system already has a list of possible recipients. For adding/removing users from the call, the accessibility issue appears to arise when relay or similar services are invoked, but that’s addressed elsewhere in the document. I think this scenario should be removed or rewritten to clarify the accessibility-related needs that are not addressed elsewhere.
>
> Dynamic audio description values: this reads as though it’s a video/multimedia scenario rather than a real-time communication scenario that happens to involve (presumably live) video, such as in a teleconference. I suggest rewriting and clarifying.
>
> Audio subtitling: again, this is written as a “media player” requirement rather than as a real-time communication requirement. Also, it describes a failure of accessibility, whereas it should describe the scenario in which the accessibility need is met.
>
> Text communication data channel: given that braille displays attached to host computers are controlled by a “screen reader” of some sort, this text, as written, is unclear and confusing to me. It appears to be suggesting simultaneous braille and spoken output from the same screen reader, but used for different purposes. I suggest clarifying and amplifying the scenario. Is it really a focus handling issue, in which the braille display should be tracking one text log and the spoken output another? Is it an ARIA issue?
>
> Control relative volume and panning position: I would suggest clarifying how this might work, and why it would be useful in an RTC context, making the scenario more concrete.
>
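To make the volume/panning item above more concrete, one possible realisation (a sketch only, not something the wiki draft specifies) is to route each remote audio track through a Web Audio graph with a per-participant gain and stereo panner; remoteStream below stands for a MediaStream delivered by a peer connection's "track" event:

    // Sketch only: per-participant volume and stereo position for remote
    // WebRTC audio, using the Web Audio API.
    declare const remoteStream: MediaStream;

    const ctx = new AudioContext();
    const source = ctx.createMediaStreamSource(remoteStream);
    const gain = ctx.createGain();            // relative volume for this talker
    const panner = ctx.createStereoPanner();  // left/right position, -1 to +1

    source.connect(gain).connect(panner).connect(ctx.destination);

    // Example: place this participant slightly to the left and boost them,
    // so a listener can separate overlapping voices spatially.
    panner.pan.value = -0.5;
    gain.gain.value = 1.4;
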
> Support for Real Time Text: the notes should be updated to reflect the IETF draft on real-time text in WebRTC. (This is in addition to the possibility of moving the notes into a separate table or section).
>
> Text (RTT) relay services aren’t mentioned here, but probably should be.
>
> Warning and recovery of lost data: how does this differ from a requirement applicable to all users generally, who would appear to have an equally strong need to be alerted to network problems? What’s the accessibility issue here? Is it that the alert needs to be provided – and available in text? If so, this is a general UI accessibility concern addressed by WCAG.
>
> Call status data and polling: if this functionality is provided by the application, then taking care of its accessibility would seem to be a straightforward application of WCAG. The point, then, appears to be that this functionality is important – perhaps more so to some users with disabilities than to the population generally. I think the point of the scenario and its rationale should be clarified.
>
> Second scenario (call status): as the note suggests, isn’t this covered by WCAG (e.g., “busy signal” can’t be audio-only)?
>
> Assistance for older users or users with cognitive disabilities: as the note suggests, this needs to be clarified. Is it a UI accessibility issue, or a request for human assistants to be available to intervene in calls? The latter would seem to be addressed by the same technical measures needed for inviting a relay service into a conversation.
>
> Personalised symbol sets: I agree with the note, this is a generic UI issue.
>
> Identify caller: I suggest merging with the “incoming calls” scenario, and considering whether there’s anything here beyond a general UI accessibility concern.
>
> Live transcription and captioning: consider merging with the earlier video-related scenarios and clarifying.
>
> “Use Cases” section: the distinction between this section and “User needs and scenarios” isn’t clear, and only makes the document more confusing. I would suggest having only one kind of section that addresses scenarios and requirements, thus eliminating “use cases” as a separate section and merging the material into the preceding text. I think the actual discussion of real-time text and instant messaging-style interfaces could usefully be written as a scenario along the lines of other scenarios in the document.
>

--
Emerging Web Technology Specialist/Accessibility (WAI/W3C)



Received on Wednesday, 11 September 2019 12:33:13 UTC