- From: Jim Tobias <tobias@inclusive.com>
- Date: Thu, 17 Sep 2015 21:03:07 +0000
- To: Phill Jenkins <pjenkins@us.ibm.com>, Gregg Vanderheiden <gregg@raisingthefloor.org>
- CC: IG - WAI Interest Group List list <w3c-wai-ig@w3.org>, Lisa Seeman <lseeman@us.ibm.com>
- Message-ID: <BY2PR01MB54054E5AD52E9EC7B8AC7EAC35A0@BY2PR01MB540.prod.exchangelabs.com>
I agree with Phill that we already know how to do a lot of this today, so we don't have to wait until we know how to do all of it. Many of the current techniques are effective and science-based, so perhaps some of them could begin to be thought of as Success Criteria within certain conditions. There's a great deal going on in the Plain Language movement - even inside the US government - that's worth teaming up with.

I have a non-technological point to make. We agree that 'information' is highly contextual: Who is the audience? What is the environment of use? What does each user bring to the encounter in terms of skills and expectations? What are users expected to come away with in terms of cognition and action? The whole notion of 'author's intent' as modified by the concept of 'reader response' is an ongoing debate in discourse analysis that we could usefully tune into. For now, though, we seem to agree that there are no comprehensive technological solutions to this complex contextuality. We can only rely on what authors do, both in addressing the cognitive skills and needs of as wide a user base as possible (given the purpose of the author's effort) and in giving users some choices. Authors know - or should know - their audience and domain, so why not focus on assistance to authors? Giving them markup and an explanation is eminently feasible.

> We don't know how to do this today

I think we do know how to do a lot of it today. When I view the challenge or problem as a two- or three-dimensional matrix, there is a lot we can deliver, or at least work on, today:

1. We have technologies that change the modality of the content from text to audio via TTS, voice recognition to auto-create text captions, and even experimental text-to-ASL avatars.
2. We have device capabilities and formats across smart phones, tablets, desktops, various display sizes, and output devices including refreshable braille displays.
3. We have experimental image recognition technologies and advanced OCR.
4. We have visual/text presentation transformation technologies - line spacing, word and character spacing, color and contrast, font style, etc. - in platforms, browsers, and plug-in and cloud-delivered AT.
5. We have experimental summarization technologies.
6. We have emerging translation technologies (e.g., German to English).
7. We have stable authoring/developer guidelines, such as the 38 Success Criteria in WCAG 2.0 Levels A and AA.
8. We have tablet-based, "AT-like" solutions (apps) being delivered today to people with cognitive disabilities for things like rehab and job training.

So:

a. If you are narrowly referring to the problem of taking any block of random text from the web and converting it into various levels of simpler text, we do have experimental summarization technology, so we have at least one level of transformation.
b. We could invent a tag or attribute for marking up at least 6 levels of language comprehension, if someone wanted to provide variants of a block of text by hand for further studies of effectiveness.
c. I don't think we are ready to propose any new Success Criteria that would apply to "all" content. Perhaps there is room for a new Level AAA Success Criterion, but I've not thought that through yet.

There is other related work in this space, so it would be good to connect and not duplicate our precious resources:

- Cognitive and Learning Disabilities Accessibility Task Force
  http://www.w3.org/WAI/PF/cognitive-a11y-tf/work-statement
- Coleman Institute for Cognitive Disabilities
  http://www.colemaninstitute.org/

we = IBM Research, browsers, platforms, university research programs, Coleman Institute, Google, Apple, Microsoft, AT vendors, etc.

____________________________________________
Regards,
Phill Jenkins, IBM Accessibility
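To make point (b) concrete, here is a minimal sketch of how such markup and a user agent might interact. Everything here is an assumption for illustration, not an existing standard: the `data-comprehension-level` attribute name, the 1 (simplest) to 6 (most complex) scale, and the nearest-level selection rule are all hypothetical.

```python
from xml.etree import ElementTree as ET

# Hypothetical markup: each hand-authored variant of the same passage carries
# a data-comprehension-level attribute (1 = simplest, 6 = most complex).
HTML = """<div class="passage">
  <p data-comprehension-level="2">The sun heats the sea. Water rises as vapor.</p>
  <p data-comprehension-level="5">Solar radiation drives evaporation from the
  ocean surface, lifting water vapor into the atmosphere.</p>
</div>"""

def select_variant(root, preferred_level):
    """Return the variant whose level is closest to the user's preference."""
    variants = root.findall(".//p[@data-comprehension-level]")
    return min(
        variants,
        key=lambda p: abs(int(p.get("data-comprehension-level")) - preferred_level),
    )

root = ET.fromstring(HTML)
# A user agent configured for a simpler reading level picks the level-2 text.
print(select_variant(root, 1).text)
```

A browser, plug-in, or cloud AT could apply the same selection rule client-side, hiding all but the chosen variant; the point is only that hand-authored levels plus a small attribute are enough to enable the effectiveness studies mentioned above.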
Received on Thursday, 17 September 2015 21:03:42 UTC