Re: Is it time for AI with Accessibility?

I have been advocating for this for some time. Here are a couple of pieces.

I actually think this is the future of accessibility. 

Attached is one doc, and it is also covered in this keynote:

HCII Keynote - July 2020

Gregg Vanderheiden delivered the keynote address at the 22nd International Conference on Human-Computer Interaction on July 21, 2020. He highlighted a growing gap between current UI and UX design and people who have low "digital affinity," and proposed an alternate approach to accessibility and extended usability for next-next-generation UI/UX. Insight into this problem came from work at the Trace R&D Center on developing and pilot testing Morphic, an open-source extension to the Windows operating system intended to make computers easier for people to use.

Summary of the keynote:
https://drive.google.com/file/d/1A1dFCAS1VA6ypwHK6kbgaIQ_OFG8RBM4/view?usp=sharing


Link to the keynote:
https://drive.google.com/file/d/1Udg7S_SmfhKDFv_u58nzFHXAYnBLawBD/view?usp=sharing



I have also secured a funded project to look at next-next-generation user interfaces and the challenges they will present, and to work on developing a research agenda to address them. The Info-Bot approach will be part of that.

Jonathan, if you or others have or come across information that could contribute to this approach (AI for accessibility), please let me know. I would greatly appreciate it.


Best 


gregg

———————————
Professor, University of Maryland, College Park
Director, Trace R&D Center, UMD
Co-Founder, Raising the Floor: http://raisingthefloor.org
and the Global Public Inclusive Infrastructure (GPII): http://GPII.net




> On Dec 8, 2020, at 12:02 PM, Jonathan Avila <jon.avila@levelaccess.com> wrote:
> 
> Apple has been working on screen component detection (https://9to5mac.com/2020/12/03/ios-engineer-details-apples-approach-to-improving-accessibility-with-ios-14/), and some accessibility vendors are doing the same. While many of us have not shared our own research and efforts on this topic publicly, some companies, such as overlay vendors, are publicly claiming to use the technology: https://accessibe.com/product/artificial-intelligence. I can't speak to the validity of these public claims.
> 
> Jonathan
> 
> -----Original Message-----
> From: Charles 'chaals' (McCathie) Nevile <chaals@yandex.ru>
> Sent: Tuesday, December 8, 2020 5:05 AM
> To: w3c-wai-ig@w3.org
> Subject: Re: Is it time for AI with Accessibility?
> 
> On Tue, 08 Dec 2020 07:20:48 +1100, Wayne Dick <wayneedick@gmail.com> wrote:
> 
>> I am interested in any research in this direction. Anybody know about 
>> anything like this in progress?
> 
> Hello Wayne, all.
> 
> I went to a presentation in New Zealand in the early 2000s, at the invitation of Graham Oliver, on a project that had been running for quite some years (if I recall correctly, since the early 90s) to do exactly this.
> 
> I no longer recall enough to easily find it (and I have looked for it before without success).
> 
> The basic idea was to use machine learning systems to look at the interface of a user's computer and provide a personalised approach to understanding the components. Initially the system used a very expensive, high-powered computer to read the interface of a standard desktop PC, but as more computing power became available, it was slowly morphing toward software running directly on the machine.
> 
> I also recall that a large part of the explanation about automatic visual recognition used jet fighter planes as the example object to follow.
> 
> In my mind the project may have been associated with Stanford University, and it may have been called Eureka, although that name is widely used, so it is not a very helpful search term :(
> 
> If this rings a bell with anyone I would love to find more pointers to the work.
> 
> Cheers
> 
> Chaals
> 
> --
> Using Opera's mail client: http://www.opera.com/mail/
> 

Received on Tuesday, 8 December 2020 17:16:52 UTC