Is it time for AI with Accessibility?

A new model for personalization

I have been an advocate for visual personalization since 2008. I have
fought for WCAG criteria that enforce separation of content from
presentation, in an effort to apply users' visual needs to existing web
pages. However, so much content is supplied at runtime or in stylesheets
that this effort has proved profoundly difficult. The approach also opens
security holes, because it requires internal access to web content.

Maybe what we need is an AI approach. What if we analyzed the rendered
image of a web page as blocks of content to be classified into something
like HTML elements? This process would be similar to OCR or voice
recognition, except that the data should be more regular.
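
To make the idea concrete, here is a rough Python sketch of what that
classification step might look like if it were framed as object detection
over a page screenshot. Everything in it is an assumption for illustration:
the pretrained torchvision detector is only a stand-in (it knows nothing
about page layout and would have to be retrained on screenshots annotated
with HTML-like roles), and the role vocabulary is hypothetical.

    # Sketch: page-structure recognition as object detection on a screenshot.
    import torch
    import torchvision
    from PIL import Image
    from torchvision.transforms.functional import to_tensor

    # Hypothetical role vocabulary a real detector would be trained to emit.
    ROLES = ["heading", "paragraph", "nav", "list", "figure", "table"]

    def detect_blocks(screenshot_path, score_threshold=0.5):
        """Return candidate content blocks as (box, label, score) tuples."""
        # Stand-in model; a real system would be fine-tuned on page layouts.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        model.eval()
        image = to_tensor(Image.open(screenshot_path).convert("RGB"))
        with torch.no_grad():
            prediction = model([image])[0]
        blocks = []
        for box, label, score in zip(prediction["boxes"],
                                     prediction["labels"],
                                     prediction["scores"]):
            if score >= score_threshold:
                blocks.append((box.tolist(), int(label), float(score)))
        return blocks

The output would then drive restyling or restructuring of each block,
rather than relying on the page's own markup.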

Consider the application to low vision. This group needs page restructuring
(such as linearization), color control, spacing, reflow of text, and
enlargement beyond what most rendering algorithms support.

The difference between OCR and this structure recognition is that we would
always have an image of the intended page, rendered from the runtime HTML.
While AI might be too slow for live pages, it would certainly be useful for
books and publications.
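
One consequence of always having the runtime HTML alongside the rendered
image is that training data could, in principle, be generated
automatically. Below is a rough sketch, assuming Selenium with a headless
Chrome driver, of how a page could yield a (screenshot, labeled bounding
boxes) pair from its live DOM; the tag list is just an illustrative
assumption.

    # Sketch: harvest (screenshot, labeled boxes) pairs from a rendered page.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    # Illustrative tag set to label; a real system would cover far more roles.
    TAGS = ["h1", "h2", "p", "nav", "ul", "table", "img"]

    def capture_training_pair(url, screenshot_path="page.png"):
        """Save a screenshot and return labeled boxes taken from the live DOM."""
        options = Options()
        options.add_argument("--headless")
        driver = webdriver.Chrome(options=options)
        try:
            driver.get(url)
            driver.save_screenshot(screenshot_path)
            boxes = []
            for tag in TAGS:
                for element in driver.find_elements(By.TAG_NAME, tag):
                    r = element.rect  # {'x', 'y', 'width', 'height'} in CSS pixels
                    boxes.append({"tag": tag,
                                  "box": (r["x"], r["y"], r["width"], r["height"])})
            return screenshot_path, boxes
        finally:
            driver.quit()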

However, if the major browser vendors got into the game, live delivery
might be possible. Google, for example, is a leader in deep learning. This
approach would eliminate the need for assistive technology to have internal
knowledge of a page's structure, scripts, and styles.

We may be at the point where this is possible. I am interested in any
research in this direction. Does anybody know of anything like this in
progress?

Best, Wayne Dick

Received on Monday, 7 December 2020 20:21:37 UTC