Re: AI and the future of Web accessibility Guidelines

Hi everyone,

The chairs have discussed this topic and the points raised, to consider how we can productively incorporate it in future work.

The summary is: We don’t know what will be possible, or how available AI tools will be to people with disabilities. Therefore, we should allow for improved tools and user-agents that include AI features, but we cannot yet rely on them.

Overall, the potential leap forward in user-agent capability from machine learning, AI and related techniques highlights why we wanted to take the ‘outcome’ approach: defining the outcome for the end user, with different ways to achieve it.

Also, we cannot treat “AI” as one topic. There are many types of AI and many approaches to using these tools; some may work better (for accessibility purposes) than others, and some might become freely available whilst others remain out of reach for most people.

Absolute statements such as “it will never work” and “AI will be better than people at X” are not helpful to the conversation, because the end result is very unlikely to be absolute. Different contexts, different machine-learning approaches, and different datasets will produce different results.

This is not intended to stifle discussion on the topic; it is just to say that in meetings and in our work we will be careful not to get side-tracked.

AI tools should come into discussions such as accessibility-supported and the conformance approach, but we will have to treat them as a possible future rather than a foregone conclusion (in either direction).

Kind regards,

-Alastair


From: Gregg Vanderheiden RTF <gregg@raisingthefloor.org>
Date: Thursday, 4 April 2024 at 08:03
To: GLWAI Guidelines WG org <w3c-wai-gl@w3.org>
Subject: AI and the future of Web accessibility Guidelines


I think much of our work is not forward-looking.

We will soon have AI that can, for example, do a better job of text alternatives than humans can.
And then it is unclear why we would require authors to do all this work.
This applies to a LOT of things.

I think maybe we should be considering a new structure for our requirements.

Need:  When people cannot see a picture clearly or at all, it is important that they be able to perceive the information presented in the picture by having it presented in another form such as text, speech or braille.  If it is in e-text it can be easily converted into any sensory form (visual text, speech, braille or sign).

Outcome:  Where the publicly available AI is not able to generate a [good] text alternative for pictures, then an author-generated text alternative is provided.


This deliberately contains the word [good] since we don’t want this to apply before it is ready, and it certainly is not ready today.
But I would bet even money (or 3-to-1 money) that before WCAG 3 is out, auto-generated text alternatives will be better than 80-90% of humans in a controlled test of humans vs. AI at describing pictures. They may even capture the intent of pictures (though sighted people have only the picture to guess the intent from, so it is not clear why blind people can’t make the same guess). ALSO, auto-descriptions can provide layered descriptions, and even queryable descriptions, for example:


  *   Picture of a woman playing violin

     *   Woman is seated and wearing a formal gown

        *   Woman has a darker skin tone, long black hair, and appears to be around 30-40 years old

     *   Query - what kind of formal dress?
     *   Query - what kind of chair?
     *   Query - tell me more about their hairstyle
     *   Query - tell me more about the background of the picture

The queryable alternatives are already possible today, and I would not be surprised if AI is better than 80-90% of human image describers by next year.
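
To make the layered/queryable idea concrete, here is a minimal TypeScript sketch. The names (LayeredDescription, describeToUser) and the stubbed query implementation are hypothetical, purely for illustration; they are not an existing API, and a real version would call an image-understanding model.

    // Hypothetical layered, queryable image description.
    // Layer 0 is a short summary (like a traditional alt text),
    // deeper layers add detail, and query() answers follow-up questions.
    interface LayeredDescription {
      summary: string;
      layers: string[];
      query(question: string): Promise<string>;
    }

    // How a user agent or assistive technology might walk the layers
    // and pass a user's follow-up question through.
    async function describeToUser(image: LayeredDescription): Promise<void> {
      console.log(image.summary);
      for (const layer of image.layers) {
        console.log(layer);
      }
      console.log(await image.query("What kind of chair is she sitting on?"));
    }

    // Example instance based on the picture above; query() is stubbed here.
    const example: LayeredDescription = {
      summary: "Picture of a woman playing violin",
      layers: [
        "Woman is seated and wearing a formal gown",
        "Woman has a darker skin tone, long black hair, and appears to be around 30-40 years old",
      ],
      query: async (question) => `Answer to "${question}" generated from the image`,
    };

    describeToUser(example);
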



We really need to think about what we are doing, what we want to achieve, and the best way to get there.

If browser manufacturers added these capabilities to their browsers, the cost of adding the capability may be less than the costs saved for JUST THEIR OWN web authors at their companies, much less the costs saved across all companies.


We need to talk and think

Gregg
