Re: AI and the future of Web accessibility Guidelines

That is what inspired me to write this today — yes

But it is a much broader topic that I have been meaning to open as a discussion point for some time. 

In fact I think it is something that we need to be spending much more time on — IN PARALLEL with WCAG 3

I believe that this approach will have 1000% or greater impact on people with cognitive, language, and learning disabilities than anything we can do through WCAG 3 itself.

It is not possible today though — so we can’t stop what we are doing and wait for it.

But we CAN:

1. Make sure that what we write supports it when it works for real — and doesn’t confuse or stand in the way of something better.
2. Do what we can to define what those tools should actually DO — and encourage their development.
(The COGA doc is a great start there for the cognitive, language, and learning disability aspects.)


Gregg




> On Apr 4, 2024, at 4:22 AM, Bradley Montgomery, Rachael L <rmontgomery@loc.gov> wrote:
> 
> Hello Gregg,
> 
> (chair hat off)
> 
> I believe we are talking and thinking about this.  To add context, I believe this is in response to the github thread on the new outcomes <https://github.com/w3c/wcag3/discussions/60>.
> 
> You may notice that the outcomes are written without specifying how they will be met. Wording choices like "are available" allow for solutions (methods) that can include AI, assistive technology, etc. The structure's focus on outcomes and wording choices like "are available" are intentional choices to allow WCAG 3 to adapt as the landscape changes.
> 
> The methods are the technology-specific parts, and those can shift over time. So right now a passing method may be that the author adds text alternatives. In 10 years, a passing method may be that an author doesn't override AI-based tools that generate alternatives.
> 
> The current exploratory draft of the text you are referring to is "Text alternatives are available for non-text content that conveys context or meaning." I expect this will be reworded, but one point may be to drop the word "text" so that we don't assume text is the delivery mechanism. So perhaps we end up with something more like "Alternatives are available for non-text content that conveys context or meaning." While the wording will absolutely evolve, I think it is still future-looking by allowing for evolving methods that reach the outcomes.
> 
> 
> 
> Kind regards,
> 
> Rachael
> From: Gregg Vanderheiden RTF <gregg@raisingthefloor.org>
> Sent: Thursday, April 4, 2024 3:02 AM
> To: GLWAI Guidelines WG org <w3c-wai-gl@w3.org>
> Subject: AI and the future of Web accessibility Guidelines
>  
> I think much of our work is not forward-looking.   
> 
> We will soon have AI that can do a better job of text alternatives than humans can, for example.
> And then it is unclear why we would require authors to do all this work.
> This applies to a LOT of things.
> 
> I think maybe we should be considering a new structure for our requirements:
> 
> Need:  When people cannot see a picture clearly or at all, it is important that they be able to perceive the information presented in the picture by having it presented in another form such as text, speech, or braille.  If it is in e-text, it can be easily converted into any sensory form (visual text, speech, braille, or sign).
> 
> Outcome:  Where the publicly available AI is not able to generate a [good] text alternative for pictures, then an author-generated text alternative is provided.
> 
> 
> This does contain the word [good] since we don’t want this to apply before it is ready — and it certainly is not ready today.
> But I would bet even money (or 3-to-1 money) that before WCAG 3 is out, autogenerated text alternatives will be better than 80%-90% of humans in a controlled test of humans vs. AI in describing pictures.  Even the intent of pictures (though sighted people have only the picture to guess the intent from, so it is not clear why blind people can’t guess the intent).  ALSO, auto-descriptions can provide layered descriptions — and even queryable descriptions.
> 
> Picture of a woman playing a violin
> Woman is seated and wearing a formal gown
> Woman has a darker skin tone, black hair worn long, and appears to be around 30-40 years old
> Query - what kind of formal dress?
> Query - what kind of chair?
> Query - tell me more about their hairstyle
> Query - tell me more about the background of the picture
> 
> The queryable alternatives are already possible today — and I’m not sure that AI won’t be better than 80-90% of image describers by next year.
> 
> 
> 
> We really need to think about what we are doing —  what we want to achieve — and the best way to get there.
> 
> If browser mfgrs added these capabilities to their browsers, the cost to add the capability may be less than the costs saved by JUST THEIR OWN web authors at their companies — much less the costs saved across all companies.
> 
> 
> We need to talk and think
> 
> Gregg
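The layered, queryable descriptions sketched in the quoted message can be illustrated with a small data structure. This is a purely hypothetical sketch: the field names, the stored answers, and the `query` helper are invented placeholders, not the output or API of any real AI image-description tool.

```python
# Illustrative sketch of a "layered, queryable description" for an image.
# All data below is a hypothetical placeholder, not real AI output.

description = {
    "summary": "Picture of a woman playing a violin",
    "layers": [
        "Woman is seated and wearing a formal gown",
        "Woman has a darker skin tone, long black hair, "
        "and appears to be around 30-40 years old",
    ],
    # Hypothetical answers a describing tool might hold for follow-up queries.
    "details": {
        "dress": "a floor-length evening gown",
        "chair": "an armless wooden orchestra chair",
        "hairstyle": "long black hair, worn loose",
        "background": "a dimly lit concert-hall stage",
    },
}

def query(desc, topic):
    """Return the stored detail for a topic, or say none is available."""
    return desc["details"].get(topic, "No further detail is available.")

print(query(description, "chair"))    # an armless wooden orchestra chair
print(query(description, "weather"))  # No further detail is available.
```

A real tool would generate the layers and answer open-ended queries on demand; the point here is only that an alternative need not be a single flat string — it can be a structure the reader drills into.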

Received on Thursday, 4 April 2024 15:38:07 UTC