Re: AI and the future of Web Accessibility Guidelines

We also have the problem of hallucinations from people, and even from populations.

I find the AIs even today to be MUCH more reliable than asking the same questions of anyone around me, or of search engines.

I check them all before relying on them. I wish everyone would check the comments and answers given by everyone around them as well.

g



> On Apr 4, 2024, at 6:52 AM, Hidde de Vries <hidde@hiddedevries.nl> wrote:
> 
>> On 4 Apr 2024, at 09:02, Gregg Vanderheiden RTF <gregg@raisingthefloor.org> wrote:
>> 
>> We will soon have AI that can do a better job of text alternatives than humans can, for example.
>> And then it is unclear why we would require authors to do all this work.
>> This applies to a LOT of things.
> 
> (in personal capacity)
> 
> There isn't conclusive research to say that the LLMs' problem of hallucination and utterances of falsehoods is solvable; in fact, some say it is inevitable (https://arxiv.org/abs/2401.11817).
> 
> Add to that the problems of environmental cost, bias, copyright and social issues (including the working conditions of people categorising stuff), and it seems fair to me to continue to require authors to provide text alternatives and descriptions. 
> 
> Of course, I'm not saying that should stop any users from using such tools in addition to what websites provide or fail to provide.

Received on Thursday, 4 April 2024 18:18:05 UTC