- From: Guy Hickling <guy.hickling@gmail.com>
- Date: Wed, 20 Dec 2023 21:39:05 +0000
- To: WAI Interest Group discussion list <w3c-wai-ig@w3.org>
- Message-ID: <CAAcXHN+7Tt-m5kvZF+7HUkhXPKwht77C2nOiruL3f-0ETNqY6g@mail.gmail.com>
You are quite right to check whether what ChatGPT says is true before using it. AI tools like ChatGPT are very questionable at the best of times. They are useful as tools, but all they do is draw information from a range of other sources, and the information they gather is just a reflection of those unknown sources. So I would not class anything obtained that way as "definitive"; it is much better to track down the original sources and, if they are reputable, quote them instead.

AI tools can pick up wrong information. Perhaps more dangerously, they are being found to hold many of the same prejudices and preconceived ideas that the majority of people do - I have seen comments to that effect in the world of research. That is worrying if they promote anti-equality views. (I think it was in Nature magazine where I recently saw a comment by researchers to that effect.)

On that note, I saw this from a researcher last week:

"Bias, too, is always going to be a really overriding factor and something to really consider around accessibility, because we do know that these tools can be prone to reinforcing the bias from their datasets. Those datasets, particularly when they've been scraped from the Internet, such as in the creation of things like ChatGPT, we know they're from primarily the Western English side of the internet, and they'd be dominated by largely neurotypical writing. I know neurodiversity specialist Liz Chart Hall, who has done some excellent talks on using GenAI tools with neurodivergent students. They emphasised that AI is not a neurodivergent thinker. I think that's really important to keep in mind that the outputs can reinforce those biases...."

*Using AI for accessibility*

Only yesterday I came across a fascinating case where an accessibility consultant, Joe Watkins, ran an informal test to find out how reliable an AI tool might be on our subject. His original question elicited several points of bad advice from it. He had to poke and prod it, using several supplementary questions, before he could beat it into submission and get the right answers!

ChatGPT recommended a placeholder for an unlabelled field, for instance, instead of a label. In fact, it disputed the need for a label. Twice it actually apologised (which seems a very human touch, I have to say!) when challenged on things, for giving wrong information. But in both cases it still refused to back down completely and insisted its solution should be used in addition to the correct one! It also displayed a pronounced reliance on ARIA rather than HTML, and argued the toss when Joe quoted the First Rule of ARIA.

Joe Watkins' conversation with ChatGPT can be seen at https://intopia.digital/articles/using-chatgpt-to-make-chatgpts-experience-more-accessible/. Have a read of it, it's very interesting!

The problem with this kind of thing, of course, is that Joe is an expert who knew the supplementary questions to ask to eventually get the right answers. The average user would not know the questions to ask and would probably take ChatGPT's misleading information as definitive.

Regards,
Guy Hickling
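P.S. For anyone who has not run into the placeholder-versus-label issue before, here is a minimal sketch of the pattern in question (my own illustration, not markup from Joe's actual conversation):

    <!-- The sort of thing ChatGPT suggested: a placeholder doing the
         work of a label. The hint vanishes the moment the user starts
         typing, and browsers and screen readers may or may not expose
         it as the field's accessible name. -->
    <input type="text" id="email" placeholder="Email address">

    <!-- The correct answer: a visible label, programmatically associated
         with the field via for/id. Plain HTML, no ARIA required - which
         is exactly the point of the First Rule of ARIA (don't use ARIA
         where a native HTML element or attribute already does the job). -->
    <label for="email">Email address</label>
    <input type="text" id="email">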
Received on Wednesday, 20 December 2023 21:39:23 UTC