Re: Reflow

Hi,
I appreciate the sharing of articles on AI limitations and bias. Somehow,
the framing of this post feels discouraging, to say the least. The person
who started the discussion is just using the tool to frame how they think
about the problem; they are not, at least from my understanding, saying
that something is right or wrong. They are using the chatbot to frame
what's going on in their mind and coming back to the community for better
wisdom. I don't find any issue with that. It's about using the tool more
mindfully.
The subsequent reply about how AI could take over the world is an
overstretch for the very narrow discussion we are having here about
reflowable content. I feel the reply is a bit defensive and anxious.
I wish we could have a more nuanced discussion about technology, and
again, it's really worth asking what we hope to accomplish with the
replies we give to a person.
The underlying assumption here is that the person does not know how to use
ChatGPT mindfully, and I don't think that's true.

Regards,
Kavein
Kaveinthran (He/Him)
Curious, Native Blind
Part time Research Consultant in ADPAN
Disabled independent Human Rights Advocate
email: kaveinthran@gmail.com
twitter <https://twitter.com/kaveinthran>
My LinkedIn <https://my.linkedin.com/in/kaveinthran>


On Thu, Dec 21, 2023 at 06:12 <chagnon@pubcom.com> wrote:

> Quote: But in both cases it still refused to back down completely and
> insisted its solution should be used in addition to the correct solution!
> EndQuote
>
>
>
> Huh.
>
> Did it go something like, “I’m sorry, Dave. I’m afraid I can’t do that”? *
>
>
>
> (* From Stanley Kubrick’s 2001: A Space Odyssey)
>
>
>
> Thanks, Guy, for this fascinating information from the industry. The
> thought that creeps into my mind is that we’re eventually going to
> homogenize our knowledge down to what AI has approved, rather than what is
> truthful or accurate. Will we lose novel ideas, differing points of
> view, thinking outside the box, etc.?
>
>
>
> We will achieve truthiness!
>
> Whoever controls the AI algorithms will control what we humans know and
> think. Gosh, are we moving backwards to Orwell’s 1984?
>
>
>
> —Bevi
>
> *— — —*
>
> Bevi Chagnon  *| *Designer, Accessibility Technician* |*
> Chagnon@PubCom.com
>
> *— — —*
>
> *PubCom: Technologists for Accessible Design + Publishing*
>
> consulting • training • development • design • sec. 508 services
>
> *Upcoming classes* at www.PubCom.com/*classes*
> <http://www.pubcom.com/classes>
>
> *— — —*
>
> Latest blog-newsletter
> <https://mailchi.mp/e694edcdfadd/class-discount-3266574> – *Simple Guide
> to Writing Alt-Text
> <https://www.pubcom.com/blog/2020_07-20/alt-text_part-1.shtml>*
>
>
>
> *From:* Guy Hickling <guy.hickling@gmail.com>
> *Sent:* Wednesday, December 20, 2023 4:39 PM
> *To:* WAI Interest Group discussion list <w3c-wai-ig@w3.org>
> *Subject:* Re: Reflow
>
>
>
> You are quite right to check whether what ChatGPT says is true before
> using it. AI tools like ChatGPT are very questionable at the best of
> times. They are very useful as tools, but all they do is draw information
> from a range of other sources, and the information they gather is just a
> reflection of those unknown sources. So I would not class anything
> obtained that way as "definitive"; much better to track down the original
> sources and (if they are reputable) quote them instead.
>
> AI tools can pick up wrong information. Perhaps more dangerously, they are
> being found to retain many of the same prejudices and preconceived ideas
> that the majority of people hold - I have seen comments to that effect in
> the world of research (I think it was in Nature magazine where I recently
> saw one from researchers). That is worrying if they promote anti-equality
> views.
>
> On that note, I saw this from a researcher last week:
>
> "Bias too, as well is always going to be a really overriding factor and
> something to really consider around accessibility because we do know that
> these tools can be prone to reinforcing the bias from their datasets.Those
> datasets, particularly when they've been scraped from the Internet, such as
> in the creation of things like ChatGPT, we know they're from primarily the
> Western English side of the internet, and they'd be dominated by largely
> neurotypical writing. I know neurodiversity specialist Liz Chart Hall, who
> has done some excellent talks on using GenAI tools with neurodivergent
> students. They emphasised that AI is not a neurodivergent thinker. I think
> that's really important to keep in mind that the outputs can reinforce
> those biases...."
>
>
>
> *Using AI for accessibility*
>
> Only yesterday I came across a fascinating case where an accessibility
> consultant, Joe Watkins, did a random test to find out how reliable an AI
> tool might be on our subject. His original question elicited several points
> of bad advice from it. He had to poke it and prod it, using several
> supplementary questions, before he could beat it into submission to give
> the right answers!
>
> ChatGPT recommended a placeholder for an unlabelled field, for instance,
> instead of a label. In fact, it disputed the need for a label. Twice, when
> challenged on things, it actually apologised for giving wrong information
> (which seems a very human touch, I have to say!). But in both cases it
> still refused to back down completely and insisted its solution should be
> used in addition to the correct solution! It also displayed a pronounced
> reliance on ARIA rather than HTML, and argued the toss when Joe quoted the
> First Rule of ARIA.
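>
> To make the point concrete (my own minimal sketch here, not Joe's exact
> markup, and the field name is just illustrative): the placeholder-only
> pattern ChatGPT favoured looks something like this:
>
>     <!-- Placeholder only: the hint disappears as soon as the user
>          types, and a placeholder is not a reliable accessible name -->
>     <input type="text" id="email" placeholder="Email address">
>
> whereas the correct solution is a visible, programmatically associated
> label, in plain HTML:
>
>     <label for="email">Email address</label>
>     <input type="text" id="email">
>
> That is what the First Rule of ARIA gets at: if a native HTML element or
> attribute already gives you the semantics and behaviour you need, use it
> rather than bolting ARIA onto something else.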
>
> Joe Watkins' conversation with ChatGPT can be seen at
> https://intopia.digital/articles/using-chatgpt-to-make-chatgpts-experience-more-accessible/.
> Have a read of it, it's very interesting!
>
> The problem with this kind of thing, of course, is that Joe is an expert
> who knew the supplementary questions to ask to eventually get the right
> answers. The average user would not know the questions to ask and would
> probably take ChatGPT's misleading information as definitive.
>
> Regards,
>
> Guy Hickling
>
>
>

Received on Tuesday, 26 December 2023 10:56:38 UTC