Re: AI catfishing [was Re: ChatGPT and ontologies]

On 10/02/2023 18:01, David Booth wrote:
> On 2/9/23 06:43, Dave Reynolds wrote:
>> . . .
>> https://www.epimorphics.com/writing-ontologies-with-chatgpt/
> 
> Nice post!  I agree with the potential usefulness (and limitations) that 
> you observe.  But I cannot shake my overriding concern that AI like 
> ChatGPT will be abused as a spammer's or parasitic website owner's 
> dream, to flood the web with 1000x more 
> plausible-sounding-but-misleading-or-wrong crap than it already has, 
> thus making it even more difficult to find the nuggets of reliable 
> information.  AI is a major force multiplier.  As with any other force 
> multiplier, it can be used for good or bad.

Exactly so. There are dangers in people thinking the output might be 
accurate when it's not, and separate dangers in people deliberately 
misusing it to generate floods of such content.

> I personally think we need legislation against AI catfishing, i.e., AI 
> *pretending* to be human.
> 
>   - AI-generated content should be clearly labeled as such.
> 
>   - Bots should be clearly labeled as such.

A worthy aim, though I'm sceptical any such legislation could be 
usefully enforced.

An additional tool is education: training people to interpret what 
they read more critically.

AI assistants are going to be a fact of life in the future, and people 
will need the skills to use them well and safely, irrespective of what 
can be done to limit deliberate misuse.

Dave

Received on Saturday, 11 February 2023 10:11:01 UTC