Re: AI catfishing [was Re: ChatGPT and ontologies]

One potential difference is that spam, by its very nature, can only be
"imitation to a degree" -- to serve its purpose, the spam must ultimately
reveal its true intent.
But an LLM spoof intended to "imitate to the fullest degree" doesn't
necessarily have to reveal the underlying intent at all.



On Fri, Feb 17, 2023, 8:36 AM Hugh Glaser <hugh@glasers.org> wrote:

> I disagree, David.
> The Spam-fighting arms race is an example of huge success on the part of
> the defenders.
> I see an irony that you sent this message reliably to everyone, using
> email.
> You must remember when people were saying that email would soon be
> unusable because of spam.
> The attitude to LLMs spoofing as human and destroying our socio-technical
> fabric sounds very similar.
>
> I have at least 5 main email addresses I regularly use, and make no
> attempt whatsoever to hide any from the world - I completely rely on
> different spam filters on different providers to make it usable.
> IIRC even years ago, over 99% of the incoming email to our university
> server was spam, and yet I would rarely see one, and still don’t from there.
> Yes, a few good fakes get through - but those few make us forget the
> overall success of the spam-fighting enterprise.
> Astonishing success, I would say.
>
> And as for lying, Thomas, why do you think I would have a problem with
> something I am paying for deliberately lying (if I understand what you mean
> by “lying”)?
> I mean yes, hallucination and other stuff are a problem, but that is
> exactly the interesting stuff to investigate.
>
> Let’s be Pollyanna rather than Cassandra :-)
>
> Cheers
>
> > On 17 Feb 2023, at 15:55, David Booth <david@dbooth.org> wrote:
> >
> > On 2/17/23 08:54, Thomas Passin wrote:
> >> On 2/17/2023 8:36 AM, Hugh Glaser wrote:
> >>> Has anyone tried using LLMs such as GPT-3 to find out if text is
> human- or machine-generated?
> >>> Can’t you just give it the text and ask it?
> >> Except that they may lie or "hallucinate".
> >
> > And I think our experience with the spam-fighting arms race has already
> answered that question in general: we can detect the crudest fakes, but
> better fakes will always get through.
> >
> > Fake generators have an inherent advantage, because fakes can be
> generated by the millions so cheaply, and the generators can be programmed
> to randomly try different techniques and learn which techniques get past
> the detectors.
> >
> > David Booth
> >
>

Received on Friday, 17 February 2023 17:03:58 UTC