Re: AI catfishing [was Re: ChatGPT and ontologies]

On 2/17/2023 11:34 AM, Hugh Glaser wrote:
> I disagree, David.
> The Spam-fighting arms race is an example of huge success on the part of the defenders.
> I see an irony that you sent this message reliably to everyone, using email.
> You must remember when people were saying that email would soon be unusable because of spam.
> The attitude to LLMs spoofing as human and destroying our socio-technical fabric sounds very similar.
> 
> I have at least 5 main email addresses I regularly use, and make no attempt whatsoever to hide any from the world - I completely rely on different spam filters on different providers to make it usable.
> IIRC even years ago, over 99% of the incoming email to our university server was spam, and yet I would rarely see one, and still don’t from there.
> Yes, a few good fakes get through - but those few make us forget the overall success of the spam-fighting enterprise.
> Astonishing success, I would say.
> 
> And as for lying, Thomas, why do you think I would have a problem with something I am paying for deliberately lying (if I understand what you mean by “lying”)?
> I mean yes, hallucination and other stuff are a problem, but that is exactly the interesting stuff to investigate.

I suppose it depends on what you want to get out of it.  If you actually 
want to find out whether a particular document was written by ChatGPT, 
say, that's one thing.  If you want to find out - because it's fun, 
interesting, or research - what it will tell you when asked whether the 
document was written by a chatbot, that's something else.
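
For concreteness, here is a minimal sketch of the "just ask it" approach 
Hugh describes, using OpenAI's GPT-3 Completion API.  The model name, 
prompt wording, and parameters are illustrative assumptions, not a 
recommended detector:

    # Minimal sketch of "just ask the model" (illustrative only; the
    # model's answer is a generated guess, not an authorship verdict).
    import openai  # assumes the openai package (pre-1.0 API) and a valid key

    openai.api_key = "sk-..."  # placeholder; use your own key

    def ask_if_machine_generated(text: str) -> str:
        prompt = (
            "Was the following text written by a human or generated by a "
            "language model such as ChatGPT? Answer 'human' or 'machine' "
            "and briefly explain.\n\n" + text
        )
        response = openai.Completion.create(
            model="text-davinci-003",  # illustrative GPT-3 model choice
            prompt=prompt,
            max_tokens=100,
            temperature=0,
        )
        return response["choices"][0]["text"].strip()

Whatever comes back is a plausible-sounding completion - the second case 
above (what it will tell you), not the first (who actually wrote it).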

> 
> Let’s be Pollyanna rather than Cassandra :-)
> 
> Cheers
> 
>> On 17 Feb 2023, at 15:55, David Booth <david@dbooth.org> wrote:
>>
>> On 2/17/23 08:54, Thomas Passin wrote:
>>> On 2/17/2023 8:36 AM, Hugh Glaser wrote:
>>>> Has anyone tried using LLMs such as GPT-3 to find out if text is human- or machine-generated?
>>>> Can’t you just give it the text and ask it?
>>> Except that they may lie or "hallucinate".
>>
>> And I think our experience with the spam-fighting arms race has already answered that question in general: we can detect the crudest fakes, but better fakes will always get through.
>>
>> Fake generators have an inherent advantage, because fakes can be generated by the millions so cheaply, and the generators can be programmed to randomly try different techniques and learn which techniques get past the detectors.
>>
>> David Booth
>>
> 
> 

Received on Friday, 17 February 2023 17:21:30 UTC