Re: AI catfishing [was Re: ChatGPT and ontologies]

On 2/17/2023 8:36 AM, Hugh Glaser wrote:
> [Sorry, you may see this twice if the moderator forwards the previous version from the wrong email address.]
> 
> Has anyone tried using LLMs such as GPT-3 to find out if text is human- or machine-generated?
> Can’t you just give it the text and ask it?

Except that they may lie or "hallucinate".
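
To see why "just ask it" is shaky, here is roughly what the naive version
looks like. This is a minimal sketch against the openai Python package
(the 0.x API, as of early 2023); the model name and prompt wording are
illustrative assumptions, not a recommendation:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_if_machine_generated(text):
    # Ask the model directly. Note: the verdict is itself just
    # generated text, with no ground truth behind it.
    prompt = (
        "Was the following text written by a human or generated by a "
        "language model? Answer 'human' or 'machine'.\n\n" + text
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative choice only
        prompt=prompt,
        max_tokens=20,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

The call returns a fluent, confident verdict either way; nothing in it is
grounded, so a wrong answer is exactly the kind of plausible-sounding
output at issue.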

> And with spam and phishing too, of course.
> With effective detection technology, the generation becomes much less valuable.
> I assume people have tried, so I am wondering if anyone here knows the outcomes.
> 
> The only thing I have used OpenAI for is very close to this: categorising documents in archives, and doing other knowledge extraction on them, such as authors, subject topics, places and dates.
> 
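
(Aside: the categorise-and-extract use described above is easy to sketch
with the same openai 0.x package as in the sketch earlier; the prompt and
JSON keys below are my own illustrative guesses, not the actual pipeline.)

import json
import openai

def extract_metadata(document):
    # Hypothetical prompt and keys, for illustration only.
    prompt = (
        "From the document below, extract JSON with keys "
        "'authors', 'subjects', 'places', 'dates'.\n\n" + document
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0,
    )
    # The model can return malformed JSON or invented values,
    # so validate before trusting anything downstream.
    return json.loads(response["choices"][0]["text"])

(Even here the output needs checking: extraction is subject to the same
hallucination problem, just with lower stakes per field.)
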
> Waaay back in time, when comparison websites started up, I thought that maybe people would get personal assistants, built on knowledge technologies, that would actually compete with each other on how good they were.
> Of course, you would have had to pay for the PA, and the more you paid the better your PA, I guessed.
> I was wrong, I suppose because at scale, revenue from advertising will always out-buy what users are willing to pay in subscriptions.
> 
> But would people (and many agencies) be willing to pay for the services of a system that reliably told them when they were getting particular types of documents, and even what was accurate and inaccurate about them?
> 
> It could fund a really useful arms race between AI document creation and reception technology.
> 
>> On 10 Feb 2023, at 18:01, David Booth <david@dbooth.org> wrote:
>>
>> On 2/9/23 06:43, Dave Reynolds wrote:
>>> . . .
>>> https://www.epimorphics.com/writing-ontologies-with-chatgpt/
>>
>> Nice post!  I agree with the potential usefulness (and limitations) that you observe.  But I cannot shake my overriding concern that AI like ChatGPT will be abused as a spammer's or parasitic website owner's dream, to flood the web with 1000x more plausible-sounding-but-misleading-or-wrong crap than it already has, thus making it even more difficult to find the nuggets of reliable information.  AI is a major force multiplier.  As with any other force multiplier, it can be used for good or bad.
>>
>> I personally think we need legislation against AI catfishing, i.e., AI *pretending* to be human.
>>
>> - AI-generated content should be clearly labeled as such.
>>
>> - Bots should be clearly labeled as such.
>>
>> Thanks,
>> David Booth

Received on Friday, 17 February 2023 13:54:27 UTC