Re: AI catfishing [was Re: ChatGPT and ontologies]

Actually, Chaals and all,

*....   including passing AI off as human work*
I have seen humans passing themselves off as AI.

At a well-known conference in Hong Kong a few years back, some young folks
presented their incredible talking head, supposedly a robotic head capable
of intelligent conversation.
The talking head was asked questions (by its developers) and it replied
elegantly, knowledgeably, thoroughly,
like a well-trained scholar. I had the impression it could easily have been
staged (but how?).
Basically, there was a live human answering through a synthetic voice.
I hear many robotics companies are using real humans in the AI brain to get
investors to fund them.

Just to say that some regulation can help to guide good practice, but there
are more foundational issues in AI relating to human nature.




On Mon, Feb 13, 2023 at 4:49 PM Chaals Nevile <chaals@fastmail.fm> wrote:

> On Monday, 13 February 2023 07:13:57 (+01:00), Paola Di Maio wrote:
>
> Can we demand/expect that AI adheres to better standards than humans?
>
>
> Sure. And we should. What's the point of working on something that's
> *less* reliable than me as a source of information?
>
> (We insist that kids don't tell lies, even though many of us feed them
> nonsense about tooth fairies, mice that leave money, a fat man whose
> reindeer pull an impossibly-loaded sleigh through the sky at incredible
> speed, etc etc...).
>
> We can expect humans to keep cheating, including passing AI off as human
> work and therefore claiming it need not adhere to good standards of
> behaviour.
>
> But setting "moral" standards we aspire to is a good first step in them
> actually being standards that we live by, and build by.
>
> cheers
>
> --
> Chaals Nevile
> Using Fastmail - it's worth it
>

Received on Monday, 13 February 2023 09:42:57 UTC