personal AI (was: Meronymity)

I feel we are not yet pondering deeply enough how an AI alters the
social texture of an interaction. What is an AI's social and emotional
intelligence, not just its ability to get work done -- and what is the
social and emotional intelligence of us ordinary humans, vis-à-vis these
tools?

In itself, an AI has no human rights and triggers no social obligations on
the part of those who interact with it. If I hang up the phone on an AI, or
never respond to its messages, I don't believe I am being rude. And an AI
has no right to privacy, no right to a fair trial, cannot be the victim of
doxxing, etc.

However, associating an AI strongly with the human it represents
introduces a social quandary that has never existed before: whether, and
how, to impute rights to the AI because of its association with a human.
True, the AI has no standing in the social contract that would lead one to
respond to its messages -- but if that AI represents a real human being, it
is in fact the human being we are ignoring, not just the AI that does the
human's bidding.

Is lying to an AI that does Alice's bidding ethically the same as lying to
Alice herself? Would it depend on the degree and intent of the AI's
empowerment? What if Alice terminates her relationship with the AI -- does
the grievance stay with Alice or with the AI?

If I am a therapist who happens to have a really fabulous AI that can
conduct remote therapy sessions over chat, is it ethical for me to go on
vacation and leave my AI to counsel people about their deepest personal
sorrows and perplexities, without telling them -- even if they can't tell
the difference?

I believe human beings have the right to know whether they are interacting
with other human beings directly, or merely with a piece of technology
that's doing another human's bidding and can pass the Turing test. Knowing
the difference enables interpersonal and social judgments that are crucial
to how we get along with one another. I am excited about the good that AI
can do, and about the prospect of personal AIs, but I am categorically
opposed to hiding the difference between people and AIs. The difference is
real, and it matters profoundly.
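
To make that concrete: such a disclosure could be as simple as a
machine-readable attestation that an agent presents at the start of an
interaction. Below is a minimal sketch in TypeScript, assuming a
hypothetical vocabulary -- none of the property names or identifiers are
standardized anywhere; they simply illustrate the kind of information
(agent type, controlling human, vendor, provenance) one might want to
check.

    // A minimal sketch of a machine-readable "AI disclosure" that an
    // agent could present before an interaction begins. Every property
    // name and identifier here is hypothetical; no standard vocabulary
    // for this exists today.

    interface AgentDisclosure {
      agentType: "human" | "ai"; // what the counterparty actually is
      controller: string;        // identifier (e.g. a DID) of the human
                                 // on whose behalf the agent acts
      vendor?: string;           // who built the agent, if it is an AI
      provenance?: string;       // model / training-data reference
    }

    const disclosure: AgentDisclosure = {
      agentType: "ai",
      controller: "did:example:alice",
      vendor: "Example AI Vendor",
      provenance: "example-model-v1",
    };

The particular fields matter less than the principle: the human/AI
distinction is declared explicitly and can be checked, rather than left to
whether the other party can "tell the difference".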

Alan said:
> Do we ask for proof of humanity of other software running on behalf of a
> person? What if a personal AI carries out its task using an application?
> Isn't the human who determines what the software, AI or otherwise, is
> supposed to do the responsible party?

Adrian said:
> The group could not think of a single reason to make a distinction between
> me and an AI that I control as my delegate. To introduce such a "CAPTCHA on
> steroids" is to limit technological enhancement to corporations and
> "others". Will we treat personal technological enhancement the way we treat
> doping in sports? Who would benefit from imposing such a restriction on
> technological enhancement? How would we interpret the human right of
> Freedom of Association and Assembly (Article 20) to exclude open source
> communities creating open source personal AI that an individual can take
> responsibility for? Certifying the vendor, provenance, and training data of
> a personal AI seems like the last thing we would want to do. I hope what
> Drummond is suggesting applies only to AI that is not transparent and not
> controlled by an individual or a community of individuals in a transparent
> way. How do we see a world where two kinds of AI, personal and "certified",
> interact?

Drummond said:
> Manu has a good point. I have no problem interacting with an AI bot as
> long as I can be sure it's an AI bot -- and ideally if I can check its
> vendor, provenance, training data sets, etc.

Manu said:
> Another interesting aspect here is that "the bots" are, probably
> within the next decade, going to legitimately exceed the level of
> expertise of 99.9% of the population on most subjects that could be
> discussed in an online forum. I, for one, welcome our new robot troll
> overlords. :P

Received on Monday, 29 April 2024 09:21:18 UTC