Re: [EXT] personal AI (was: Meronymity)

A real-estate agent or lawyer can be sued (cf. current affairs in the USA). An AI cannot. Some of you may recall the Party-Actor model by my TNO colleague Rieks Joosten: the former two can be a Party; the latter is an Actor at best.
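
For anyone who hasn't seen that model, here is a compressed and entirely unofficial rendering of the distinction in TypeScript; the type and field names are my own, not Rieks' terminology:

  // Unofficial sketch of the Party-Actor distinction; names invented for illustration.
  interface Party {            // bears legal responsibility: can contract, can be sued
    legalIdentifier: string;   // e.g. a civil registry or chamber-of-commerce number
  }
  interface Actor {            // executes actions, always on behalf of some Party
    actsOnBehalfOf: Party;     // the Party that stays accountable for the Actor's acts
  }
  // A lawyer or real-estate agent is both: a Party (personally liable) and an
  // Actor (acting for a client). An AI can only ever be an Actor, so liability
  // stays with the Party that deploys it.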

Oskar

________________________________
From: Drummond Reed <Drummond.Reed@gendigital.com>
Sent: Tuesday, April 30, 2024 3:21:28 am
To: Harrison <harrison@spokeo.com>; Adrian Gropper <agropper@healthurl.com>
Cc: Daniel Hardman <daniel.hardman@gmail.com>; Manu Sporny <msporny@digitalbazaar.com>; W3C Credentials CG (Public List) <public-credentials@w3.org>; Golda Velez <gvelez17@gmail.com>
Subject: Re: [EXT] personal AI (was: Meronymity)

Harrison, I like your characterization of a human being able to treat an AI agent like a real estate agent or an attorney, because it points out how important it is that you, as the person interacting with the agent, know unambiguously whose interests the AI agent is representing.

The key difference (as has already been pointed out in this thread) is that interacting with an AI agent may have completely different dynamics from interacting with a human agent, precisely because it is not a human. So here are the two tests for which I would want proof during an interaction (a rough sketch of how such proofs might be expressed follows the list):

  1.  Am I dealing at this particular moment in time with a human or an AI agent?
  2.  In either case, whose interests does that human or AI agent represent?
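
As a purely illustrative sketch (not a proposal), both tests could be answered by claims in a single W3C Verifiable Credential. The "@context" URL below is the real VC 2.0 context; everything else, from the credential type to the property names and the DIDs, is invented for this example (TypeScript):

  // Hypothetical "agent disclosure" claim set; property names are invented.
  interface AgentDisclosure {
    subjectKind: "human" | "ai-agent"; // test 1: what am I dealing with right now?
    representsParty: string;           // test 2: DID of the person or organization
                                       //         whose interests the agent serves
  }

  // A minimal VC-shaped envelope carrying that disclosure.
  const disclosure = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    type: ["VerifiableCredential", "AgentDisclosureCredential"], // hypothetical type
    issuer: "did:example:accreditor",
    credentialSubject: {
      id: "did:example:agent-123",
      subjectKind: "ai-agent",
      representsParty: "did:example:acme-corp",
    } satisfies AgentDisclosure & { id: string },
  };

Verifying both claims at the start of an interaction would answer both questions in one presentation.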

=Drummond

From: Harrison <harrison@spokeo.com>
Date: Monday, April 29, 2024 at 10:01 AM
To: Adrian Gropper <agropper@healthurl.com>
Cc: Drummond Reed <Drummond.Reed@gendigital.com>, Daniel Hardman <daniel.hardman@gmail.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
Subject: Re: [EXT] personal AI (was: Meronymity)
Couldn't we treat AI like an agent representing an individual or client (like a real estate agent or attorney)?  If so, then I think there are many existing social norms regarding how we treat and interact with such agents.

Thanks,

Harrison Tang
CEO
LinkedIn <https://www.linkedin.com/company/spokeo/>  •  Instagram <https://www.instagram.com/spokeo/>  •  Youtube <https://bit.ly/2oh8YPv>


On Mon, Apr 29, 2024 at 8:22 AM Adrian Gropper <agropper@healthurl.com> wrote:
Two people have every right to interact without impersonation. That can be enforced through mutual trust and social norms. I think Daniel's point falls mostly in this category.

The issue being raised by Golda and Drummond seems more directed at strangers, where trust itself is impersonal and institutionally mediated. In those cases, I see no role for Proof of Humanity. I don't want any corporation to insist on my live attention as long as I'm accountable for the outcome. That would violate my right to free association; whether I delegate to my spouse or to my bot is none of their concern, as long as I remain legally accountable in either case. How to hold me legally accountable is a separate issue, and one that has everything to do with biometrics.
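
One way to picture that delegation-with-retained-accountability, purely as an illustration (all field names and DIDs below are invented placeholders, TypeScript again):

  // The principal stays legally accountable regardless of which kind of
  // delegate (spouse or bot) actually performs the task.
  interface Delegation {
    principal: string;                    // DID of the accountable human
    delegate: string;                     // DID of whoever, or whatever, acts
    scope: string[];                      // actions the delegate may perform
    accountabilityRestsWith: "principal"; // liability never transfers to the delegate
  }

  const grant: Delegation = {
    principal: "did:example:adrian",
    delegate: "did:example:adrians-bot",
    scope: ["schedule-appointment", "answer-intake-questions"],
    accountabilityRestsWith: "principal",
  };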

As for my conversations with human or AI delegates of the corporation, that's just a matter of branding.

Adrian



On Mon, Apr 29, 2024 at 10:44 AM Drummond Reed <Drummond.Reed@gendigital.com> wrote:
“I believe human beings have the right to know whether they are interacting with other human beings directly, or merely with a piece of technology that's doing another human's bidding and can pass the Turing test.”

Well put, Daniel. That’s the essence of what I was trying to say earlier. I think this “right to know” becomes even more important when humans are dealing with AI that is acting on behalf of an organization. Firstly, because I believe that will be the most common case (we are frequently dealing with AI customer service chatbots representing organizations today, and it drives me nuts when I can’t figure out when I’m talking to the AI and when I’m actually dealing with a human). Secondly, because knowing whose interests an AI represents—is it a person or an organization?—is crucial to addressing the rest of the concerns Daniel raises.

=Drummond

From: Daniel Hardman <daniel.hardman@gmail.com>
Date: Monday, April 29, 2024 at 2:21 AM
To: Adrian Gropper <agropper@healthurl.com>
Cc: Drummond Reed <Drummond.Reed@gendigital.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
Subject: [EXT] personal AI (was: Meronymity)
I feel like we are not yet pondering deeply enough how an AI alters the social texture of an interaction. What is an AI's social and emotional intelligence, not just its ability to get work done -- and what is the social and emotional intelligence of us ordinary humans, vis-a-vis these tools?

Per se, an AI has no human rights and triggers no social obligations on the part of those who interact with it. If I hang up the phone on an AI, or never respond to their messages, I don't believe I am being rude. And an AI has no right to privacy, no right to a fair trial, cannot be the victim of doxxing, etc.
However, associating an AI strongly with a human that it represents introduces a social quandary that has never existed before: how to impute rights to the AI because of its association with a human. True, the AI has no standing in the social contract that would lead one to respond to its messages -- but if that AI represents a real human being, it is in fact the human being we are ignoring, not just the AI that does the human's bidding.

Is lying to an AI that does Alice's bidding ethically the same as lying to Alice herself? Would it depend on the degree and intent of the AI's empowerment? What if Alice terminates her relationship with the AI -- does the grievance stay with Alice or with the AI?
If I am a therapist who happens to have a really fabulous AI that can conduct remote therapy sessions over chat, is it ethical for me to go on vacation and leave my AI to counsel people about their deepest personal sorrows and perplexities, without telling them -- even if they can't tell the difference?

I believe human beings have the right to know whether they are interacting with other human beings directly, or merely with a piece of technology that's doing another human's bidding and can pass the Turing test. This allows interpersonal and social judgments that are crucial to how we get along with one another. I am excited about the good that AI can do, and about the prospect of personal AIs, but I am categorically opposed to hiding the difference between people and AIs. The difference is real, and it matters profoundly.

Alan said:
> Do we ask for proof of humanity of other software running on behalf of a person?  What if a personal AI carries out its task using an application?  Isn't the human who determines what the software, AI or otherwise, is supposed to do the responsible party?

Adrian said:
> The group could not think of a single reason to make a distinction between me and an AI that I control as my delegate. To introduce such a "CAPTCHA on steroids" is to limit technological enhancement to corporations and "others". Will we treat personal technological enhancement the way we treat doping in sports? Who would benefit from imposing such a restriction on technological enhancement? How would we interpret the human right of Freedom of Association and Assembly (Article 20) to exclude open source communities creating open source personal AI that an individual can take responsibility for? Certifying the vendor, provenance, and training data of a personal AI seems like the last thing we would want to do. I hope what Drummond is suggesting applies to AI that is not transparent and controlled by an individual or a community of individuals in a transparent way. How do we see a world where two kinds of AI, personal and "certified", interact?

Drummond said:
> Manu has a good point. I have no problem interacting with an AI bot as long as I can be sure it’s an AI bot—and ideally if I can check its vendor, provenance, training data sets, etc.
Manu said:
> Another interesting aspect here is that "the bots" are, probably
> within the next decade, going to legitimately exceed the level of
> expertise of 99.9% of the population on most subjects that could be
> discussed in an online forum. I, for one, welcome our new robot troll
> overlords. :P


Received on Tuesday, 30 April 2024 06:37:41 UTC