Re: [EXT] personal AI (was: Meronymity)

Hi all,

This is a great conversation. The recent case of Air Canada using AI in
place of customer service agents came to mind and is an interesting one:
https://www.forbes.com/sites/marisagarcia/2024/02/19/what-air-canada-lost-in-remarkable-lying-ai-chatbot-case

There is a layer of accountability to take into consideration.

Regards

On Mon, Apr 29, 2024 at 1:42 PM Alan Karp <alanhkarp@gmail.com> wrote:

> Or could we treat AIs the way we treat other software intermediaries?  Not
> generic things like email or document editors, but applications designed
> for a specific purpose.  Those applications have fewer opportunities to go
> off the rails than does an AI, but it's been known to happen.
>
> For example, some tax preparation software provides a guarantee of
> accuracy.  If it makes a mistake, you don't file a claim with the software;
> the company providing it is the responsible party.  Should it be any
> different if an AI CPA makes the same mistake?
>
> --------------
> Alan Karp
>
>
> On Mon, Apr 29, 2024 at 10:03 AM Harrison <harrison@spokeo.com> wrote:
>
>> Couldn't we treat AI like an agent representing an individual or client
>> (like a real estate agent or attorney)?  If so, then I think there are a
>> lot of existing social norms regarding how we treat and interact with
>> agents.
>>
>> Thanks,
>>
>> *Harrison Tang*
>> CEO
>>  LinkedIn  <https://www.linkedin.com/company/spokeo/> •   Instagram
>> <https://www.instagram.com/spokeo/> •   Youtube <https://bit.ly/2oh8YPv>
>>
>>
>> On Mon, Apr 29, 2024 at 8:22 AM Adrian Gropper <agropper@healthurl.com>
>> wrote:
>>
>>> Two people have every right to interact without impersonation. That can
>>> be enforced through mutual trust and social norms. I think Daniel's point
>>> falls mostly in this category.
>>>
>>> The issue being raised by Golda and Drummond seems more directed to
>>> strangers where trust itself is impersonal and institutionally mediated. In
>>> those cases, I see no role for Proof of Humanity. I don't want any
>>> corporation to insist on my live attention as long as I'm accountable for
>>> the outcome. That's a violation of my right to free association and whether
>>> I delegate to my spouse or my bot is none of their concern as long as I
>>> remain legally accountable in either case. How to hold me legally
>>> accountable is a separate issue that has everything to do with biometrics.
>>>
>>> As for my conversations with human or AI delegates of the corporation,
>>> that's just a matter of branding.
>>>
>>> Adrian
>>>
>>>
>>>
>>> On Mon, Apr 29, 2024 at 10:44 AM Drummond Reed <
>>> Drummond.Reed@gendigital.com> wrote:
>>>
>>>> “I believe human beings have the right to know whether they are
>>>> interacting with other human beings directly, or merely with a piece of
>>>> technology that's doing another human's bidding and can pass the Turing
>>>> test.”
>>>>
>>>>
>>>>
>>>> Well put, Daniel. That’s the essence of what I was trying to say
>>>> earlier. I think this “right to know” becomes even more important when
>>>> humans are dealing with AI that is acting on behalf of an organization.
>>>> Firstly, because I believe that will be the most common case (we are
>>>> frequently dealing with AI customer service chatbots representing
>>>> organizations today and it drives me nuts when I can’t figure out when I’m
>>>> talking to the AI and when I’m actually dealing with a human). Secondly,
>>>> because knowing whose interest an AI represents—is it a person or an
>>>> organization?—is crucial to addressing the rest of the concerns Daniel
>>>> raises.
>>>>
>>>>
>>>>
>>>> =Drummond
>>>>
>>>>
>>>>
>>>> *From: *Daniel Hardman <daniel.hardman@gmail.com>
>>>> *Date: *Monday, April 29, 2024 at 2:21 AM
>>>> *To: *Adrian Gropper <agropper@healthurl.com>
>>>> *Cc: *Drummond Reed <Drummond.Reed@gendigital.com>, Manu Sporny <
>>>> msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <
>>>> public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
>>>> *Subject: *[EXT] personal AI (was: Meronymity)
>>>>
>>>> I feel like we are not yet pondering deeply enough how an AI alters the
>>>> social texture of an interaction. What is an AI's social and emotional
>>>> intelligence, not just its ability to get work done -- and what is the
>>>> social and emotional intelligence of us ordinary humans, vis-a-vis these
>>>> tools?
>>>>
>>>>
>>>>
>>>> Per se, an AI has no human rights and triggers no social obligations on
>>>> the part of those who interact with it. If I hang up the phone on an AI, or
>>>> never respond to their messages, I don't believe I am being rude. And an AI
>>>> has no right to privacy, no right to a fair trial, cannot be the victim of
>>>> doxxing, etc.
>>>>
>>>> However, associating an AI strongly with a human that it represents
>>>> introduces a social quandary that has never existed before, which is how to
>>>> impute rights to the AI because of its association with a human. True, the
>>>> AI has no standing in the social contract that would lead one to respond to
>>>> its messages -- but if that AI represents a real human being, it is in fact
>>>> the human being we are ignoring, not just the AI that does the human's
>>>> bidding.
>>>>
>>>>
>>>>
>>>> Is lying to an AI that does Alice's bidding ethically the same as lying
>>>> to Alice herself? Would it depend on the degree and intent of the AI's
>>>> empowerment? What if Alice terminates her relationship with the AI -- does
>>>> the grievance stay with Alice or with the AI?
>>>>
>>>> If I am a therapist who happens to have a really fabulous AI that can
>>>> conduct remote therapy sessions over chat, is it ethical for me to go on
>>>> vacation and leave my AI to counsel people about their deepest personal
>>>> sorrows and perplexities, without telling them -- even if they can't tell
>>>> the difference?
>>>>
>>>>
>>>> I believe human beings have the right to know whether they are
>>>> interacting with other human beings directly, or merely with a piece of
>>>> technology that's doing another human's bidding and can pass the Turing
>>>> test. This allows interpersonal and social judgments that are crucial to
>>>> how we get along with one another. I am excited about the good that AI can
>>>> do, and about the prospect of personal AIs, but I am categorically opposed
>>>> to hiding the difference between people and AIs. The difference is real,
>>>> and it matters profoundly.
>>>>
>>>>
>>>>
>>>> Alan said:
>>>> > Do we ask for proof of humanity of other software running on behalf
>>>> of a person?  What if a personal AI carries out its task using an
>>>> application?  Isn't the human who determines what the software, AI or
>>>> otherwise, is supposed to do the responsible party?
>>>>
>>>>
>>>>
>>>> Adrian said:
>>>> >The group could not think of a single reason to make a distinction
>>>> between me and an AI that I control as my delegate. To introduce such a
>>>> "CAPTCHA on steroids" is to limit technological enhancement to corporations
>>>> and "others". Will we treat personal technological enhancement the way we
>>>> treat doping in sports? Who would benefit from imposing such a restriction
>>>> on technological enhancement? How would we interpret the human right of
>>>> Freedom of Association and Assembly (Article 20) to exclude open source
>>>> communities creating open source personal AI that an individual can take
>>>> responsibility for? Certifying the vendor, provenance, and training data of
>>>> a personal AI seems like the last thing we would want to do. I hope what
>>>> Drummond is suggesting applies to AI that is not transparent and controlled
>>>> by an individual or a community of individuals in a transparent way. How do
>>>> we see a world where two kinds of AI, personal and "certified" interact?
>>>>
>>>>
>>>>
>>>> Drummond said:
>>>> > Manu has a good point. I have no problem interacting with an AI bot
>>>> as long as I can be sure it’s an AI bot—and ideally if I can check its
>>>> vendor, provenance, trained data sets, etc.
>>>>
>>>> Manu said:
>>>> > Another interesting aspect here is that "the bots" are, probably
>>>> within the next decade, going to legitimately exceed the level of
>>>> expertise of 99.9% of the population on most subjects that could be
>>>> discussed in an online forum. I, for one, welcome our new robot troll
>>>> overlords. :P
>>>>
>>>

Received on Monday, 29 April 2024 19:22:28 UTC