Re: [EXT] personal AI (was: Meronymity)

Sure, I'm happy to receive GitHub issues. But if people just want to copy
the general idea and go run with it in different contexts, that is also
fine with me. What I care about most is figuring out how to socialize the
idea that face-to-face interactions between ordinary humans are an untapped
but valuable source of trust.

On Tue, Apr 30, 2024 at 8:31 PM Golda Velez <gvelez17@gmail.com> wrote:

> thanks for pushing this forward in a structured and meaningful way, Daniel
> - how do you want feedback, as GitHub issues? I will share this in my
> small circles
>
> On Mon, Apr 29, 2024 at 11:38 PM Daniel Hardman <daniel.hardman@gmail.com>
> wrote:
>
>> I want to acknowledge Adrian's concern. AI is yet another way that power
>> imbalances between individuals and institutions could be entrenched, and we
>> cannot allow institutions to impose "no AI" requirements on ordinary
>> individuals in unfair ways. I editorialized a while ago about big desks and
>> little people; I think we share the same concern.
>>
>> Having said that, I think it is crucial that we in the identity community
>> set a standard for clarity in our thinking about the relationship between
>> the identity of a human and the identity of a proxy for a human. Precision
>> will matter. Oskar's excellent point is an example.
>>
>> I created a schema for what I call "face-to-face" credentials, and I
>> invite everyone in the community to implement support for these or for
>> something like them. My writeup about the details is here:
>> https://github.com/provenant-dev/public-schema/blob/main/face-to-face/index.md
>>
>> The schema itself is published in JSON Schema format and could be
>> implemented by any credential technology. You will notice in two or three
>> places an assumption that ACDCs are in use, but that is only because of
>> the way I was trying to facilitate graduated disclosure and chaining; it
>> is a bit beside the point.
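>>
>> To give a rough feel for the shape before you read the full writeup,
>> here is a minimal sketch in JSON Schema. The property names below are
>> illustrative only; the authoritative definitions are in the linked
>> document:
>>
>> {
>>   "$schema": "https://json-schema.org/draft/2020-12/schema",
>>   "title": "Face-to-face credential (illustrative sketch)",
>>   "type": "object",
>>   "properties": {
>>     "metInPerson": {
>>       "type": "boolean",
>>       "description": "Issuer attests to having met the subject in person"
>>     },
>>     "meetingDate": { "type": "string", "format": "date" },
>>     "meetingPlace": { "type": "string" },
>>     "subjectName": { "type": "string" }
>>   },
>>   "required": ["metInPerson", "meetingDate"]
>> }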
>>
>> On Tue, Apr 30, 2024 at 2:15 AM Drummond Reed <
>> Drummond.Reed@gendigital.com> wrote:
>>
>>> Harrison, I like your characterization of a human being able to treat an
>>> AI agent much like a real estate agent or an attorney, because it points
>>> out how important it is that you, as the person interacting with the
>>> agent, know unambiguously whose interests the AI agent is representing.
>>>
>>>
>>>
>>> The key difference (as has already been pointed out in this thread) is
>>> that interacting with an AI agent may have completely different dynamics
>>> than interacting with a human agent, precisely because it is not a human.
>>> So here are the two tests for which I would want proof during an
>>> interaction (sketched in data form below):
>>>
>>>
>>>
>>>    1. Am I dealing at this particular moment in time with a human or an
>>>    AI agent?
>>>    2. In *either* case, whose interests does that human or AI agent
>>>    represent?
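>>>
>>> In data terms, one could imagine the agent presenting something like the
>>> following at the start of an interaction (the field names are purely
>>> hypothetical, just to make the two tests concrete): "agentKind" answers
>>> the first test, and "actsOnBehalfOf" answers the second.
>>>
>>> {
>>>   "disclosureType": "AgentDisclosure",
>>>   "agentKind": "ai",
>>>   "actsOnBehalfOf": {
>>>     "partyType": "organization",
>>>     "partyName": "Example Corp"
>>>   }
>>> }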
>>>
>>>
>>>
>>> =Drummond
>>>
>>>
>>>
>>> *From: *Harrison <harrison@spokeo.com>
>>> *Date: *Monday, April 29, 2024 at 10:01 AM
>>> *To: *Adrian Gropper <agropper@healthurl.com>
>>> *Cc: *Drummond Reed <Drummond.Reed@gendigital.com>, Daniel Hardman <
>>> daniel.hardman@gmail.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C
>>> Credentials CG (Public List) <public-credentials@w3.org>, Golda Velez <
>>> gvelez17@gmail.com>
>>> *Subject: *Re: [EXT] personal AI (was: Meronymity)
>>>
>>> Couldn't we treat AI like an agent representing an individual or client
>>> (like a real estate agent or attorney)?  If so, then I think there are a
>>> lot of existing social norms regarding how we treat and interact with
>>> agents.
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> *Harrison Tang*
>>> CEO
>>>
>>> LinkedIn <https://www.linkedin.com/company/spokeo/> • Instagram
>>> <https://www.instagram.com/spokeo/> • Youtube <https://bit.ly/2oh8YPv>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Apr 29, 2024 at 8:22 AM Adrian Gropper <agropper@healthurl.com>
>>> wrote:
>>>
>>> Two people have every right to interact without impersonation. That can
>>> be enforced through mutual trust and social norms. I think Daniel's point
>>> falls mostly in this category.
>>>
>>>
>>>
>>> The issue being raised by Golda and Drummond seems more directed at
>>> strangers, where trust itself is impersonal and institutionally mediated. In
>>> those cases, I see no role for Proof of Humanity. I don't want any
>>> corporation to insist on my live attention as long as I'm accountable for
>>> the outcome. That's a violation of my right to free association, and whether
>>> I delegate to my spouse or to my bot is none of their concern as long as I
>>> remain legally accountable in either case. How to hold me legally
>>> accountable is a separate issue that has everything to do with biometrics.
>>>
>>>
>>>
>>> As for my conversations with human or AI delegates of the corporation,
>>> that's just a matter of branding.
>>>
>>>
>>>
>>> Adrian
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Apr 29, 2024 at 10:44 AM Drummond Reed <
>>> Drummond.Reed@gendigital.com> wrote:
>>>
>>> “I believe human beings have the right to know whether they are
>>> interacting with other human beings directly, or merely with a piece of
>>> technology that's doing another human's bidding and can pass the Turing
>>> test.”
>>>
>>>
>>>
>>> Well put, Daniel. That’s the essence of what I was trying to say
>>> earlier. I think this “right to know” becomes even more important when
>>> humans are dealing with AI that is acting on behalf of an organization.
>>> Firstly, because I believe that will be the most common case (we are
>>> frequently dealing with AI customer service chatbots representing
>>> organizations today and it drives me nuts when I can’t figure out when I’m
>>> talking to the AI and when I’m actually dealing with a human). Secondly,
>>> because knowing whose interest an AI represents—is it a person or an
>>> organization?—is crucial to addressing the rest of the concerns Daniel
>>> raises.
>>>
>>>
>>>
>>> =Drummond
>>>
>>>
>>>
>>> *From: *Daniel Hardman <daniel.hardman@gmail.com>
>>> *Date: *Monday, April 29, 2024 at 2:21 AM
>>> *To: *Adrian Gropper <agropper@healthurl.com>
>>> *Cc: *Drummond Reed <Drummond.Reed@gendigital.com>, Manu Sporny <
>>> msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <
>>> public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
>>> *Subject: *[EXT] personal AI (was: Meronymity)
>>>
>>> I feel like we are not yet pondering deeply enough how an AI alters the
>>> social texture of an interaction. What is an AI's social and emotional
>>> intelligence, not just its ability to get work done -- and what is the
>>> social and emotional intelligence of us ordinary humans, vis-à-vis these
>>> tools?
>>>
>>>
>>>
>>> Per se, an AI has no human rights and triggers no social obligations on
>>> the part of those who interact with it. If I hang up the phone on an AI, or
>>> never respond to its messages, I don't believe I am being rude. And an AI
>>> has no right to privacy, no right to a fair trial, cannot be the victim of
>>> doxxing, etc.
>>>
>>> However, associating an AI strongly with a human that it represents
>>> introduces a social quandary that has never existed before, which is how to
>>> impute rights to the AI because of its association with a human. True, the
>>> AI has no standing in the social contract that would lead one to respond to
>>> its messages -- but if that AI represents a real human being, it is in fact
>>> the human being we are ignoring, not just the AI that does the human's
>>> bidding.
>>>
>>>
>>>
>>> Is lying to an AI that does Alice's bidding ethically the same as lying
>>> to Alice herself? Would it depend on the degree and intent of the AI's
>>> empowerment? What if Alice terminates her relationship with the AI -- does
>>> the grievance stay with Alice or with the AI?
>>>
>>> If I am a therapist who happens to have a really fabulous AI that can
>>> conduct remote therapy sessions over chat, is it ethical for me to go on
>>> vacation and leave my AI to counsel people about their deepest personal
>>> sorrows and perplexities, without telling them -- even if they can't tell
>>> the difference?
>>>
>>>
>>> I believe human beings have the right to know whether they are
>>> interacting with other human beings directly, or merely with a piece of
>>> technology that's doing another human's bidding and can pass the Turing
>>> test. This allows interpersonal and social judgments that are crucial to
>>> how we get along with one another. I am excited about the good that AI can
>>> do, and about the prospect of personal AIs, but I am categorically opposed
>>> to hiding the difference between people and AIs. The difference is real,
>>> and it matters profoundly.
>>>
>>>
>>>
>>> Alan said:
>>> > Do we ask for proof of humanity of other software running on behalf of
>>> a person?  What if a personal AI carries out its task using an
>>> application?  Isn't the human who determines what the software, AI or
>>> otherwise, is supposed to do the responsible party?
>>>
>>>
>>>
>>> Adrian said:
>>> >The group could not think of a single reason to make a distinction
>>> between me and an AI that I control as my delegate. To introduce such a
>>> "CAPTCHA on steroids" is to limit technological enhancement to corporations
>>> and "others". Will we treat personal technological enhancement the way we
>>> treat doping in sports? Who would benefit from imposing such a restriction
>>> on technological enhancement? How would we interpret the human right of
>>> Freedom of Association and Assembly (Article 20) to exclude open source
>>> communities creating open source personal AI that an individual can take
>>> responsibility for? Certifying the vendor, provenance, and training data of
>>> a personal AI seems like the last thing we would want to do. I hope what
>>> Drummond is suggesting applies to AI that is not transparent and controlled
>>> by an individual or a community of individuals in a transparent way. How do
>>> we see a world where two kinds of AI, personal and "certified", interact?
>>>
>>>
>>>
>>> Drummond said:
>>> > Manu has a good point. I have no problem interacting with an AI bot
>>> as long as I can be sure it’s an AI bot—and ideally if I can check its
>>> vendor, provenance, trained data sets, etc.
>>>
>>> Manu said:
>>> > Another interesting aspect here is that "the bots" are, probably
>>> within the next decade, going to legitimately exceed the level of
>>> expertise of 99.9% of the population on most subjects that could be
>>> discussed in an online forum. I, for one, welcome our new robot troll
>>> overlords. :P
>>>
>>>

Received on Tuesday, 30 April 2024 19:32:30 UTC