Re: [EXT] personal AI (was: Meronymity)

I just wanted to comment that this has turned into one of the most beautiful and inspiring threads on a decentralized identity list that I’ve seen in a long time. Once our focus turns to human relationships and what really matters to establish confidence, integrity, intimacy, and trust…

…it feels like our shared North Star starts shining much more brightly for all of us.

From: Golda Velez <gvelez17@gmail.com>
Date: Wednesday, May 1, 2024 at 12:39 PM
To: Daniel Hardman <daniel.hardman@gmail.com>
Cc: Joe Andrieu <joe@legreq.com>, Drummond Reed <Drummond.Reed@gendigital.com>, Harrison <harrison@spokeo.com>, Adrian Gropper <agropper@healthurl.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <public-credentials@w3.org>
Subject: Re: [EXT] personal AI (was: Meronymity)
100% what Daniel just said.  Not everything is transactional, and even if it were, long-term relationships with accountable entities are a hard requirement for managing risk.  All that 'color' we feel in relationships represents something important and mathematical, even if it's not captured yet.  Pretending it doesn't exist in our implementations will lead us the wrong way.  Our goal should be to enable all the things humans want to do that may require identity in the broad sense; even if all we can do right now is a small subset of those, we shouldn't scope the rest out of digital identity if we are going to be interacting with each other digitally - we want to enable all the things.  I believe Bell Labs thought the phone would only be used for business...

I didn't have time to write this out as well, but just a vote for Daniel's thread being extremely relevant.

On Wed, May 1, 2024 at 4:14 AM Daniel Hardman <daniel.hardman@gmail.com> wrote:
I think what you quoted, Joe, was from Drummond, not me. :-)
Thank you for analyzing the proxy risk, described in Dave's paper, as one that applies to all digitally intermediated environments. I agree with that broad framing.

With respect to accountability, I think we are talking past each other.

If we conceive the goal of identity tech narrowly, as a mechanism for transactional accountability -- ESPECIALLY when the accountability is imagined to flow largely one-way, from individuals to institutions -- then the distinction between a human and a tool that the human uses is not particularly important. We've always used tools (pen and ink, phones, signet rings and wax, chops, ambassadors), and it's always been obvious that the user, not the tool, was the locus of accountability. (Accountability for orgs tends to flow through laws and to be weak and tardy, more's the pity.) AI as we know it today adds some complications, but I don't think it alters the fundamental calculus about humans and tools and legal postures.
But transactional accountability wasn't the focus of my comments. Rather, I was interested in the effect that AIs have on a different possible framing of identity tech, which is as a relationship tool. On the accountability level, the answer to my question about a therapist going on vacation and leaving an AI to interact with patients is obvious. Of course we would say the therapist is responsible for whatever the AI does in her or his absence. What's more interesting to me is to wonder how such behavior colors relationships. Part of relationship dynamics is empathy, which requires us to imagine how our actions affect another person. Striving for and achieving accurate answers to the "how it affects them" question is an ethical imperative, and is foundational to healthy relationships; its utter absence is the defining characteristic of sociopaths. So: if I am "relating" to another person, but that person is (without my knowledge) mediating the entire interaction through an AI that filters and transforms and edits what I say, am I really "relating"? Or has the relationship lost something important?
Tools always mediate to some degree; their mere use doesn't invalidate a relationship. But when the degree of mediation gets too high, and the asymmetry in understanding of that mediation is out of whack, we can have problems. Carrying on a long-distance romance using WhatsApp and Zoom is probably real relationship building, though it's hard. Having an AI that filters all the angry mail for a member of parliament, producing simple tallies of positive and negative sentiment and sending out form letter responses is probably not real relationship building, and it would be unethical to try to convince someone it was.

You could say that the something that gets lost in my extreme examples is "trust". But I think that's confusing cause and effect. The downstream consequence of the therapist getting an AI to pinch hit for them would probably be lost trust, followed by whatever consequences can be imposed or ensue naturally as a result. However, I think the *cause* of the lost trust would be something like an objectification of a person and a reduction of the relationship to a transaction. And the effect flows from that cause because we humans believe we have a right to relate to one another as humans, affecting one another to a greater or lesser degree by our words and actions. I also think that this harm predates any observable downstream effect, and that it matters deeply, whether or not anybody ever finds out. It is the right not to be exposed to this harm, which has everything to do with being human and little to do with the legal system, that I was focused on.


On Wed, May 1, 2024 at 2:46 AM Joe Andrieu <joe@legreq.com> wrote:
daniel.hardman@gmail.com wrote:
So, the two tests for which I would want proof during an interaction:

1. Am I dealing at this particular moment in time with a human or an AI agent?
2. In either case, whose interests does that human or AI agent represent?

I think this can be reduced to a single question:
3. Who is legally liable for the actions of the other party in the interaction?

If you understand the liability, you'll understand their interests. And if you know who is liable, do you really care if it's an AI or a live human?

If I actually know it is a "live" human, the liable party would be that human (as a starting point: they may be able to shift liability to their corporation or public role).

If it's a synthetic entity, then the liable party would be whoever holds the legal liability for its actions. Right now, most AI efforts fail to acknowledge this necessary liability. SOMEONE, a legal entity, *will* be deemed responsible by the courts. It might be the operator, treated like a drunk driver who causes an accident. Or it might be the manufacturer, treated like Ford in the Pinto case, whose exploding fuel tanks were deemed Ford's problem. Right now, that's all up to case law.

Note: there is no way to answer #1 in any digitally intermediated environment if the user is complicit. See Dave Longley's paper: https://ieeexplore.ieee.org/document/9031545. You can always proxy the digital challenges to the complicit party in real time.

Without a common physical space within which your own sensors can observe the alleged human, you can't be certain the respondent isn't just proxying to the "real party".

What you can do, however, is "require" the other party to cryptographically sign an attestation of their legal accountability. It is reasonable to accept that both individuals and organizations can maintain keys and use those keys to secure interactions taken on their behalf. IMO, this is our best option moving forward.
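To make that concrete, here is a minimal sketch (in Python, assuming Ed25519 keys via the `cryptography` package) of what a signed accountability attestation and its verification might look like. The payload fields and identifiers are illustrative assumptions, not any existing standard or credential format:

```python
# Minimal sketch: one party signs an attestation naming the legal entity
# accountable for its actions; the other party verifies the signature.
# Payload fields and identifiers below are illustrative only.
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The accountable party (human or org) holds a signing key.
signing_key = Ed25519PrivateKey.generate()
verifying_key = signing_key.public_key()

# Hypothetical attestation payload: "this agent acts on behalf of Example Corp."
attestation = {
    "accountable_party": "did:example:example-corp",  # illustrative identifier
    "agent": "did:example:support-bot-42",
    "asserted_at": datetime.now(timezone.utc).isoformat(),
}
message = json.dumps(attestation, sort_keys=True).encode()
signature = signing_key.sign(message)

# The relying party verifies the signature against a key it already
# associates with the named legal entity (key discovery is out of scope here).
try:
    verifying_key.verify(signature, message)
    print("Attestation verified:", attestation["accountable_party"])
except InvalidSignature:
    print("Attestation rejected")
```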

The hard part is figuring out if it REALLY is a "live" person on the other side of a digital interaction. If what you want is to avoid the voice mail automation and deal with a real person... we don't really have a technical way to prevent that. We are at the point where customer service *will* be delivered by AI and eventually it will be indistinguishable from entry-level customer support agents. All we can do is provide evidence that the party on the other side has a known point of legal recourse.

If you do... I think the best you can get is either
(1) The other party could satisfy liveness at an in-person proof site and use a credential generated within a limited time window, which would allow their user-agents to present the party as "live". Then, at least you know that the other party has proven liveness recently (could be minutes, but realistically 'days' is a more likely granularity), and it is that recency that gives you confidence it is still true. And you still have the complicit conspirator problem. (Both options are sketched in code after point (2) below.)

(2) The party on the other side of the interaction can satisfy a cryptographic challenge demonstrating their authority to act on behalf of a specific legal entity, whether human or not.
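As a rough illustration of both options, here is a sketch under the same assumptions as the earlier snippet (Python, Ed25519 via the `cryptography` package): a nonce-based challenge for option (2) and a simple recency check for option (1). The recency policy and the source of the liveness timestamp are assumptions, not a worked-out protocol:

```python
# Sketch of options (1) and (2): the verifier issues a fresh nonce, the
# responder signs it (proving control of a key assumed to be bound to a
# legal entity), and the verifier also checks how recently an in-person
# liveness proof was issued. Key distribution and the proof-site credential
# format are out of scope and assumed.
import os
from datetime import datetime, timedelta, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

MAX_LIVENESS_AGE = timedelta(days=3)  # illustrative recency policy

responder_key = Ed25519PrivateKey.generate()
responder_pub = responder_key.public_key()  # assumed already bound to a legal entity

# (2) Challenge-response: a fresh nonce prevents replay of old signatures.
nonce = os.urandom(32)
response = responder_key.sign(nonce)
responder_pub.verify(response, nonce)  # raises cryptography.exceptions.InvalidSignature on failure

# (1) Recency check on a liveness credential's issuance time (timestamp
# assumed to come from a credential issued at an in-person proof site).
liveness_issued_at = datetime.now(timezone.utc) - timedelta(days=1)
is_recent = datetime.now(timezone.utc) - liveness_issued_at <= MAX_LIVENESS_AGE
print("liveness proof recent enough:", is_recent)
```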

What you can't really do anymore is trust the non-cryptographic evidence: the video, the photo, the voice check. All of these sensor-based "liveness" checks are merely the front line pawns in an escalating arms race between AI deep fakes and AI deep fake detectors.

Unfortunately, AI liveness spoofing has already outstripped the detection capabilities of modern systems. Positive attestations signed by trustable parties are likely the only way through the purely digital use case.

IMO, the good news is that for most of the use cases where you might think "liveness" helps, especially wrt AI, the question is usually not whether a party in an interaction is human, but rather on whose behalf that entity is acting.

In particular, we don't concern ourselves with the separation between our browser and us when it comes to legal liability on the web. What matters isn't whether an action by my user-agent might be perceived as "me" (it is). When I take actions through my user-agent (the browser), everyone understands that I'm liable for the actions of the browser (unless I can prove some other actor misled me about functionality so badly that it constitutes a hack on my machine, such as an XSS attack or a bad-actor extension).

So, I'd caution about "liveness" and "humanness" checks. They are often the wrong framing for the social quandary. Instead, I'd recommend finding the right balance of accountability and liability for actions taken by our digital agents. Cryptographic identifiers provide a path forward for that.

-j


On Tue, Apr 30, 2024 at 12:33 PM Daniel Hardman <daniel.hardman@gmail.com> wrote:
Sure, I'm happy to receive github issues. But if people just want to copy the general idea and go run with it in different contexts, that is also fine with me. What I care about most is figuring out how to socialize the idea of face-to-face interactions between ordinary humans being an untapped but valuable source of trust.

On Tue, Apr 30, 2024 at 8:31 PM Golda Velez <gvelez17@gmail.com> wrote:
Thanks for pushing this forward in a structured and meaningful way, Daniel - how do you want feedback, as GitHub issues?  I will share this in my small circles.

On Mon, Apr 29, 2024 at 11:38 PM Daniel Hardman <daniel.hardman@gmail.com> wrote:
I want to acknowledge Adrian's concern. AI is yet another way that power imbalances between individuals and institutions could be entrenched, and we cannot allow institutions to impose "no AI" requirements on ordinary individuals in unfair ways. I editorialized a while ago about big desks and little people; I think we share the same concern.
Having said that, I think it is crucial that we in the identity community set a standard for clarity in our thinking about the relationship between the identity of a human and the identity of a proxy for a human. Precision will matter. Oskar's excellent point is an example.
I created a schema for what I call "face-to-face" credentials, and I invite everyone in the community to implement support for these or for something like them. My writeup about the details is here: https://github.com/provenant-dev/public-schema/blob/main/face-to-face/index.md.


The schema itself is published in JSON Schema format and could be implemented by any credential technology. You will notice in 2 or 3 places an assumption that ACDCs are in use, but that is only because of the way I was trying to facilitate graduated disclosure and chaining, and is a bit beside the point.
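For anyone who hasn't worked with JSON Schema before, here is a hypothetical, much-reduced sketch of how a credential payload can be validated against a schema in Python with the `jsonschema` package. The fields below are invented for the example and are NOT the actual face-to-face schema linked above:

```python
# Hypothetical illustration only: validating a credential-like payload against
# a JSON Schema using the `jsonschema` package. The fields below are invented
# for this sketch and are NOT the published face-to-face schema.
from jsonschema import validate, ValidationError

toy_schema = {
    "type": "object",
    "properties": {
        "attester": {"type": "string"},
        "subject": {"type": "string"},
        "met_in_person_at": {"type": "string", "format": "date-time"},
    },
    "required": ["attester", "subject", "met_in_person_at"],
}

claim = {
    "attester": "did:example:alice",
    "subject": "did:example:bob",
    "met_in_person_at": "2024-04-28T17:30:00Z",
}

try:
    validate(instance=claim, schema=toy_schema)
    print("claim matches the toy schema")
except ValidationError as err:
    print("claim rejected:", err.message)
```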

On Tue, Apr 30, 2024 at 2:15 AM Drummond Reed <Drummond.Reed@gendigital.com> wrote:
Harrison, I like your characterization of a human being able to treat an AI agent much like a real estate agent or an attorney, because it points out how important it is that you, as the person interacting with the agent, know unambiguously whose interests the AI agent is representing.

The key difference (as has already been pointed out in this thread) is that interacting with an AI agent may have completely different dynamics than interacting with a human agent precisely because it is not a human. So, the two tests for which I would want proof during an interaction:

1. Am I dealing at this particular moment in time with a human or an AI agent?
2. In either case, whose interests does that human or AI agent represent?

=Drummond

From: Harrison <harrison@spokeo.com>
Date: Monday, April 29, 2024 at 10:01 AM
To: Adrian Gropper <agropper@healthurl.com>
Cc: Drummond Reed <Drummond.Reed@gendigital.com>, Daniel Hardman <daniel.hardman@gmail.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
Subject: Re: [EXT] personal AI (was: Meronymity)
Couldn't we treat AI like an agent representing an individual or client (like a real estate agent or attorney)?  If so, then I think there are a lot of existing social norms regarding how we treat and interact with agents.

Thanks,

Harrison Tang
CEO
LinkedIn <https://www.linkedin.com/company/spokeo/>  •  Instagram <https://www.instagram.com/spokeo/>  •  Youtube <https://bit.ly/2oh8YPv>


On Mon, Apr 29, 2024 at 8:22 AM Adrian Gropper <agropper@healthurl.com> wrote:
Two people have every right to interact without impersonation. That can be enforced through mutual trust and social norms. I think Daniel's point falls mostly in this category.

The issue being raised by Golda and Drummond seems more directed at strangers, where trust itself is impersonal and institutionally mediated. In those cases, I see no role for Proof of Humanity. I don't want any corporation to insist on my live attention as long as I'm accountable for the outcome. That's a violation of my right to free association, and whether I delegate to my spouse or my bot is none of their concern as long as I remain legally accountable in either case. How to hold me legally accountable is a separate issue that has everything to do with biometrics.

As for my conversations with human or AI delegates of the corporation, that's just a matter of branding.

Adrian



On Mon, Apr 29, 2024 at 10:44 AM Drummond Reed <Drummond.Reed@gendigital.com> wrote:
“I believe human beings have the right to know whether they are interacting with other human beings directly, or merely with a piece of technology that's doing another human's bidding and can pass the Turing test.”

Well put, Daniel. That’s the essence of what I was trying to say earlier. I think this “right to know” becomes even more important when humans are dealing with AI that is acting on behalf of an organization. Firstly, because I believe that will be the most common case (we are frequently dealing with AI customer service chatbots representing organizations today and it drives me nuts when I can’t figure out when I’m talking to the AI and when I’m actually dealing with a human). Secondly, because knowing whose interest an AI represents—is it a person or an organization?—is crucial to addressing the rest of the concerns Daniel raises.

=Drummond

From: Daniel Hardman <daniel.hardman@gmail.com>
Date: Monday, April 29, 2024 at 2:21 AM
To: Adrian Gropper <agropper@healthurl.com>
Cc: Drummond Reed <Drummond.Reed@gendigital.com>, Manu Sporny <msporny@digitalbazaar.com>, W3C Credentials CG (Public List) <public-credentials@w3.org>, Golda Velez <gvelez17@gmail.com>
Subject: [EXT] personal AI (was: Meronymity)
I feel like we are not yet pondering deeply enough how an AI alters the social texture of an interaction. What is an AI's social and emotional intelligence, not just its ability to get work done -- and what is the social and emotional intelligence of us ordinary humans, vis-a-vis these tools?

Per se, an AI has no human rights and triggers no social obligations on the part of those who interact with it. If I hang up the phone on an AI, or never respond to their messages, I don't believe I am being rude. And an AI has no right to privacy, no right to a fair trial, cannot be the victim of doxxing, etc.
However, associating an AI strongly with a human that it represents introduces a social quandary that has never existed before, which is how to impute rights to the AI because of its association with a human. True, the AI has no standing in the social contract that would lead one to respond to its messages -- but if that AI represents a real human being, it is in fact the human being we are ignoring, not just the AI that does the human's bidding.

Is lying to an AI that does Alice's bidding ethically the same as lying to Alice herself? Would it depend on the degree and intent of the AI's empowerment? What if Alice terminates her relationship with the AI -- does the grievance stay with Alice or with the AI?
If I am a therapist who happens to have a really fabulous AI that can conduct remote therapy sessions over chat, is it ethical for me to go on vacation and leave my AI to counsel people about their deepest personal sorrows and perplexities, without telling them -- even if they can't tell the difference?

I believe human beings have the right to know whether they are interacting with other human beings directly, or merely with a piece of technology that's doing another human's bidding and can pass the Turing test. This allows interpersonal and social judgments that are crucial to how we get along with one another. I am excited about the good that AI can do, and about the prospect of personal AIs, but I am categorically opposed to hiding the difference between people and AIs. The difference is real, and it matters profoundly.

Alan said:
> Do we ask for proof of humanity of other software running on behalf of a person?  What if a personal AI carries out its task using an application?  Isn't the human who determines what the software, AI or otherwise, is supposed to do the responsible party?

Adrian said:
>The group could not think of a single reason to make a distinction between me and an AI that I control as my delegate. To introduce such a "CAPTCHA on steroids" is to limit technological enhancement to corporations and "others". Will we treat personal technological enhancement the way we treat doping in sports? Who would benefit from imposing such a restriction on technological enhancement? How would we interpret the human right of Freedom of Association and Assembly (Article 20) to exclude open source communities creating open source personal AI that an individual can take responsibility for? Certifying the vendor, provenance, and training data of a personal AI seems like the last thing we would want to do. I hope what Drummond is suggesting applies to AI that is not transparent and controlled by an individual or a community of individuals in a transparent way. How do we see a world where two kinds of AI, personal and "certified" interact?

Drummond said:
> Manu has a good point. I have no problem interacting with an AI bot as long as I can be sure it’s an AI bot—and ideally if I can check its vendor, provenance, trained data sets, etc.
Manu said:
> Another interesting aspect here is that "the bots" are, probably
within the next decade, going to legitimately exceed the level of
expertise of 99.9% of the population on most subjects that could be
discussed in an online forum. I, for one, welcome our new robot troll
overlords. :P

Received on Thursday, 2 May 2024 00:04:47 UTC