Re: Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online

> On 19 Aug 2024, at 16:02, Deventer, M.O. (Oskar) van <oskar.vandeventer@tno.nl> wrote:
> 
> Interesting discussion!
> Any “personhood credential” (PHC) requires a form of authentication, correct? That is, at least from the person towards their (local) application that presents the PHC, correct?
> An attack vector is that an AI maliciously presents a PHC. One implementation of this attack vector is a “PHC farm”, in which colluding humans support their malicious AI overlord by providing continuous authentication for PHC presentation. It is a modernized form of the “CAPTCHA farm”, now that CAPTCHAs themselves no longer need humans to solve them.


Great point. I have been thinking about a similar threat for the past several weeks.

Let’s break this down into steps:

* Issuance: One crucial step is issuing the credential to a human, which requires a specific assurance level. Let’s consider the highest one (e.g., the person goes to a police station in person to be verified, a process significantly different from a non-state organization attesting that the person being issued the credential is the right one).

* Presentation: The other crucial step is when a (correctly issued) credential is presented. There are two cases:
 - the credential is installed on a device in a farm;
 - the credential is installed on a device controlled by an agent (call it malware, spyware, a testing tool, etc.; this is the case I was thinking of), though “agent” is the more general term and also covers the farm case :)

Considerations:
* Issuance: Digital credentials can help solve the issuance phase, but only if proper governance and human trust are addressed. For example, I have read about implementations that require going to a police station, which is "uncomfortable but right" if we consider how physical credentials are issued.
* Presentation: The second point falls under the usual threats and, as you said, anti-automation problems at different layers (see the sketch below).
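
To make the anti-automation point concrete, here is a minimal sketch of one presentation-layer defense, in the spirit of your rate-limit question (the limits, the names, and the idea of keying on a per-service pseudonym are my assumptions, not a specification):

    import time
    from collections import defaultdict, deque

    # Illustrative limits only (cf. "two per hour and ten per day").
    LIMITS = [(3600, 2), (86400, 10)]  # (window in seconds, max presentations)

    _history = defaultdict(deque)  # pseudonym -> timestamps of accepted presentations

    def accept_presentation(pseudonym, now=None):
        """Accept a PHC presentation only if the per-pseudonym rate limits hold."""
        now = time.time() if now is None else now
        seen = _history[pseudonym]
        # Forget timestamps older than the largest window to bound memory.
        horizon = max(window for window, _ in LIMITS)
        while seen and now - seen[0] >= horizon:
            seen.popleft()
        for window, limit in LIMITS:
            if sum(1 for t in seen if now - t < window) >= limit:
                return False  # over the limit for this window
        seen.append(now)
        return True

    assert accept_presentation("p1", now=0)
    assert accept_presentation("p1", now=10)
    assert not accept_presentation("p1", now=20)  # third within the hour is refused

Note that this only works if presentations at a given verifier share a stable pseudonym, and a farm holding many distinct credentials sidesteps any per-credential limit, which is exactly why the issuance phase matters so much.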


An example from the courtroom: in a forensic analysis, proving that certain PCs/credentials were used for the suspicious activity is one step; proving that a specific person was using those PCs/credentials at that time is another, and that second step is a non-trivial logical leap.

I am happy to discuss this and find a common solution, as long as we are all clear about which specific problems we are solving and which are out of scope.

@Deventer: if you like, feel free to add a note to the Threat Model:

https://github.com/w3c-cg/threat-modeling/blob/main/models/decentralized-identities.md

Simone


>  How can we defend against this attack vector, where humans collude with AI?
>     • Should we limit PHC presentation to e.g. two per hour and ten per day? How can this be enforced technically? How effectively would this reduce the attacks?
>     • How can a PHC-presentation application assure its integrity about not colluding with a malicious AI?
>     • Are there other ways to make this attack vector difficult or expensive?
>  Oskar
>   From: Daniel Hardman <daniel.hardman@gmail.com>
> Sent: Friday, 16 August 2024 16:30
> To: Manu Sporny <msporny@digitalbazaar.com>
> Cc: W3C Credentials CG <public-credentials@w3.org>
> Subject: Re: Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online
>  It's clear that we need credentials somewhat like the ones proposed in this paper. It's a promising direction. Thank you for raising the issue, Manu and colleagues.
> 
> However, I believe this paper invites two thinking errors that need to be corrected before we can get an optimal outcome.
>  1. It conflates attesting or proving personhood with attesting or proving at least a minimal form of identity.
> 2. It implies that the attesters of personhood should be institutions.
> Regarding #1, the paper proposes as a first fundamental requirement: "The issuer of a PHC gives at most one credential to an eligible person." This is immediately followed by a second requirement: unlinkable pseudonymity. Unlinkable pseudonymity means that a given verifier can prove the same party is present in more than one context, without knowing anything else about them. This is a GREAT characteristic (one I've long advocated). However, the "service-specific pseudonym" mentioned by the paper is an identifier that provides equality of reference. This is identity. It's just not an identity that can be correlated across verifiers or to arbitrary, externally known characteristics.
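>
> To make "equality of reference" concrete, here is a minimal sketch of a service-specific pseudonym (my illustration, not the paper's construction; a real PHC design would use blind signatures or zero-knowledge proofs rather than a bare keyed hash):
>
>     import hashlib
>     import hmac
>
>     def service_pseudonym(holder_secret: bytes, verifier_id: str) -> str:
>         # Same holder + same verifier -> same pseudonym: equality of reference.
>         # Different verifiers -> unlinkable values, because HMAC behaves as a
>         # pseudorandom function to anyone who lacks the holder's secret.
>         return hmac.new(holder_secret, verifier_id.encode(), hashlib.sha256).hexdigest()
>
>     secret = b"holder long-term secret"
>     assert service_pseudonym(secret, "forum.example") == service_pseudonym(secret, "forum.example")
>     assert service_pseudonym(secret, "forum.example") != service_pseudonym(secret, "shop.example")
>
> The point of the sketch is that such a pseudonym is an identifier, and therefore identity, even though it cannot be correlated across verifiers.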
> I get the desire to combat fraud, and I further get that there is massive fraud related to legal identity. However, we should never conflate personhood and identity -- ESPECIALLY legal identity. Imagine a life that dramatizes these differences: a transgender person is adopted as a child, gets married, suffers a terrible accident that disfigures their body, gets bionic limbs and an artificial heart, gets divorced, crosses a national border without papers, changes their name, sees their new country dissolve and reform through a civil war, and eventually develops Alzheimer's. They may experience many transformations of their identity (legal and otherwise) in their lifetime. Identity is about sameness (equality of reference), and such a life is filled with transitions where sameness could be interrupted. In mathematics, we say "x = x" is an "identity", and "x != y" is not. However, this narrative is steady-state with respect to personhood, because personhood is about membership in the category or set of human beings: "x ∈ {humans}". An unidentifiable body in a mass grave is still a member of the set of human beings. A trafficked infant that is thought to be dead by its parents and police, and that is treated inhumanely by captors, is still a person. A criminal who has fake passports from 100 different countries is still a person, even if they're committing fraud. Whether a person can convince you to make judgements about how their personhood relates to one or more legal identities is an important question -- but it is NOT a personhood question. Legal identities derive from personhood, NOT the other way around.
>  You might say, "Ah, but we're talking about legal persons", to which I would reply, the Whanganui River in New Zealand is not the kind of person that this paper proposes to prove, even if it has legal personhood status.
>  A fundamental test of whether this distinction matters is to ask what privileges the credential is supposed to support. In my opinion, a personhood credential ought to support the right to be treated like a human being, NOT the right to vote exactly once in an election, the right to do business, etc. You have human rights by virtue of your membership in the set of human beings, and on no other basis. An unnamed and unidentified AI that pretends to be human, eliciting human treatment by other humans, is committing personhood fraud, and should be stopped by a personhood credential. An AI that assumes the identity of Alice, eliciting treatment appropriate to Alice (even pseudonymous and unlinkable) but not to Bob (even pseudonymous and unlinkable), is committing identity fraud, and should be stopped by an identity credential. These are different problems.
>  If we really want to build the concept that this paper advocates, I suggest that a better name might be "human identity credential", correctly conveying its deep reliance on identity assumptions. There's nothing wrong with such credentials; in fact, they could be awesome. However, we must not confuse or misname their meaning.
>  Regarding my point #2, the paper contemplates credentials rooted in social graphs or biometrics. That's promising. However, it strongly implies that whatever the root, the issuer role should/will/must be played by institutions. The key graphic shows an issuer as a big building. The paper says, "There are many effective ways to design a PHC system, and various organizations—governmental or otherwise—can serve as issuers. In one possible implementation, states could offer a PHC to any holder of their state’s tax identification number." That's intensely ironic, IMO. Do we really want faceless organizations -- governmental or otherwise -- to be the arbiters of personhood after a process known as "enrollment"? This seems like a recipe for an Orwellian version of the big desks, little people problem that I've previously written about. And it gets more likely, and worse, if we also frame the only or at least primary consumers of these credentials as "service providers" on the internet: "When a holder uses their personhood credential, they prove to the service provider" ... "PHCs let a user interact with services" ... "Prevent coordinated attacks of bots circumventing platforms’ rules" ... "services that adopt PHCs" ... "there are many ways a site could incorporate PHCs".
>  An alternative would be to define personhood credentials in terms of pure personhood, and to carefully exclude any notion of identity from them. Then issue them, one human being to another, any time we interact with another human being in real life. Instead of limiting how many personhood credentials a person has, we proliferate them on autopilot. Did you board a plane and sit next to someone, or take a drink from the flight attendant? Your phone gives their phone a credential about your face-to-face interaction. Did you ride on the subway with 100 strangers? Same thing. Did you attend IIW together? Same thing. You don't have to know and care about these people's identities, but you are convinced of their personhood. And with some humans, you issue personhood credentials with a stronger basis: I worked with this person for 4 years. I play in the orchestra with her. He taught my scuba class. This is a kind of personhood trust that already exists as an emergent benefit of human society. It is free and ubiquitous and egalitarian and highly decentralized. I tried to describe one possible implementation of such credentials here. That impl may not be interesting to many here, since it's rooted in KERI, but the general principles are portable to any cred technology.
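>
> As a minimal sketch of what one of these promiscuous face-to-face credentials might look like (every field name here is invented, the signing uses the third-party Python "cryptography" package, and a real system would use a standard VC envelope and a hardware-backed key):
>
>     import json
>     import time
>     from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
>
>     def issue_f2f_credential(issuer_key, subject_pubkey_hex, context):
>         claim = {
>             "type": "FaceToFacePersonhood",   # hypothetical credential type
>             "subject": subject_pubkey_hex,    # the other person's public key
>             "context": context,               # e.g. "shared a subway car"
>             "issued_at": int(time.time()),
>         }
>         payload = json.dumps(claim, sort_keys=True).encode()
>         return {"claim": claim, "signature": issuer_key.sign(payload).hex()}
>
>     # Each real-life encounter mints one such credential, so a person
>     # accumulates thousands of independent, low-stakes attestations.
>     my_key = Ed25519PrivateKey.generate()
>     cred = issue_f2f_credential(my_key, "a1b2c3d4", "sat together on a flight")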
> I see two basic problems with what I've just proposed. The first is that in all the building of VC technology that we've done, we've paid almost no attention to the need for ordinary people to be issuers (or verifiers). We continue to build systems that satisfy the trust needs of institutions, via institutions. Big desks, little people. The second is that the trust you could achieve under a face-to-face issuance model is fuzzy, not crisp. If I don't know the supposed "person" who claims to have had a face-to-face interaction with the remote party now offering me their face-to-face cred as proof of personhood, why should I assume the veracity of their assertion?
>  I think this is why we keep assuming that organizations are the issuers: in order to make trust decisions crisper, we have to organize human behavior. We think that means... organizations. Organizations are the only ones organized enough to pay for this, or to lay down and stick to requirements, or to even decide they want this, right? Organizations are the only mechanism that lets us put reputation at risk in a crisp way, right? 
>  I think it might be possible to coalesce human behavior with only a tenuous dependency on the construct we call an "organization". Suppose we had the ubiquitous face-to-face credentials I posited. Suppose we also had tech that could answer this question: "Do I have a reason to trust the issuers of any of the 10,000 personhood credentials this remote entity is willing to present?" And suppose that if the answer was "no", the tech could reach out to our network and evaluate degrees of separation (we wouldn't need 6 if we issued these as promiscuously as we share COVID). If we add to this a calculation of gravitas according to how many "yes" answers we can get, and another calculation of quality according to the nature of the face-to-face interaction people are claiming, and another factoring in of collateral upvotes/downvotes to catch liars, I think we'd be getting pretty close to what we want.
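>
> A hedged sketch of that evaluation (all names, weights, and the scoring rule are invented; the point is only that "gravitas" and "quality" are computable):
>
>     # Weight by how strong the claimed face-to-face interaction is (assumed scale).
>     QUALITY = {
>         "worked_together_years": 1.0,
>         "attended_event_together": 0.5,
>         "sat_nearby_in_transit": 0.2,
>     }
>
>     def personhood_score(offered_creds, trusted_issuers):
>         score = 0.0
>         for cred in offered_creds:
>             if cred["issuer"] in trusted_issuers:        # a "yes" answer adds gravitas
>                 score += QUALITY.get(cred["kind"], 0.1)  # weighted by claimed quality
>         return score
>
>     creds = [
>         {"issuer": "alice", "kind": "worked_together_years"},
>         {"issuer": "mallory", "kind": "sat_nearby_in_transit"},
>     ]
>     print(personhood_score(creds, trusted_issuers={"alice", "bob"}))  # 1.0
>
> When the score is zero, the degrees-of-separation search and the upvote/downvote factoring would kick in as further inputs.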
> All of which still sounds fuzzy. We so, so want to just say, "I want the government to tell me that X is a person," because it makes our verification decisions so easy. I get it. So here's the compromise position: a government designs and publishes some rules (a governance framework), trains its employees, and has its employees issue proof of personhood. But unlike other government creds, the issuers of personhood creds are persons, NOT the government as an org. The person issuers are credentialed by the government on questions for which the government is actually authoritative -- completion of a training class, taking an oath to a constitution, access to certain information and technology, maybe... Now in my imaginary future, our first question, "Do I have a reason to trust the issuers of any of the 10,000 personhood credentials..." gets analyzed differently. All the issuers are still people, but some of those people are known to have public reputations with accountability, as evidenced by their government credentials. This accomplishes almost exactly the same thing as what the paper posits, but it keeps institutions out of the personhood business. They don't belong there.
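>
> Continuing the scoring sketch above, the compromise drops in as one more source of trust: an issuer I don't know personally still counts if they hold a government-issued issuer credential (the set and the discount factor are invented):
>
>     GOV_CREDENTIALED = {"carol"}  # holders of a government "trained issuer" credential
>
>     def personhood_score_v2(offered_creds, trusted_issuers, gov_credentialed=GOV_CREDENTIALED):
>         score = 0.0
>         for cred in offered_creds:
>             issuer = cred["issuer"]
>             if issuer in trusted_issuers:
>                 score += QUALITY.get(cred["kind"], 0.1)
>             elif issuer in gov_credentialed:
>                 # Unknown to me personally, but publicly accountable;
>                 # count it at a discount.
>                 score += 0.8 * QUALITY.get(cred["kind"], 0.1)
>         return score
>
> The issuers remain persons; the government only vouches for their training and accountability.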
>  My two cents.
>  On Fri, Aug 16, 2024 at 3:26 AM Manu Sporny <msporny@digitalbazaar.com> wrote:
> Hey CCG'ers,
> 
> I'm thrilled to announce a new research paper that's been in the
> making for many months now about Personhood Credentials (PHCs),
> artificial intelligence, and the value of privacy-preserving solutions
> to online disinformation. A quick excerpt from the executive summary
> of the paper:
> 
> Malicious actors have long used misleading identities to deceive
> others online. They carry out fraud, cyberattacks, and disinformation
> campaigns from multiple online aliases, email addresses, and phone
> numbers. Historically, such deception has sometimes seemed an
> unfortunate but necessary cost of preserving the Internet’s
> commitments to privacy and unrestricted access. But highly capable AI
> systems may change the landscape: There is a substantial risk that,
> without further mitigations, deceptive AI-powered activity could
> overwhelm the Internet. To uphold user privacy while protecting
> against AI-powered deception, new countermeasures are needed.
> 
> A few of us from this community (KimHD, WayneC, WendyS, HeatherF) have
> been working with researchers from OpenAI, Harvard, MIT, Oxford,
> Microsoft, OpenMined, Berkman Klein, and 20+ other organizations
> involved in frontier Artificial Intelligence to determine how we (the
> digital credentials community) might address some of the more
> concerning aspects of how AI systems will interact with the Web and
> the Internet, but in a way that will continue to protect individual
> privacy and civil liberties that remain at the foundation of the Web
> we want.
> 
> A huge shout out to Steven Adler, Zoë Hitzig, and Shrey Jain who led
> this work and put together an amazing group of people to work with --
> it was a pleasure and honor to work with them as they did the
> lion's share of the cat herding and drafting, re-drafting, and
> re-re-re-re-drafting of the paper. It's rare to be a part of such a
> high energy and velocity collaboration, so thanks to each of them for
> making this happen!
> 
> For those of you who are on social media, Steven has done a great
> visual summary of the paper here:
> 
> https://x.com/sjgadler/status/1824245211322568903
> 
> The paper itself is really well written and reasoned. If you don't
> have a ton of time, you can come away with a good idea of what the
> paper is about by just reading the 3 page Executive Summary:
> 
> https://arxiv.org/pdf/2408.07892
> 
> The TL;DR is: This community is well positioned to do something about
> online deception, defense against AI amplification attacks, and proof
> of personhood credentials. So the question is -- should we? What could
> be the benefits to society? What are the dangers to privacy and civil
> liberties? As always, interested in your thoughts... :)
> 
> -- manu
> 
> -- 
> Manu Sporny - https://www.linkedin.com/in/manusporny/
> Founder/CEO - Digital Bazaar, Inc.
> https://www.digitalbazaar.com/

Received on Monday, 19 August 2024 14:52:34 UTC