Re: new ACLU Legislative Guidance

On Fri, Oct 11, 2024 at 10:17 PM Adrian Gropper <agropper@healthurl.com> wrote:
> Manu’s review is a good start. But the admittedly good intentions of our community must not pave the road to digital hell.

Oh, yes, clearly. I don't think any of us get out of bed in the
morning so sure of ourselves that we blindly proceed without careful
deliberation on the technical, legal, ethical, moral, and political
choices that are being made as this technology goes into production.

I also hope that no one in this community is under the impression that
any of us have this all figured out. We don't... but when stuff like
this ACLU report comes out, we talk about it and debate it openly,
which is not happening for many of the alternative technologies you
mentioned.

That we are able to have public discussions, and have been having
these discussions for over a decade in this community, and have been
acting on the outcomes of those discussions in ways that result in
global standards that attempt to address the concerns of the ACLU,
EFF, and EPIC (many of them valid) is one of the more important aspects of
this community. This is what I was getting at wrt. building these
technologies in the open, in full transparency, for all aspects of the
design, incubation, standardization, and deployment process.

> In my opinion, our community has left the hard work of protecting human rights to politicians and lawyers at almost every fork in our journey.

You paint with too broad a brush. I know many people in this
community who hold protecting human rights as a necessary duty of
care -- you're not the only one, Adrian. :)

In fact, of those who have been with the community the longest, and
have built and deployed systems into production... of those
individuals, I know of no one who is indifferent to, or blatantly
disregards, human rights. On the contrary, I can't think of a single
person who wouldn't be deeply disturbed and saddened if the
technology they are building were used to violate human rights.

That doesn't mean it can't happen. I know many of us are regularly
concerned about the "unknown unknowns", the unintended side effects of
the technologies we are building. There's only so much a tool can do,
and at some point, the law needs to step in and take over. We don't
make laws at W3C, we standardize technologies, but that doesn't mean
those technologies are not guided by principles and ethics.

Some further thoughts on your points below...

> - we went ahead without participation by EFF, ACLU, and EPIC

Definitely not true. I have personally reached out to each of those
organizations, and others, and requested that they engage and
participate in the standards-setting process, and have done so for
years. I know others in this community who have done the same, and
they have engaged, and continue to engage (per the article that Kaliya
linked to that kicked off this whole discussion). Perhaps not as much
as we'd like, and perhaps not in the way that you'd prefer, but it's
not true that we are proceeding without participation.

> - we combined non-human use cases like supply chain management with human ones

Verifiable Credentials are a generalized technology that enables an
entity to say anything about anything. There is no differentiation or
"combining" of use cases there, and I have no idea how we would
enforce such a separation even if we thought it was a good idea.

That said, the requirements for non-human use cases are different from
those involving humans, and in those cases, many of us building these
standards and solutions are keenly aware of that difference and the
human rights implications.

I don't really understand what you're getting at here.

> - we completely ignored the role of biometrics

Did we? How? I don't know if you're arguing for more biometrics, less
biometrics, or no biometrics. What is "the role" and what were you
hoping would happen?

> - we relied too much on chain-of-custody models that promote coercive practices

Can you provide an example of a "chain of custody model" that the
community is promoting?

> - we ignored the importance of reputation and other Sybil-resistance issues in practical applications

My recollection is that we've spent considerable time talking about
reputation and Sybil-resistance in this community, and that is largely
what drove some of the privacy-preserving solutions that have been
deployed to date. What else were you hoping would happen that isn't in
process or might not happen?

> - we ignored the fundamental need for delegation in human affairs

While I agree that we're not there yet and need to focus more on this
in the coming years... that we "ignored" it seems a bit much. What
would an ideal solution look like to you?

> - we were very sure of ourselves even as ISO, Clear, and id.me gained scale

"Sure of ourselves", in what way? In that we have failed to stop the
worst parts of ISO mDL, Clear, and id.me from being pushed into
production at large scale? We all know that the best solution
sometimes doesn't win in the short term, and sometimes even fails to
win in the long term. That doesn't mean we should stop trying to make
the world a better place by putting well thought out, viable
alternatives into the market.

The notion that Verifiable Credentials would be one of the global
standards used by some of the largest nation states on the planet was
unthinkable when we started this bottom-up movement over a decade ago;
that the technologies we have created here are peers to initiatives
put forward by far more monied and/or powerful interests continues to
fill us with awe, even though that was always the plan.

I don't know about others, but I'm certainly not so sure that the most
ideal, secure, privacy-respecting, and human rights-respecting
technologies will win in the end. We've certainly got some real
stinkers in the market now that are in use. I hope the best
technologies win in the end, but that's the struggle many of us have
signed up for knowing full well that none of this is guaranteed, a
straight path, or an easy win.

There are monied interests that benefit from the status quo or are
pushing towards what we believe to be dystopian outcomes. I know we
won't get to a better future if we don't keep going. We have to keep
fighting for what we believe is the best thing for global society...
and that is inclusive of technology, ethics, model legislation, and to
your point, human rights.

You might be mistaking boundless determination for being too sure of
ourselves. The former can coexist with doubt and dread; the latter
cannot. I'd say most of those contributing in this community are
simultaneously filled with doubt and dread while being boundlessly
determined. I can understand how that might come across as misplaced
confidence to some.

> I appreciate Manu’s academic review but I see little indication that our community is heading to a healthy outcome.

Then what needs to change, Adrian? Can you define what you mean by a
healthy outcome? This is an open forum; those who debate, design,
incubate, build, and deploy in this community have moved the needle in
positive ways over the years. What concrete, additional set of actions
do you think we should be taking?

-- manu

-- 
Manu Sporny - https://www.linkedin.com/in/manusporny/
Founder/CEO - Digital Bazaar, Inc.
https://www.digitalbazaar.com/

Received on Saturday, 12 October 2024 18:22:54 UTC