Re: Chartering work has started for a Linked Data Signature Working Group @W3C

> On 26. May 2021, at 02:19, Harry Halpin <hhalpin@ibiblio.org> wrote:
> 
> Hi, I'd like to point out that while there are no peer-reviewed publications on the (lack of) security of the so-called Verifiable Credential architecture and the problems with this kind of standardization of things like "Linked Data Proofs", I have in fact published a peer-reviewed article explaining in detail how utterly broken the entire DID and Verified Credential architecture is, and how this is damaging the credibility of the W3C, and it was presented at Mozilla. It's online here:
> https://arxiv.org/abs/2012.00136

Harry Halpin's paper is difficult to read, as it is so full of misrepresentations and exaggerations that it seems to show more anger than careful thought. Here are a few extracts, with comments, to make the point.

> Currently all proposed immunity credential schemes rely on an obscure standard, the W3C Verified Credential Data Model 1.0 standard

Why the need for “obscure”?

> the claims are not a simple list of attribute value pairs or even arithmetic circuits that could be verified in zero-knowledge, but instead a graph build from the nearly forgotten W3C Semantic Web standards

“The nearly forgotten W3C Semantic Web standards”?

There is a whole passage that tries to build guilt by association by tying the
Semantic Web to the Department of Homeland Security. That type of reasoning,
I guess, works well in some circles.

Another passage tries to cast the IETF and the W3C as good vs evil, as Harry
Halpin's post to this mailing list also did. [1]

> Linking of user data seems at odds with user privacy, as privacy is typically defined as unlinkability

Of course, if one starts from such an absolute notion of unlinkability, then the
whole Web, and indeed all human interaction, is a mistake from the start. The claim
of unlinkability has to be put into context. “Unlinkability of what, for whom?”
would be a better question to start from.

> Although the idea of a registry with at least one unique identifier for every possible object that may exist could be considered itself a suspect concept for a standard,

The Semantic Web allows URIs to refer to objects, but does not require every
object to have a URI. Obviously! What else would all those blank node discussions
we have been having been about? :-) And even if we take all the blank nodes and
URIs of all the documents published on the Semantic Web, nobody has claimed that
every object has to be referred to directly.
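
To make that concrete, here is a minimal sketch in Scala using Apache Jena (the
toolkit and all the names are my choice for illustration only): a blank node lets
one state facts about an object without ever minting a global identifier for it.

    import org.apache.jena.rdf.model.ModelFactory
    import org.apache.jena.vocabulary.VCARD

    object BlankNodeSketch extends App {
      val model = ModelFactory.createDefaultModel()
      val knows = model.createProperty("http://xmlns.com/foaf/0.1/", "knows")

      // A resource with a global identifier:
      val tim = model.createResource("https://www.w3.org/People/Berners-Lee/card#i")

      // A resource with no URI at all: createResource() without an
      // argument yields a blank node.
      val someone = model.createResource()
      someone.addProperty(VCARD.FN, "an unnamed acquaintance")

      // We can still state facts involving the unnamed object:
      tim.addProperty(knows, someone)

      // The blank node is serialised as [ ... ] or _:b0, with no global name:
      model.write(System.out, "TURTLE")
    }

The Turtle output shows the acquaintance only as an anonymous node, which is
exactly the point: RDF names what it needs to, and no more.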

Talking of Solid, we have:
> Currently backed by a startup called Inrupt, the platform is build in Javascript.

I am building a Solid server in Scala, and there are many other
implementations, including ones in Java and Go, ...
There is clearly a lot of work still to be done there, but why the sudden
oversimplification here?

> The fact that specifications like W3C Verifiable Credentials even became standards is problematic without security and privacy review by experts,

There are quite a few experts in the area of credentials in the group. For example,
Prof. David Chadwick, who is one of the authors, has been writing on the subject
for 30 years or more.
https://www.kent.ac.uk/computing/people/3071/chadwick-david

Harry Halpin's whole criticism of immunity passports presents the work of
the VC group as naive, exaggerated, and nearly irresponsible.

As an antidote, I would urge people to watch the presentation by Christopher Allen
to the California Assembly, where he is careful to point out that the work being
done is at an initial, exploratory stage.

https://lists.w3.org/Archives/Public/public-credentials/2020Jun/0050.html

Note that Christopher Allen is an author of RFC 2246, “The TLS Protocol Version 1.0”
https://datatracker.ietf.org/doc/html/rfc2246
which shows that the divide between the W3C and the IETF is not as deep as
Harry's writings make it out to be.

Henry Story


[1] There is indeed, I think, a duality between the work of the IETF and that of
the W3C: where the IETF works mostly on protocols, modeled mathematically as
coalgebras, the W3C has done most of its work on data formats (HTML, XML, RDF, …),
which are algebraic structures. But these dualities from category theory are not
an opposition of good vs evil: the one implies the other in a very well defined
mathematical sense. You may want to check out the mathematical/philosophical
thesis by the very impressive Japanese scholar Yoshihiro Maruyama,
“Meaning and duality: from categorical logic to quantum physics”
https://ora.ox.ac.uk/objects/uuid:440a291d-7533-493d-b5aa-f6db30ca03cf
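
For the curious, the duality is easy to state concretely. Here is a minimal Scala
sketch (entirely my own illustration, not taken from the thesis; all the names are
made up): an F-algebra F[A] => A consumes one layer of structure, as a parser of a
data format does, while an F-coalgebra A => F[A] produces one, as a protocol state
machine does. The arrows are simply reversed.

    object AlgebraCoalgebra extends App {
      type Algebra[F[_], A]   = F[A] => A   // fold: consume one layer
      type Coalgebra[F[_], A] = A => F[A]   // unfold: produce one layer

      // One layer of a "message stream": a payload plus a rest, or the end.
      sealed trait MsgF[+A]
      case class Chunk[A](payload: String, rest: A) extends MsgF[A]
      case object Done extends MsgF[Nothing]

      // Data-format view (W3C side): fold a document down to a value,
      // here the total length of the payloads.
      val lengthAlg: Algebra[MsgF, Int] = {
        case Chunk(p, rest) => p.length + rest
        case Done           => 0
      }

      // Protocol view (IETF side): from a state, produce the next step
      // of the exchange: emit a chunk, or stop.
      val countdown: Coalgebra[MsgF, Int] =
        n => if (n <= 0) Done else Chunk(s"msg$n", n - 1)

      // Unfold with the coalgebra, then fold with the algebra:
      def unfold(n: Int): List[String] = countdown(n) match {
        case Chunk(p, rest) => p :: unfold(rest)
        case Done           => Nil
      }
      val msgs  = unfold(3)   // List(msg3, msg2, msg1)
      val total = msgs.foldRight(0)((p, acc) => lengthAlg(Chunk(p, acc)))
      println(s"$msgs, total payload length = $total")   // 12
    }

Neither direction is privileged: protocols and data formats determine each other,
which is all the footnote claims.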




Henry Story

https://co-operating.systems
WhatsApp, Signal, Tel: +33 6 38 32 69 84‬
Twitter: @bblfish
