Re: the intersection between Swagger-style APIs and SSI/decentralized identity

To be clear, the proposal on the table for TxAuth isn’t to create "HTTP APIs for delegation", but rather to create delegation for HTTP APIs. So, the opposite direction. I’m also proposing that the process to do that be HTTP-based (but not an API so much as a protocol). So what does that mean for SSI-style work? A couple of things that I can think of:

First, this can offer a bridge from HTTP-based applications and APIs into non-HTTP (or not-just-HTTP) systems, by allowing an HTTP-speaking client to kick things off. From that transaction, you can gather claims or use the token in all kinds of non-HTTP ways. But like it or not, HTTP/JSON is really, really, really easy to write to, and it’s what developers today are familiar with. Yes, it has its drawbacks and limitations, many of which are listed below. But it still has value as a starting point for many, and so I think a protocol that can bridge these worlds is valuable.
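
To make that shape concrete, here is a rough sketch in Python of an HTTP-speaking client kicking off such a transaction. The endpoint and field names are invented for illustration only; they are not anything TxAuth has defined.

    import json
    import urllib.request

    # Hypothetical delegation request to an authorization server; the
    # endpoint and JSON fields below are placeholders, not a draft spec.
    body = json.dumps({
        "resources": [{"actions": ["read"],
                       "locations": ["https://api.example.com/photos"]}],
        "interact": {"redirect": True},
    }).encode()

    req = urllib.request.Request(
        "https://as.example.com/transaction",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        tx = json.load(resp)

    # Whatever comes back (a token, claims) can then be carried into
    # non-HTTP contexts; the HTTP leg just starts the conversation.
    access_token = tx.get("access_token")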

Second, I don’t expect whatever comes out of TxAuth to *stay* HTTP-only. I am proposing that we build our protocol on top of HTTP, but that protocol and its structures can, and arguably should, be ported over to other non-HTTP stacks (probably by future working groups). However, to me that doesn’t mean “don’t depend on HTTP” while building things in TxAuth. I would rather have those translations happen from a firmly rooted protocol than from a well-intentioned interlingua. Abstractions are incredibly difficult to form from a single use case, and impossible to get right from zero use cases. I have seen this attempted many times, usually with awkward results at best. SOAP comes to mind: it invented many things to allow it to run over non-HTTP stacks but ended up being used, in practice, over HTTP, leading to a lot of redundancy and conflict between the layers. That was the real value of REST-style APIs and HATEOAS as a principle.

Note that this approach is very different from what’s being described below. Both have benefits, and I’m not suggesting that this group take the same tack that I’m proposing for TxAuth, just explaining what I’m hoping to see there.

 — Justin

> On Dec 30, 2019, at 2:54 PM, Daniel Hardman <daniel.hardman@evernym.com> wrote:
> 
> Markus has recently proposed a work item for the CCG to develop Swagger-style APIs for issuers and verifiers. Justin has recently proposed a charter for a TxAuth group to start working on HTTP-based APIs to accomplish delegation. Digital Bazaar has advocated their RESTful Credential Handler API (CHAPI) in CCG and other circles as well. No doubt many others on this thread are aware of efforts to standardize APIs with similar style and similar intersection to the SSI/decentralized identity space.
> 
> I would like to raise a red flag about such efforts, and trigger a thoughtful follow-up conversation. I love Swagger-style APIs and have advocated them extensively at past junctures of my career, but I am concerned that they are exactly the wrong thing to standardize right now. I'll explain my concerns below, in red, and then offer an alternative path that has most of the same benefits, in blue.
> 
> 1. RESTful APIs are web-only.
> 
> How do two farmers with cheap Android devices in the highlands of Bolivia (or a Canadian Mountie and a speeder on a lonely highway in Yukon Territory, or friends in Frankfurt whose cell service has a brownout, or two anti-government protesters) use RESTful APIs to transact business? If the answer is, "they subscribe to a service in the cloud so they can talk device-to-device," I hope we are embarrassed. We need something that also works over Bluetooth and email and other transports where a web server isn't a component.
> 
> By standardizing a solution that doesn't think about these scenarios, we are further marginalizing them, and we are enthroning a you-must-be-connected-and-you-can-be-surveilled model that guarantees it-isn't-a-standard FUD for any other effort to fix the problem.
> 
> 2. RESTful APIs perpetuate the PKI model that we claim to be replacing with DIDs.
> 
> Servers are authenticated in these APIs with a cert. Clients are authenticated with a session that follows from an OAuth token or an API key or basic auth material. It is possible to imagine "DID Auth" being used for a client of a RESTful API, and there have been several efforts to describe and standardize such a thing. So far, none has meaningful traction, so all solutions in the SSI space that use these APIs also allow non-DID authentication for clients. But even if we solve the problem for the client side, nobody is proposing to solve the problem for the server side. Institutions in these APIs don't use DIDs for anything meaningful. Thus none of the decentralized properties of DIDs are brought to bear for the server side of the interaction, and any decentralized qualities of DIDs are relegated to minor, optional status for clients.
> 
> 3. RESTful APIs foster a power imbalance.
> 
> What if there's a standard way for institutions to be a verifier or issuer, but no way for ordinary people to be a verifier or issuer? Or a standard way for institutions to delegate, but no way for ordinary people to do it? That's effectively what APIs like the ones I mentioned above guarantee. There are multiple reasons why, including:
> - Institutions have web servers; farmers in Bolivia don't. (Saying that they *could* is not helpful; we're just creating more adoption burden for SSI by making the tech harder and more expensive for them.)
> - Servers can't make the first move in RESTful APIs; everything begins when a client initiates the interaction. This makes it natural for an institution (or a regulatory regime) to impose terms of service or reputation criteria on a client, and unnatural for a client to do the opposite. It's also convenient for hackers and surveillers, since they know they can catch all interactions at inception by simply listening on the server.
> - Because these APIs are online-only, and because the server always waits for the client to make the first move, they can only be operated by those who have a 24x7x365 cloud presence. Institutions and ordinary people don't have equal access to 24x7x365 cloud capabilities.
> - Because these APIs are secured on the server side by a cert, they can only be operated by those who have access to expensive, premium, centralized reputation. Again, institutions and people don't have equal access to this.
> This is not an exhaustive list of my concerns, but I think it's enough to trigger a conversation.
> 
> Proposed Alternative
> 
> We create web APIs, but we think about them differently. We conceive of all of them as exchanges of messages that could also be accomplished over Bluetooth, email, etc. HTTP(S) is just another transport, where messages happen to be passed by HTTP POST (or GET, as appropriate). Security properties associated with the exchange are based on the control of DIDs and embodied in the messages themselves (e.g., through encryption/signing), not in a transport layer. All semantics for the interaction are conveyed by the message content. The traditional URL namespacing of Swagger can still exist, but it becomes less interesting, since the message content must be enough to convey semantics on its own (so the messages are enough in other transports). Either party can initiate an interaction. Institutions and people are actually peers.
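> 
> To make that concrete, here is a rough sketch in Python. The message fields are purely illustrative, loosely echoing DIDComm conventions rather than quoting any spec:
> 
>     import json
>     import urllib.request
>     import uuid
> 
>     # A self-describing message: the @type URI carries the semantics, so
>     # no URL path, HTTP header, or status code is needed to interpret it.
>     message = {
>         "@type": "https://example.org/protocols/issue-credential/1.0/offer",
>         "@id": str(uuid.uuid4()),
>         "comment": "Offering the credential we discussed",
>     }
>     payload = json.dumps(message).encode()
> 
>     # Over HTTP, the transport is just a pipe for the bytes...
>     req = urllib.request.Request(
>         "https://agent.example.com/didcomm",
>         data=payload,
>         headers={"Content-Type": "application/json"},
>     )
>     urllib.request.urlopen(req)
> 
>     # ...but the same bytes could travel over Bluetooth, email, or a QR
>     # code, because none of their meaning lives in the HTTP layer.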
> 
> This is the world of DIDComm and the application-level protocols built atop it; it's described in Aries RFC 0003 <https://github.com/hyperledger/aries-rfcs/blob/master/concepts/0003-protocols/README.md>. But before folks get their hackles up about not wanting to work with Aries or DIF, note that I didn't propose that Aries protocols have to be the basis for this approach. Instead, I pointed out some characteristics we need to avoid. Can we explore that assertion on its merits, without getting immediately entangled in politics?
> 
> There are already maybe 8-10 software stacks, some independent from top to bottom, that implement an "API" for issuing credentials and an "API" for verifying presentations based on the model I just articulated. These implementations are demonstrably interoperable with one another. By proposing a new work item for a Swagger API for issuance and verification, we are walking away from interoperability with these implementations, and we are incurring all of the architectural disadvantages I highlighted in red above.
> 
> What if we did this instead?
> - Agree that, for issuance and verification, the goal should be alignment between the HTTP payloads and sequencing defined in the Aries protocols and those used by people who aren't Aries-centric. This could involve give and take in either direction; I'm not proposing that it has to be done by simply adopting the Aries work.
> - Agree not to depend on HTTP-specific constructs (e.g., HTTP headers, HTTP status codes) to signal important semantics--so the payloads could be exchanged over Bluetooth or email just as easily. (See the sketch below.)
> - Agree that, while URL namespacing gives us a nice hook into Swagger-oriented tools, all semantics required for the interaction are detectable from the payloads themselves.
> Then we'd have an HTTP solution that is Swagger-compatible but not limited to HTTP. The "API" we created would be interoperable on a much broader canvas than the web alone.
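> 
> As a sketch of that second point (with invented type URIs again), an error would travel as a message rather than as an HTTP status code, so it survives transports that have no status line:
> 
>     import uuid
> 
>     # Instead of signaling failure with a 403 at the HTTP layer, the
>     # verifier sends a problem-report *message*; over HTTP the response
>     # can still be 200, because the payload itself carries the outcome.
>     problem_report = {
>         "@type": "https://example.org/protocols/report-problem/1.0/problem-report",
>         "@id": str(uuid.uuid4()),
>         "description": "presentation did not satisfy the proof request",
>     }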
> 
> If we further agreed to this:
> - Authentication of both parties in the interaction will be done on the basis of DID control, not on the basis of certs.
> Then we'd also eliminate the dependence on the web's flawed PKI model, and the power imbalance of today's web. But I know this is more controversial.
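> 
> As a rough illustration of what that might look like (using Python and the 'cryptography' package; the DID, key reference, and envelope shape are placeholders, not a proposed format):
> 
>     import json
>     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
>         Ed25519PrivateKey,
>     )
> 
>     # Each party proves control of a DID by signing with a key that its
>     # DID document publishes; no certificate authority is involved.
>     signing_key = Ed25519PrivateKey.generate()  # in practice, from a wallet
>     payload = json.dumps({"@type": "https://example.org/ping/1.0/ping"}).encode()
>     envelope = {
>         "payload": payload.hex(),
>         "signature": signing_key.sign(payload).hex(),
>         # The verifier resolves this DID URL to fetch the public key:
>         "kid": "did:example:123456789abcdefghi#keys-1",
>     }
> 
>     # The verifier checks the signature against the key from the DID
>     # document--on any transport, with no cert chain in sight.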
> 

Received on Tuesday, 31 December 2019 13:02:36 UTC