Re: Identity Hubs and Agents

>
> One thing I'd like to say just briefly: software agents cannot act as
> fiduciaries.
> Only legal persons can do that. I'm not sure of the best alternative term,
> but data and information fiduciaries are one of the more important
> elements of privacy/agency that we are still figuring out. Lawyers,
> CPAs, and doctors are all fiduciaries, with an obligation to put their
> principal's
> interest first--and the obligations of each are highly differentiated and
> regulated both by law and by practice (enforced by professional code).
>

You are using a narrow, legal definition of "fiduciary" here. Under that
definition, I agree with your analysis. But that is not the definition that
Aries has been using, and if you look up the term "fiduciary" in a
dictionary, you will find that software can satisfy the more general
definition just fine. It doesn't have to be as automated as Iron Man's
Jarvis, and it doesn't have to be a legal person, to meet this bar; it just
has to be clear about who its master is.

The distinction we are trying to get at with our use of the term
"fiduciary" is important, even if the term is imperfect. In the
surveillance economy, software that treats the user as the product (think
Gmail or Facebook) doesn't have a deep duty to you as a person--and you can
tell it doesn't, if you read the 20 pages of legalese in its terms of
service.

An agent has a duty to its master, first and foremost--and ideally, to
nobody else. It can acquire indirect responsibilities (e.g., to satisfy a
government auditor) through its obligation to help its master--but it can't
directly "report" to anybody other than its master, and it can't
short-circuit its master or do things without its master's approval, and
still meet this bar. In the "real world", we think that talent agents and
real estate agents are pretty lousy agents if they don't act with firm
loyalty to the wishes of their masters, and if they don't avoid conflicts
of interest. The same principle applies in identity land. I hope that the
software on my smartphone that helps me make phone calls is my
fiduciary in the sense that I can trust it to carry out exactly and only my
wishes. If the NSA has hacked it, or malicious developers have designed it,
to do other/additional things, then it's not my agent any more--or if it
is, it's a pretty lousy one.


> It's also worth noting the term user-agent, which is the accepted
> term for software, like your web browser, which acts "hand-in-glove"
> to execute the direction of the user. These are clearly not autonomous
> (although they can run autonomous apps).
>

Unfortunately, most browsers today are examples of something that is *not*
fiduciary in the sense that we care about--as anybody who has helped an
elderly relative uninstall junk add-ons and ridiculous home page settings
can attest. Browsers partly do the bidding of the user who drives them--but
they also do the bidding of the browser maker, and of the websites that
load them up with adware and third-party cookies and fingerprinting
algorithms. They are incredibly feature-rich nowadays--but they are poor
fiduciaries. And that is the source of lots of privacy pain, cybersecurity
vulnerabilities, etc. This is exactly why we avoided the term "user-agent"
when describing the concept. (I admit, however, that as originally
conceived, "user-agent" might have been a reasonable synonym.)


> 1. In all of these cases, is it correct to say that the "agent" uses some
>    key material to act on behalf of its controller?
>

Yes. Agents holding keys is one of their 3 essential characteristics
<https://github.com/hyperledger/aries-rfcs/tree/master/concepts/0004-agents#essential-characteristics>
.


> 2. Do the relevant actions occur "automatically" or autonomously
>    once configured?
>

Not necessarily. An agent can be configured to always ask the owner for
permission before taking any action. I expect that most credential exchange
will work this way, with a small subset taking place more automatically.
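
To make that concrete, here is a rough Python sketch of such a permission
gate. None of these names are Aries APIs; they are invented for
illustration.

    # Minimal sketch of a permission-gated agent. PolicyGatedAgent,
    # ask_owner, and the action names are hypothetical, not Aries code.
    from dataclasses import dataclass, field
    from typing import Callable, Set

    @dataclass
    class PolicyGatedAgent:
        owner: str
        auto_approved: Set[str] = field(default_factory=set)  # pre-approved action types
        ask_owner: Callable[[str], bool] = lambda prompt: False  # default: refuse

        def handle(self, action: str, details: str) -> str:
            if action in self.auto_approved:
                return f"{action}: done automatically for {self.owner}"
            if self.ask_owner(f"Allow '{action}'? ({details})"):
                return f"{action}: done with {self.owner}'s explicit consent"
            return f"{action}: refused; {self.owner} did not approve"

    # Most credential exchange would go through ask_owner (imagine a push
    # notification to the owner); a small subset, like routine message
    # forwarding, is configured to run automatically.
    agent = PolicyGatedAgent(
        owner="Alice",
        auto_approved={"forward-message"},
        ask_owner=lambda prompt: True,  # stand-in for a real owner prompt
    )
    print(agent.handle("present-credential", "proof of age for example.com"))
    print(agent.handle("forward-message", "route to Alice's other device"))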


> 3. In some cases, these keys may have attenuated delegation, correct?
>    Which is to say that different agents have different privileges.
>

Yes. This is a core characteristic. The agent-relationship plane is part of
the 3-dimensional model of identity
<https://medium.com/evernym/three-dimensions-of-identity-bc06ae4aec1c> and
is where we make exactly these choices.
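
A rough sketch of what attenuated, per-agent delegation means in practice
(the key ids and privilege names here are invented, not the real Aries
authorization-policy format):

    # Sketch of attenuated delegation: each agent key is registered with an
    # explicit, limited set of privileges delegated by the owner.
    AGENT_AUTHORIZATIONS = {
        "did:ex:alice#key-1": {"agent": "phone",  "can": {"sign", "present-proof", "admin"}},
        "did:ex:alice#key-2": {"agent": "laptop", "can": {"present-proof"}},
        "did:ex:alice#key-3": {"agent": "cloud",  "can": {"route-messages"}},
    }

    def is_authorized(key_id: str, privilege: str) -> bool:
        """Was the agent holding key_id delegated this privilege by the owner?"""
        entry = AGENT_AUTHORIZATIONS.get(key_id)
        return entry is not None and privilege in entry["can"]

    # Alice's phone may sign on her behalf; her cloud agent may only route traffic.
    assert is_authorized("did:ex:alice#key-1", "sign")
    assert not is_authorized("did:ex:alice#key-3", "sign")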

>
> 4. Are there any cases under consideration where the agent (or
>    perhaps the firm who created or runs the agent) accepts legal liability
>    for its actions?
>

Yes. When you accept a car loan with an app on your phone, you are entering
into a legally binding contract using an agent that manages the keys with
which you affix your digital signature. You may have the impression that
you actually signed something, but in fact you can't consume and emit byte
streams or do public key cryptography in your head; your agent had to do
that work for you.
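
For illustration only (this is not the loan app's actual code; the contract
payload and the library choice are my assumptions), here is roughly the
byte-level work the agent performs, sketched with the pyca/cryptography
library:

    # What the agent actually does when "you" sign: a sketch. The contract
    # payload and the surrounding loan-app protocol are invented.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key material the agent holds on the owner's behalf (in practice it lives
    # in a wallet or secure enclave, not a local variable).
    signing_key = Ed25519PrivateKey.generate()
    verification_key = signing_key.public_key()

    # The "contract" the owner approves on screen, serialized to bytes.
    contract = b'{"loan": "car", "amount": 25000, "borrower": "did:ex:alice"}'

    # The owner taps "Accept"; the agent consumes the byte stream and emits
    # the signature--work nobody can do in their head.
    signature = signing_key.sign(contract)

    # The lender's software verifies the signature against the public key.
    try:
        verification_key.verify(signature, contract)
        print("signature valid: the contract is binding")
    except InvalidSignature:
        print("signature invalid")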

Note that this answer is also implied by DID-AuthN and the DID spec,
because any holder of keys that acts as a controller of a DID is able to
act as that DID in whatever digital interactions expect proof of DID
control. This makes me uneasy, because I don't think the control granted in
this way is sophisticated enough. I think we need more granular mechanisms.
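
Here is a sketch of why. A bare challenge-response proof of DID control
(illustrative only, not the exact DID-AuthN flow) tells the verifier
nothing more granular than "this party holds a controller key":

    # The verifier learns only that the responder holds a controller key--
    # nothing about which privileges that key should carry.
    import os

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    controller_key = Ed25519PrivateKey.generate()  # any holder of this key...
    did_public_key = controller_key.public_key()   # ...is "the DID" to a verifier

    def prove_control(challenge: bytes) -> bytes:
        # Whoever holds the key--the owner's agent, a thief, a coerced
        # device--produces the same acceptable proof.
        return controller_key.sign(challenge)

    def verify_control(challenge: bytes, proof: bytes) -> bool:
        try:
            did_public_key.verify(proof, challenge)
            return True
        except InvalidSignature:
            return False

    nonce = os.urandom(32)  # verifier's random challenge
    print(verify_control(nonce, prove_control(nonce)))  # True: all-or-nothing control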


> These appear to be good terms for modifying the generic term
> "agent". I'm not sure what those modifying adjectives might be, but we
> would
> do well to find terms that clarify we don't mean other kinds of agents.
> Key agent?
> Escrow agent? Autonomous agents? Empowered Agents? Cryptographic agents?
>
> Delegated Autonomous Cryptographic Agents?
>

Please review the existing terminology before introducing new terms:
https://github.com/hyperledger/aries-rfcs/tree/master/concepts/0004-agents#categorizing-agents

Received on Wednesday, 14 August 2019 03:55:28 UTC