Re: Use-Cases - pseudo-anonymity examples

On Thu, 3 Mar 2016 at 14:12 Steven Rowat <steven_rowat@sunshine.net> wrote:

> On 3/2/16 5:32 PM, Timothy Holborn wrote:
> > High-risk vs. low-risk seems to be the wrong analogy.
> >
> > What is the means for declaring security between 99.999999% secure
> > and 0.0000001% secure?
> >
> > Two gears, high and low, would likely isolate too many use cases and
> > result in setting the bar too low, as opposed to rationalising why
> > some of the higher-end, life-threatening stuff is difficult, if not
> > impossible, to promise.
>
> Interesting; perhaps that is so. But "99.999...--0.00001" continuums
> are only good for people who like mathematics. I do; I assume you do;
> but my sister, for instance, would be at a complete loss.
>
Computers use binary; code is math.  So the challenge is always to
translate the receipt into something meaningful for a customer, whilst
using science (math) to protect them from themselves and others.

Arguments always exist around how much someone should be protected from
themselves.

Similarly, arguments about which values need to be protected are always
open to discourse.  Many considerations of the past have later been
found to be false.  Absolute security results in a lack of flexibility,
and that matters at every level, from the selection of a president
through to the nature of a seemingly straightforward court order:
https://www.youtube.com/watch?v=HqI0jbKGaT8

The unfortunate situation is that without a means for accountability,
issues are less able to be evaluated by others.


The binary nature of 'high stakes' vs. 'low stakes' was, and is, really
useful at a lay level; however, with tools such as ontologies, I think
it is worth considering variations in how we explain the differentiation
between what we claim is possible and the means for risk declarations
that qualify those claims, or some such further consideration beyond
simply notating 'high stakes' and 'low stakes'.
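
As a very rough sketch of the kind of risk declaration I have in mind
(tier names and fields are entirely hypothetical; a thought experiment,
not a proposal for the actual vocabulary), a claim could be published
together with a graded, reviewable qualification rather than a single
high/low flag:

    from dataclasses import dataclass
    from enum import IntEnum


    class RiskTier(IntEnum):
        """Ordinal risk tiers; finer-grained than a high/low binary."""
        MINIMAL = 1   # e.g. pseudonymous comments on a blog
        LOW = 2       # e.g. loyalty-card style credentials
        MODERATE = 3  # e.g. professional qualifications
        HIGH = 4      # e.g. financial or medical records
        CRITICAL = 5  # e.g. whistle-blower, life-threatening contexts


    @dataclass
    class RiskDeclaration:
        """Qualifies a claim: what is promised, at what tier, and why."""
        tier: RiskTier
        rationale: str    # human-readable justification for the tier
        guarantees: list  # what the issuer actually promises


    declaration = RiskDeclaration(
        tier=RiskTier.MODERATE,
        rationale="Pseudonymous credential; issuer can re-identify "
                  "the holder under a court order.",
        guarantees=["unlinkability between relying parties"],
    )

The declaration itself then becomes data that others can evaluate,
which speaks to the accountability point above.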



> To function as a system that everyone can interact with -- developers
> right across to end-users -- I think there has to be a recognition of
> what the psychologists have identified: that average humans can hold
> between three and seven pieces of information in their mind at one
> time. (I know this is old news, but I recently saw a report of a study
> that reconfirmed this from a new experimental angle, so I think it's
> fair to throw it in here).  Even people adept with math probably
> wouldn't like to be presented with more than seven levels to choose
> from. The NASCAR problem.
>
The psychology problem is complex, and often also relates to conscious
vs. subconscious processing, alongside issues of subversive
misinformation vs. the learning of true information, and the tactical
advantages that can be gained by exploiting that capability in areas
such as 'plausible deniability', or indeed the specifics of court cases
where legal tactics are used to obtain a particular outcome that isn't
reflective of the broader issues, and the related strategic actions,
that resulted in the legal dispute.

> In other words, to get global adoption, we can split 'privacy' into,
> maybe, high-medium-low. A choice of three. I think my sister could
> handle that. She could decide whether she needed more privacy than
> most people (high), about the same as most people (medium), or less
> privacy than most people (low).
>
I think perhaps an ontology?  Really not sure.  I think the sum or
dial-like UI element may be created through the application of 'check
boxes' (sketched below), but the considerations are far broader than
simply creds.
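
To make the 'check boxes feeding a dial' idea concrete (a toy sketch
only; the questions and thresholds are invented for illustration), a
handful of yes/no answers could be collapsed into the three lay-level
settings, while the underlying answers remain available for
finer-grained policy:

    def privacy_setting(answers: dict) -> str:
        """Collapse check-box answers into a three-position dial."""
        score = sum(1 for ticked in answers.values() if ticked)
        if score >= 4:
            return "high"
        if score >= 2:
            return "medium"
        return "low"


    # Hypothetical check boxes a lay user could understand:
    answers = {
        "publishes_under_real_name": False,
        "handles_others_personal_data": True,
        "operates_in_hostile_jurisdiction": False,
        "subject_to_professional_secrecy": True,
        "at_risk_of_targeted_attack": False,
    }

    print(privacy_setting(answers))  # prints "medium" (score of 2)

The dial is only the presentation layer; nothing stops the underlying
ontology from recording the answers at full resolution.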

With Creds in mind specifically, I've tried to consider the array of
considerations in view, both internally to the design and through
collaborative efforts with other W3C Groups.


> But I do agree that just high and low is a greater risk than
> necessary, and risks losing too many important cases. Maybe the
> Charter could make reference to such a splitting of cases into needing
> low, medium, high, and make reference to the fact that low and medium
> privacy will be required to be covered in the WG, as separate cases,
> and that high will be defined as beyond the medium cases but not
> covered necessarily, depending on technical considerations yet to be
> worked out.
>
> Doing this would open up a Pandora's box of argument about what
> constitutes 'medium' and what 'high', but the privacy/security box is
> wide open world-wide now anyway, so it won't be a big surprise to
> people, IMO.
>
Agreed, yet I'm not convinced about the three-setting model.

https://www.newscientist.com/article/2078419-apple-vs-fbi-who-will-win-this-struggle-for-power/

When seemingly 'low-stakes' applications of the technology are used for
trust purposes in an isolated manner, unintended consequences and/or
exploitable capabilities can be produced.

IMHO there is an underlying issue that relates to 'Pandora's box': what
is the meaning of democracy and the rule of law?  What is the meaning
of sovereignty, and of the vote of the people?

I understand people will have different views, ranging from religious
beliefs, to one-world-government beliefs, to trusting no-one but the
technology (forgetting that companies build technology in countries,
and that no-one lives in a vacuum), etc.

A semantic graph of current 'silos' (see
https://www.w3.org/DesignIssues/CloudStorage.html for an example) shows
clear differentiations between what is possible in a world that depends
on RDBMS systems vs. one that leverages Linked Data.
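
As a toy illustration of that differentiation (the identifiers are
invented for the example, and the second predicate is hypothetical): a
siloed RDBMS keys its records internally, whereas Linked Data names
things with URIs, so independent systems can describe, merge, and
verify statements about the same subject:

    # Silo: a row keyed by an internal id, meaningless outside this
    # particular database.
    silo_row = {"user_id": 42, "name": "June", "verified": True}

    # Linked Data: globally-named statements that remain meaningful,
    # and checkable, when moved between systems.
    triples = [
        ("https://example.org/people/june",      # subject is a URI
         "http://xmlns.com/foaf/0.1/name",       # shared vocabulary term
         "June"),
        ("https://example.org/people/june",
         "https://example.org/vocab#verified",   # hypothetical predicate
         True),
    ]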

Understanding the desires of different stakeholder groups is important
to understanding the future possibilities for humans.  It would be
better if we had an honest foundation for the considerations therein;
yet I am concerned about the 'Pandora's box' implications, and how they
may influence our capacity for verifiable, tamper-evident
implementations of linked-data related capabilities.  Those capabilities
would in turn provide significant scientific advancement for
humanitarian progress, whilst potentially also being quite disruptive
to systems that were not functioning very well, but whose encumbrances
were not easy to understand due to a lack of verifiable and accessible
data.

I'm particularly interested in human-rights use-cases, with special
regard for those that impact the nature of the world as experienced by
children, for whom we hold moral responsibility.

I also think that depending on how people got to a position of influence,
this stuff might really frighten them. I don't think it should.

Tim.


> Steven
>
>
> >
> > On Thu, 3 Mar 2016 at 6:20 AM, Dave Longley
> > <dlongley@digitalbazaar.com> wrote:
> >
> >     On 03/02/2016 12:26 PM, Steven Rowat wrote:
> >      > On 3/1/16 9:41 PM, Anders Rundgren wrote:
> >      >> Pardon the naive question (I haven't followed the credentials
> >      >> work in detail), but how is the link between the credential
> >      >> and the documents it is supposed to be associated with
> >      >> established?
> >      >
> >      > I don't know. I was assuming in the new examples I provided
> >      > (anonymous Journalist, Scientist whistle-blower, pseudonymous
> >      > Novelist) that:
> >      >    a)  it would turn out to be more or less the same code
> >      >        mechanism as the existing "June and the bottle" example
> >      >        would need;
> >      >    b)  some mechanisms for doing this have been discussed in
> >      >        the past; and
> >      >    c)  the current goal is to get the Charter accepted (work
> >      >        protocol time-lines and use-case goals), not specific
> >      >        data structures.
> >      >
> >      > So IMO the answer to your question lies in the work that would
> >      > be done after the Credentials technical group is underway.
> >      >
> >      > But I may misunderstand the process. Can anyone else comment?
> >
> >     You understand the process correctly, but there is an element of this
> >     that is important in what user stories we tell in the use cases we're
> >     submitting for review.
> >
> >     As you have pointed out, scenarios that involve the use of
> >     pseudo-anonymous credentials may differ considerably in terms of
> >     risk. It isn't necessarily true that the mechanism used to provide
> >     pseudo-anonymity in low-risk scenarios would be the same as the
> >     one used in high-risk scenarios.
> >
> >     People reviewing the charter and use cases may look at high-risk
> >     scenarios and reason that the problem is too difficult to solve and
> >     decide to vote against the work proceeding. I myself think that
> >     there are high-risk pseudo-anonymity use cases that are not solved
> >     nearly as easily or via the same mechanisms as low-risk scenarios.
> >
> >     I think it's a good idea to keep high-risk scenarios around as
> >     targets for future work, but I don't think we should say we need
> >     to solve them in our first attempt to get work started. I would
> >     prefer to keep such use cases in our community group's "vision
> >     document" or "larger set of use cases for the future". I think
> >     they could be a distraction and harm our chances to get work
> >     started.
> >
> >
> >     --
> >     Dave Longley
> >     CTO
> >     Digital Bazaar, Inc.
> >     http://digitalbazaar.com
> >
>
>

Received on Thursday, 3 March 2016 03:42:58 UTC