Re: Use-Cases - pseudo-anonymity examples

On 3/2/16 5:32 PM, Timothy Holborn wrote:
> High-risk vs. low-risk seems to be the wrong analogy.
>
> What is the means for declaring security anywhere between 99.999999%
> secure and 0.0000001%?
>
> Two gears, high and low, would likely isolate too many use cases and
> result in setting the bar too low, as opposed to rationalising why
> some of the higher-end, life-threatening stuff is difficult, if not
> impossible, to promise.

Interesting; perhaps that is so. But a continuum from 99.999...% down 
to 0.00001% is only useful to people who like mathematics. I do; I 
assume you do; but my sister, for instance, would be at a complete loss.

To function as a system that everyone can interact with -- developers 
right across to end-users -- I think there has to be a recognition of 
what psychologists have identified: that average humans can hold 
between three and seven pieces of information in mind at one time. (I 
know this is old news, but I recently saw a report of a study that 
reconfirmed it from a new experimental angle, so I think it's fair to 
bring it in here.) Even people adept at math probably wouldn't want to 
be presented with more than seven levels to choose from. It's the 
NASCAR problem: too many choices crowding the screen.

In other words, to get global adoption, we can split 'privacy' into, 
maybe, high-medium-low. A choice of three. I think my sister could 
handle that. She could decide whether she needed more privacy than 
most people (high), about the same as most people (medium), or less 
privacy than most people (low).
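
Just to make that concrete -- and this is purely a sketch of my own, 
not anything from the existing drafts; the names PrivacyLevel, 
PresentationPreferences, preferencesFor, allowCorrelation, and 
minimizeDisclosure are all made up for illustration -- the user-facing 
choice could be a single three-value setting that the software then 
maps onto whatever concrete behaviour the WG eventually specifies:

    // Hypothetical sketch only: a three-value privacy choice a holder's
    // software might expose, instead of a 0.0000001%-99.999999% dial.
    type PrivacyLevel = 'low' | 'medium' | 'high';

    interface PresentationPreferences {
      level: PrivacyLevel;
      allowCorrelation: boolean;   // may the verifier link this use to others?
      minimizeDisclosure: boolean; // release only the attributes asked for
    }

    // Map the user's one-of-three choice onto concrete defaults; the exact
    // mapping would be the WG's job, this just shows the shape of the idea.
    function preferencesFor(level: PrivacyLevel): PresentationPreferences {
      switch (level) {
        case 'low':
          return { level, allowCorrelation: true, minimizeDisclosure: false };
        case 'medium':
          return { level, allowCorrelation: false, minimizeDisclosure: true };
        case 'high':
          return { level, allowCorrelation: false, minimizeDisclosure: true };
      }
    }

    console.log(preferencesFor('medium'));

My sister never sees any of that; she just picks one of the three 
words, and anything finer-grained stays a developer concern.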

But I do agree that just high and low takes a greater risk than 
necessary and could lose too many important cases. Maybe the Charter 
could split the cases into those needing low, medium, and high 
privacy; state that the low and medium cases will be required to be 
covered in the WG, as separate cases; and define high as anything 
beyond the medium cases -- to be covered if possible, but not 
necessarily, depending on technical considerations yet to be worked 
out.

Doing this would open up a Pandora's box of argument about what 
constitutes 'medium' and what 'high', but the privacy/security box is 
wide open world-wide now anyway, so it won't be a big surprise to 
people, IMO.

Steven


>
> On Thu, 3 Mar 2016 at 6:20 AM, Dave Longley
> <dlongley@digitalbazaar.com> wrote:
>
>     On 03/02/2016 12:26 PM, Steven Rowat wrote:
>      > On 3/1/16 9:41 PM, Anders Rundgren wrote:
>      >> Pardon the naive question (I haven't followed the credentials
>      >> work in detail), but how is the link between the credential and
>      >> the documents it is supposed to be associated with established?
>      >
>      > I don't know. I was assuming in the new examples I provided
>      > (anonymous Journalist, Scientist whistle-blower, pseudonymous
>      > Novelist) that:
>      >    a)  it would turn out to be more or less the same code
>      >        mechanism as the existing "June and the bottle" example
>      >        would need;
>      >    b)  some mechanisms for doing this have been discussed in the
>      >        past; and
>      >    c)  the current goal is to get the Charter accepted (work
>      >        protocol time-lines and use-case goals), not specific data
>      >        structures.
>      >
>      > So IMO the answer to your question lies in the work that would be
>      > done after the Credentials technical group is underway.
>      >
>      > But I may misunderstand the process. Can anyone else comment?
>
>     You understand the process correctly, but there is an element of this
>     that is important in what user stories we tell in the use cases we're
>     submitting for review.
>
>     As you have pointed out, scenarios that involve the use of
>     pseudo-anonymous credentials may differ considerably in terms of
>     risk. It isn't necessarily true that the mechanism used to provide
>     pseudo-anonymity in low-risk scenarios would be the same as the one
>     used in high-risk scenarios.
>
>     People reviewing the charter and use cases may look at high-risk
>     scenarios and reason that the problem is too difficult to solve and
>     decide to vote against the work proceeding. I myself think that there
>     are high-risk pseudo-anonymity use cases that are not solved nearly as
>     easily or via the same mechanisms as low-risk scenarios.
>
>     I think it's a good idea to keep high-risk scenarios around as targets
>     for future work, but I don't think we should say we need to solve them
>     in our first attempt to get work started. I would prefer to keep such
>     use cases in our community group's "vision document" or "larger set of
>     use cases for the future". I think they could be a distraction and
>     harm our chances to get work started.
>
>
>     --
>     Dave Longley
>     CTO
>     Digital Bazaar, Inc.
>     http://digitalbazaar.com
>

Received on Thursday, 3 March 2016 03:11:42 UTC