
Re: Mitigating DDoS via Proof of Patience

From: Eric Korb <eric.korb@accreditrust.com>
Date: Mon, 29 Jun 2015 13:47:29 -0400
Message-ID: <CAMX+RnAMaTy8fsQe082+w_yVMxWMnkj9OPdkhy1KthOvvQHQOQ@mail.gmail.com>
To: Dave Longley <dlongley@digitalbazaar.com>
Cc: Melvin Carvalho <melvincarvalho@gmail.com>, Credentials Community Group <public-credentials@w3.org>
+1 Dave & Manu

With respect to adoption:  Yes, it will take the big players to join in.
But, so far, I don't see widespread adoption of anything that is as simple
to implement as we're proposing.  I believe that's because we don't have
special interests in mind.

Our goals are: portability, extensibility, sustainability, and
pseudo-anonymity.  I believe the proposed approach addresses all of these.
Now, let's focus on delivering a possible solution.

Accreditrust has potentially millions of credentials that will be ready for
issuance in the coming months - if the approach comes up short, we'll all
work hard to fix it.  But at least we'll have real use cases in play -
not just theory.

Eric


"Trust only credentials that are TrueCred™ verified."
----------------------------------
Eric Korb, President/CEO - accreditrust.com <https://www.accreditrust.com>
GoogleVoice: 908-248-4252
http://www.linkedin.com/in/erickorb @erickorb @accreditrust

On Mon, Jun 29, 2015 at 12:07 PM, Dave Longley <dlongley@digitalbazaar.com>
wrote:

> On 06/29/2015 11:11 AM, Melvin Carvalho wrote:
> >
> > No, I totally agree, it's a hard problem.  What you need is to cache the
> > link from a time when the resource was there, to get an out-of-band
> > link, or to get some other kind of meta-credential showing that link.
>
> Simple caching won't work, you need a system of trust around it.
> Consumers of credentials certainly won't have cached the credentials for
> everyone on the Web, and they don't know who is going to visit their site.
>
> >
> > Yep, it is definitely complex.  Maybe the old credential site could help
> > facilitate the move.  I think whatever the solution it involves caching
> > of the previous content, right?
>
> "Maybe X could help" still isn't the detail we need.
>
> > OK, well let's acknowledge that it's a VERY hard problem to solve, with
> > or without web architecture.
> >
> > I think perhaps two conversations are going on.
> >
> > There are going to be a few ways to do it, all complex.  Independently of
> > working out a solution to the problem, we need to work out the cost
> > benefit here: if it's not an edge case, is it something in the critical
> > path, or optional?
>
> This is what I was saying in my original email. There's going to be
> complexity somewhere. So the choice is where you put it. I recommend
> away from people (the users of the system) as much as possible. Bury it
> in the infrastructure.
>
> > A clean, modular way of doing this means each person can take the
> > solution they prefer.  Perhaps this was what was in mind already, but it
> > wasn't 100% clear.  So sorry if we're discussing two separate points at
> > the same time.
>
> People should be able to do that with URLs as identifiers. However,
> we're going to want a common, simple use case for most people because
> they don't *want* to pick a solution. They don't know enough about the
> problem and they don't want to even know it is a problem. They just want
> to be able to get credentials, use them, and not lose them as they make
> different choices on the Web.
>
> In fact, relatively speaking, almost no one wants to know the details
> here. Even for us technologists, having to think about the details at
> all is tedious.
>
> >
> > OK, that sounds excellent, and mitigates a lot of what I've said above.
> > I'm concerned in general about how prominent this feature would be, wrt
> > increasing overall complexity vs utility.  That's just a subjective
> > viewpoint, totally respect your point of view -- just from experience
> > people starting new URI schemes tend to underestimate the challenge of
> > integration.
>
> Don't get me wrong, your point is well taken. We want this work to
> succeed -- and we're trying to build it such that it can on various levels.
>
> >
> > OK, I hear you.  My replies have been quite "hand wavy", and in part,
> > that's because we seem to be at a new frontier.
>
> :)
>
> >
> > My experience of this was in creating a web version of the bitcoin block
> > chain where each block is stored in /.well-known/ni/hash.  What I would
> > have liked to do is have a sameAs relation to many different servers
> > storing those hashes, creating a distributed database or block chain.
> > That way blocks can be verified from a number of places, and new blocks
> > added (e.g. via HTTP PUT).  A longest-chain-wins rule ensures integrity.
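[Editor's note: the content-addressed verification described above can be sketched as follows. This is an illustrative assumption based on the RFC 6920 "ni" naming convention (base64url SHA-256 digests under /.well-known/ni/sha-256/), not code from the original discussion; the block format is made up.]

```python
import base64
import hashlib

def ni_digest(data: bytes) -> str:
    # RFC 6920-style name: base64url-encoded SHA-256, padding stripped.
    raw = hashlib.sha256(data).digest()
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

def verify_block(data: bytes, expected_digest: str) -> bool:
    # A block fetched from /.well-known/ni/sha-256/<digest> is trustworthy
    # only if its content re-hashes to the digest in its own URL, so any
    # mirror (the "sameAs" servers above) can serve it safely.
    return ni_digest(data) == expected_digest

block = b'{"prev": "abc123", "payload": "example"}'
digest = ni_digest(block)
print(verify_block(block, digest))           # valid copy
print(verify_block(block + b"x", digest))    # tampered copy
```

Because validity depends only on the content, not the server, a consumer can fetch the same block from any mirror and reject corrupted or tampered copies locally.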
>
> One way we considered doing `dids` was via the blockchain. The problem
> is that we don't want to tie them to a single public key and we also
> don't want to use content-based addresses. Ideally, there would be some
> magic DNS tech that let anyone claim a name, at no cost, and
> cryptographically assert ownership over it. Then resolving that name
> would take you to a trusted decentralized network that is somehow
> incentivised to work. How's that for "hand-wavy"? :)
>
> With the `did` scheme, we're trying to engineer something similar to
> that but that operates at the Web (HTTP) level.
>
> >
> > Totally agree.  My preference is to create implementations, as I think
> > you do too.  I think it's going to be helpful, in this case, to have a
> > clear idea of the problem statement, and how it fits into the bigger
> > context.  Because there's a ton of applications for this technology and
> > tons of little bits that can be solved with existing stuff.
> >
> > Really appreciate you taking the time to explain.
>
> Sure. As always, we need more of this stuff written down in
> easily-consumable documentation. :)
>
>
> --
> Dave Longley
> CTO
> Digital Bazaar, Inc.
>
>
Received on Monday, 29 June 2015 17:48:17 UTC
