Re: A Decentralized Hashtable for the Web

On 11/03/2015 08:53 AM, Erik Anderson wrote:
> It's called Redis in active-active mode. Plenty of forks of Redis out
> there that can accomplish this.

Yes, that's one way to technically implement the WebDHT. There are other
ways too. The important part of standards-making is not how you
implement (although that matters). The important part of
standards-making is the set of messages and the protocol that you use
to get heterogeneous implementations to talk to one another.

Apache, IIS, and Nginx are all implementations of the HTTP protocol
(among other things). It's the protocol that makes the Web work. The
WebDHT is about the protocol, not any particular implementation.

> Additionally you can't have a decentralized hashtable until you have
> a "non-forking" trust mechanism.

Yes, that's true in general. The approach the WebDHT is going to take
is to make a 51% attack very difficult by rotating each entry's
storage location randomly, based on entropy in the network.
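
To make that concrete, here's a minimal sketch (my own illustration,
not the WebDHT spec; the function, ring layout, and replication
factor are all assumptions) of deriving placements from a shared
per-epoch entropy value:

    import hashlib

    def storage_nodes(key, epoch_entropy, ring, k=3):
        # Mix the key with the epoch's network-wide entropy; honest
        # nodes that agree on the entropy agree on the placement, but
        # an attacker can't pre-position capacity for future epochs.
        digest = hashlib.sha256(key + epoch_entropy).digest()
        start = int.from_bytes(digest, "big") % len(ring)
        # Replicate across k consecutive ring positions.
        return [ring[(start + i) % len(ring)] for i in range(k)]

    ring = ["node-%d" % i for i in range(8)]
    print(storage_nodes(b"alice", b"epoch-1-entropy", ring))
    print(storage_nodes(b"alice", b"epoch-2-entropy", ring))  # moves

When the epoch entropy rotates, every key's storage set moves to an
unpredictable new location, so capturing a majority of a key's
replicas is only ever useful for a single epoch.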

> I am still not convinced you can decentralize trust.

You can decentralize it; it's just not possible to say that an entry
in the DHT is 100% accurate at all times while a nation state is
attacking the network. That said, no trust mechanism that operates
over a network today can make that claim.

> Even 51% attack of a proof-of-work majority is centralizing trust.

Correct - case in point, PRC mining:

https://www.reddit.com/r/Bitcoin/comments/3fjkcs/the_communist_part_of_china_controls_more_than_51/

That said, there is no proof-of-work required for the WebDHT.

> 1 bad bug in a new deployment of a decentralized hashtable could 
> destroy the entire decentralized hashtable network.

Yes, it could, if the core algorithms are wrong. The same holds for
all routing algorithms in use on the Internet today, the cryptographic
algorithms we use to secure our traffic, mail server protocols, DNS,
etc. My point is that we've solved problems like this at the IETF and
W3C before, and doing so is what gave us the Internet and the Web.
This problem is no different: we need to get the algorithms right (and
we have 15+ years of collective experience building DHTs on the
Internet).

> But that's what snapshots and restore points are, right? Well you 
> still need 51% of the network to agree to rollback to a restore 
> point.

Only if you assume the network is proof-of-work based, which the WebDHT
is not. Other approaches include random oracles, NGOs, trusted notaries,
DHT cleaning, etc. We haven't made any decisions on what the best
approach for the WebDHT would be.
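
As a rough illustration of the trusted-notary option (a sketch only;
the notary set, the threshold, and the verify() helper are
placeholders, not anything we've decided on):

    NOTARY_KEYS = {"notary-a", "notary-b", "notary-c"}  # placeholders
    THRESHOLD = 2  # e.g. 2-of-3 notaries must co-sign a rollback

    def restore_point_valid(root_hash, signatures, verify):
        # signatures: iterable of (notary_id, sig) pairs; verify()
        # stands in for a real signature check against that notary's
        # public key.
        good = {nid for nid, sig in signatures
                if nid in NOTARY_KEYS and verify(nid, root_hash, sig)}
        return len(good) >= THRESHOLD

The point is that a rollback needn't require 51% of the raw network;
it can instead require agreement among a small, accountable set.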

> Now spam the network with new hash entries and watch the hashtable 
> bloat in size and become un-maintainable.

The WebDHT uses a new technique we're calling proof-of-patience to
reduce the effect of DDoS attacks and hashtable spamming to something
the network can handle.
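
To give the flavor of the idea, here is a rough sketch (my
illustration, not the actual proof-of-patience algorithm; the window
value is arbitrary) of a delay-based admission rule that makes table
spam cost wall-clock time per entry:

    import time

    PATIENCE_WINDOW = 3600  # seconds; illustrative value only
    pending = {}  # key -> time the key was first announced

    def announce(key):
        pending.setdefault(key, time.time())

    def try_commit(key, table):
        announced = pending.get(key)
        if announced is None or time.time() - announced < PATIENCE_WINDOW:
            return False  # not announced, or not patient enough yet
        table[key] = "committed"
        del pending[key]
        return True

An attacker can announce millions of keys, but each one sits in a
pending state until it has aged past the window, which caps the rate
at which the table itself can grow.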

> Technology is just not there and network/memory/disk resources are 
> still expensive so few are willing to maintain this.

There are strong economic incentives for organizations like identity
providers and identity banking services to maintain this sort of
network. Without the network being online and running smoothly, those
companies don't have a business. I imagine large organizations like
Google and Verisign will eventually run WebDHT nodes just like they run
popular DNS nodes today.

> We are still 10 years from seeing something like this being even
> remotely mature enough to reach adoption.

While I disagree with your timeframe, let's assume your premise.

The work still needs to be done and it has to start somewhere. What
we're doing here is starting that work. It may take 10 years, but I
think it'll be much shorter than that. The sooner the better.

-- manu

-- 
Manu Sporny (skype: msporny, twitter: manusporny, G+: +Manu Sporny)
Founder/CEO - Digital Bazaar, Inc.
blog: Web Payments: The Architect, the Sage, and the Moral Voice
https://manu.sporny.org/2015/payments-collaboration/
