Re: wikileaks - Web Architecture and Robustness

On Wed, Dec 1, 2010 at 10:31 PM, Tim Berners-Lee <timbl@w3.org> wrote:
> Karl,
>
> All good points.
>
> I brought up my desire to extend HTTP to allow it to gracefully switch to p2p under stress
> at the last TAG face-to-face meeting.
>
> Tim

Agreed - I've been harping on 'HTTP stress' for a while now, first in
the context of semantic web robustness and then regarding persistent
reference. Short-term DOS attacks are technically similar to long-term
service degradation (loss of server, of domain name, of
infrastructure, etc.); we just don't see the similarity because of the
different time scales involved.

The problem, it seems to me, is that the AWWW "URI owner" and HTTPbis
"authorized response" concepts require a continuous, coordinated chain
of custody from URI creation to URI service, mainly due to DNS. If
that chain gets broken, the URI fails. This is essentially why http:
URIs do not at present serve very believably as "names".

The fix is conceptually simple: separate minting rights from service
rights. The right to mint a URI continues to be determined by
"ownership", but the right to provide service could be granted to
anyone who meets terms dictated by the owner. You might call this a
"license to serve," and it could optionally be an open offer not
requiring coordination. Once obtained, the right to serve could be
exercised completely independently of the later condition of the URI
"owner", the DNS, the Internet, etc.

For example, the "license to serve" for
http://example.com/communist-manifesto might say that anyone, in
perpetuity, can provide service for this URI as long as what's
delivered is the Communist Manifesto by Marx and Engels as published
in 1848 (fine print fine print).
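
In machine-readable form (purely an illustration; none of these fields
or this vocabulary exist anywhere yet), that grant might boil down to
something like:

    License to serve:  http://example.com/communist-manifesto
    Who may serve:     anyone
    Valid:             in perpetuity
    Condition:         the delivered representation must be the 1848
                       Marx & Engels text, e.g. as pinned by a content
                       hash the owner publishes when minting the URI
    Revalidation:      none; no contact with the owner, the origin
                       server, or the DNS is ever required

The point is that a prospective server can check the condition entirely
on its own, which is exactly what lets the grant survive the loss of
the owner, the origin server, and the domain.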

Essentially the URI owner has to *give up* some degree of ownership
(rights) if the URI is to be DOS-resistant.

We already have this "license to serve" feature in a weak form in HTTP
cache control (Expires:, etc.), but it doesn't go far enough.
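
For comparison, here is roughly what the weak form we have today grants
(nothing hypothetical here, just ordinary HTTP caching): a shared cache
that already holds a copy may answer on the origin's behalf, but only
for a bounded time, after which it has to go back to the origin to
revalidate.

    HTTP/1.1 200 OK
    Cache-Control: public, max-age=86400
    Expires: Fri, 03 Dec 2010 14:00:00 GMT

A short-lived, implicitly revocable grant, where the open license
sketched above would be a standing one.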

Open service licenses would enable competition for the privilege of
providing service. P2P would be one option of course, but there are
others. For example, an ISP provisioning Marx & Engels really well
could extract a tax in the form of an advertising load. (Sorry Karl!)

Obviously the issue of attacks would have to be dealt with, but it
looks like Bryan et al. and many others are all over this. (thanks
Mark N)

Hey, the terms of the license might even be provided in
machine-readable form, and made discoverable via Link: header or
/.well-known/host-meta or <link> or RDFa...
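
Again just as a sketch (the "serve-license" relation below is invented,
not registered anywhere), discovery could be as simple as the origin,
or any current mirror, attaching

    Link: <http://example.com/licenses/communist-manifesto>; rel="serve-license"

to responses for http://example.com/communist-manifesto, with the
license document at that URI carrying terms along the lines of the ones
sketched above.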

Jonathan

> On 2010-12-01, at 20:02, Karl Dubost wrote:
>
>> There's a recent blog post from Ethan Zuckerman about Amazon and wikileaks.
>>
>> Summary: I will not introduce wikileaks; you should know what is happening, unless you have lived under a rock these last few weeks. Wikileaks has been hit by several massive DDoS attacks in the last few days. They decided to switch the hosting of their assets from their servers in Sweden to the cloud service provided by Amazon. But Amazon decided to unplug wikileaks, and they had to go back to their own servers.
>>
>> Why I'm talking about this:
>> It is important to see how the Web architecture, with its strengths and weaknesses, behaves when the stakes are very high. There are a few things in play:
>>
>> * Political
>> * Law
>> * Technical robustness
>> * Information flow
>>
>> As Ethan mentioned, the documents themselves are distributed through bittorrent and so were not really affected by the DDoS, but the Web presence definitely was. I was then wondering which parts of Web architecture would need to be improved or modified to be as robust as bittorrent in such circumstances without losing the benefits of URIs.
>>
>>    In …My heart’s in Accra » If Amazon has silenced Wikileaks…
>>    At http://www.ethanzuckerman.com/blog/2010/12/01/if-amazon-has-silenced-wikileaks/
>>
>>    Update: It’s worth mentioning that Wikileaks is
>>    using peer to peer networks to distribute the
>>    actual cables. DDoS may be effective in removing
>>    their web presence, but it’s going to have a much
>>    harder time removing the sensitive material from
>>    the internet. The DDoS attacks are actually a
>>    useful reminder that we still don’t have a good
>>    way to serve web sites on a purely peer to peer
>>    architecture. That would be one response to the
>>    problems of consolidation I’m talking about here…
>>
>>
>> --
>> Karl Dubost - http://dev.opera.com/
>> Developer Relations & Tools, Opera Software
>>
>>
>>
>
>
>

Received on Thursday, 2 December 2010 14:12:19 UTC