
Re: Thoughtful piece on the costs of the siloing of social media

From: Henry Story <henry.story@bblfish.net>
Date: Fri, 8 Jan 2016 12:53:25 +0100
Cc: Melvin Carvalho <melvincarvalho@gmail.com>, Henry S. Thompson <ht@inf.ed.ac.uk>, TAG List <www-tag@w3.org>, Philip Sheldrake <pcs1g15@soton.ac.uk>
Message-Id: <8B6AE887-8FAD-41F1-8EBE-1516365E63A0@bblfish.net>
To: Mark Nottingham <mnot@mnot.net>

> On 8 Jan 2016, at 04:09, Mark Nottingham <mnot@mnot.net> wrote:
> 
> [ I remember seeing that article somewhere other than the Guardian quite a few months ago, but forget where; anyone? ]
> 
> Personally, I'm very interested, but the Web as currently designed and implemented heavily encourages centralisation, and changing it is likely harder than just starting something new.
> 
> Some related thoughts here:
>  https://www.mnot.net/blog/2015/08/18/distributed_http

Very interesting read. Thanks for the link to Brewster Kahle's talk at
the Chaos Communication Camp, which is a good way into this topic:
https://media.ccc.de/v/camp2015-6938-locking_the_web_open_call_for_a_distributed_web

Here's a way of thinking about the centralisation problem in layers that I have
found helpful recently (I'll get to Brewster's decentralised view right after).
We have three layers:

 1) Internet layer, IPv4/6 (+1): any machine can talk to any machine to
    retrieve data. A pure p2p layer.
 2) Web of Documents (+1): any document can link to any other document.
    A pure p2p layer.
 3) Web Applications (-1): most data-driven apps are not cross-domain.
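The contrast between layers 2 and 3 can be sketched in a few lines of Python. All the identifiers below are hypothetical; the point is just that a document link carries a global address, while a typical application identifier only makes sense inside one service's silo:

```python
from urllib.parse import urlparse

# Layer 2, the Web of documents: a link carries a global address,
# so a document on any host can point at a document on any other host.
doc_link = "https://example.org/articles/42#section-3"

# Layer 3, a typical web application: the identifier is only meaningful
# inside one service's database and cannot be followed from outside.
app_record = {"user_id": 98721, "service": "some-social-network"}

def is_globally_linkable(ref) -> bool:
    """A reference is linkable across origins iff it is an absolute URI."""
    if not isinstance(ref, str):
        return False
    parsed = urlparse(ref)
    return bool(parsed.scheme and parsed.netloc)

print(is_globally_linkable(doc_link))               # True
print(is_globally_linkable(app_record["user_id"]))  # False
```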


It is at layer 3 that the problem is currently being felt, and to many people
this may seem very strange: how can you have decentralisation at the lower layers
but not at the higher ones? How come bytes can flow around the Internet in a
peer-to-peer manner, but data cannot? How come there are so many services, in any
number of categories, that don't interoperate?

For example: OuiShare, the European sharing-economy conference, together with
collaborators around Europe, put together a list of the tools that their
"connectors" use:

  https://trello.com/b/qPtU1EbQ/ouishare-collaboration-tools

There are 13 categories of tools, and hardly any of them interoperate. Each
time people want to work together they need to start from scratch and find a new
tool that they all agree to use. This has a huge cost.

So we don't just have centralisation: we also have fragmentation.
That is, we don't have linkability in the data world. Or rather, we only have
linkability at the data layer within a single service, except for a
few cases such as RSS feeds.

  We have hyper text but not hyper data.

( Well, actually we are working on hyperdata-based apps:
  - High-level concept: http://hi-project.org/
  - Social Linked Data (Solid) spec: https://github.com/solid/solid-spec
)
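To make "hyperdata" concrete, here is a minimal sketch in the style of Linked Data / JSON-LD, with entirely hypothetical identifiers: a data value is itself a global link, so a hyperdata-aware client can dereference it and keep following the data across domains, just as a browser follows hypertext links:

```python
# A profile document whose "knows" value is a URI on a *different*
# domain -- the data equivalent of a hypertext link.
profile = {
    "@id": "https://alice.example/profile#me",
    "name": "Alice",
    "knows": {"@id": "https://bob.example/card#me"},
}

def follow(node, prop):
    """Return the URI a property points at, if it is a link node."""
    value = node.get(prop)
    if isinstance(value, dict) and "@id" in value:
        return value["@id"]
    return None

print(follow(profile, "knows"))  # https://bob.example/card#me
```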

Now what Brewster Kahle wants is something more than this. He is thinking of
p2p distribution of resources, so that they can be spread around and duplicated
across servers. I don't think of this as incompatible with the current web: it
just requires a new resource-discovery protocol ( something like BitTorrent )
and new URLs for those resources, which could in any case map to HTTP URLs.
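The core idea of such a discovery protocol is content addressing: name a resource by the hash of its bytes, so that any mirror can serve it, and map that name back onto an ordinary HTTP URL. A minimal sketch (the gateway URL pattern is hypothetical):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Name a resource by the SHA-256 hash of its content."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def gateway_url(address: str, gateway: str = "https://mirror.example") -> str:
    # Any number of gateways could serve the same immutable resource;
    # the address, not the host, identifies it.
    return f"{gateway}/{address}"

page = b"<html>...</html>"
addr = content_address(page)
print(gateway_url(addr))
```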

If you listen to Brewster's answers to the questions in the CCC talk, it
seems he is still thinking very much of a world of documents. But what he
should really want, given his examples of large centralised providers,
is a distributed, replicated _data_ web. Then the client could actually follow
the data around and build up an interface for the user's particular needs
( http://hi-project.org/ ).

Given that the Semantic Web itself is based on URIs and so is protocol agnostic,
there is no problem connecting data published over http, https, onion, or other
protocols. Logically this has already been dealt with by the W3C.
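A minimal sketch of that protocol agnosticism, with hypothetical identifiers: URIs are treated as opaque global names, so triples whose subjects and objects sit behind different URI schemes merge into one graph, and a query does not care which scheme each name uses:

```python
# One graph mixing URIs with different schemes.
triples = {
    ("https://alice.example/#me", "knows", "http://bob.example/#me"),
    ("http://bob.example/#me", "knows", "ftp://archive.example/card#i"),
}

def who_knows(graph, person):
    """Follow 'knows' links regardless of the URI scheme of either end."""
    return {o for (s, p, o) in graph if s == person and p == "knows"}

print(who_knows(triples, "http://bob.example/#me"))
```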

More intriguing is how one could have distributed, versioned data where some of
the data is access controlled. The data would have to be encrypted; but if one
gave someone the key, that person could pass the key on to anyone else. Then
again, that is perhaps no more of a problem than someone copying and
republishing a document that is access controlled.
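The access-control idea can be sketched as follows: replicas store only ciphertext, and possession of the key is what grants access. The XOR cipher below is a deliberately toy stand-in used only to illustrate the key-sharing problem; a real system would use an authenticated cipher such as AES-GCM:

```python
import secrets
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key. NOT secure."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = secrets.token_bytes(16)
plaintext = b"access-controlled record"
ciphertext = xor_cipher(plaintext, key)   # what the replicas store

# Any replica can hold the ciphertext; only key holders can read it...
assert xor_cipher(ciphertext, key) == plaintext
# ...but a key holder can hand the key to anyone, just as a reader can
# republish a decrypted document.
```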

So in summary:
  - the problem of centralisation/fragmentation is occurring at the data layer
  - the answer to that is linked data
  - building replicated, versioned data protocols
     + will make linked data even more important
     + is not incompatible with current web architecture
  
Henry Story
http://co-operating.systems/

> 
> Cheers,
> 
> 
>> On 8 Jan 2016, at 11:55 am, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
>> 
>> 
>> 
>> On 5 January 2016 at 20:51, Henry S. Thompson <ht@inf.ed.ac.uk> wrote:
>> http://www.theguardian.com/technology/2015/dec/29/irans-blogfather-facebook-instagram-and-twitter-are-killing-the-web
>> 
>> This is a really interesting piece, thanks for sharing.
>> 
>> The web does seem to have become more centralized in the last few years.  I don't know how much of this is architectural, and how much behavioral.
>> 
>> The architectural foundations of the web as a cross origin document (and data) space, are I think, quite strong, leading to a good degree of decentralization.  I don't know why the web may be becoming more centralized; I once heard someone say "no matter how decentralized you design a system, centralization creeps in through the back door".  
>> 
>> My personal preference would be to see a healthy centralized and healthy decentralized element of the web competing with each other and offering greater user choice.  But we don't seem to live in that world, right now, at least.  
>> 
>> One factor, imho, is that there are probably orders of magnitude more people working on centralized solutions than on decentralized.  Also decentralized solutions are fragmented, due to design decisions that get in the way of interop (tho interop is hard at the best of times).
>> 
>> I'm not sure what the TAG can do about this, or even how many on the TAG list are still interested in a decentralized web (tho I know Tim is).  One thing that may be valuable is guidelines for developers building decentralized solutions on how to prevent fragmentation, and how to encourage interop.  It's a difficult problem to talk about, let alone to solve!
>> 
>> 
>> 
>> ht
>> --
>>       Henry S. Thompson, School of Informatics, University of Edinburgh
>>      10 Crichton Street, Edinburgh EH8 9AB, SCOTLAND -- (44) 131 650-4440
>>                Fax: (44) 131 650-4587, e-mail: ht@inf.ed.ac.uk
>>                       URL: http://www.ltg.ed.ac.uk/~ht/
>> [mail from me _always_ has a .sig like this -- mail without it is forged spam]
>> 
>> 
> 
> --
> Mark Nottingham   https://www.mnot.net/
> 
> 
Received on Friday, 8 January 2016 11:53:57 UTC
