Re: Evolution of the RWW -- a Temporal Web -- Towards Web 4.0 (?)

On Fri, 14 May 2021 at 18:25, Timothy Holborn <timothy.holborn@gmail.com>
wrote:

> Is it easier / better / faster - to do work as the 'solution providers' or
> 'the enemy'?
>
> If we went about defining 'specs' for 'mass personalised propaganda' 'ai
> capabilities', distorting the realities of all persons; to mute 'liberalised
> democracies' world-wide,
>
> would that sort of plan get more W3C member support more immediately?  If
> so, could that more rapidly result in an outcome that could consequently
> support 'freedom of thought', etc.?
>
> thoughts?
>
> IMO, it seems the 'evil' capability is far easier to fund / bring about than
> the opposite.  It's got fairly nasty implications for the value people place
> on the future of kids, but given the medium - no one goes to prison, right -
> so, it's all (economically) positive??? for a quarter, at least...?
>

The current model of the web is one of data silos, with centralized
agents that live on central servers

So they act as gatekeepers to the read-write web

A more decentralized approach would be to modularize the concept of a
server into just an agent that acts on data, both reading and writing
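
As a rough sketch (the URL and the transform step here are placeholders,
not any particular spec), such an agent could be as small as this:

    # Minimal read-write agent sketch: fetch a resource, act on it,
    # and write the result back.  Assumes an HTTP server that accepts PUT.
    import requests

    RESOURCE = "https://example.org/data/notes"  # placeholder

    def transform(body: str) -> str:
        # The "act" step; a real agent would parse and reason over the data
        return body + "\n# visited by agent\n"

    def run_once():
        r = requests.get(RESOURCE)
        r.raise_for_status()
        requests.put(RESOURCE, data=transform(r.text)).raise_for_status()

    if __name__ == "__main__":
        run_once()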

The issue is that, to date, such agents have been largely impractical to
keep running

They don't live as long as the central server model.  Perhaps the most
famous agent is the Googlebot, and even that has central control points

Standards for the read write web could create a more temporal, long-lived
set of agents interacting with data in a more distributed way

It seems a better, more ethical way, because it brings about more choice
and digital agents that can work for the user without the motive to track
the user, to profit from the user, or even to sell them something

I think the web was designed to be distributed and multi-agent, and we've
not seen that part of the web emerge yet

As with any new technology, there will be ethical considerations.  I would
like to bake some kind of ethical code into agents, but that, in itself, is
a complex proposition.  I've spoken to experts in moral psychology on this,
and it's hard even to define.  I would personally also like to see
autonomous agents that develop emergent properties of their own accord.
One part of this is that any web-scale temporal component, or witness, must
be unimpaired

The basic principle is that a decentralized web acts in conjunction with
the centralized web to offer the user more options and more choice.  This
diversity will allow ethical solutions to at least exist and then,
hopefully, thrive!


>
> timothy holborn.
>
> On Sat, 15 May 2021 at 00:17, Timothy Holborn <timothy.holborn@gmail.com>
> wrote:
>
>> I initially want to celebrate Melvin...  His mind's output is meaningful
>> to so many...
>>
>> Nathan --> :) hope you're well :)
>>
>> otherwise - broadly, entirely supported...  but lots more 'thinking'
>> required...
>>
>> more later.
>>
>> Timothy Holborn.
>>
>> On Fri, 14 May 2021 at 23:58, Melvin Carvalho <melvincarvalho@gmail.com>
>> wrote:
>>
>>> For years in this group (tho less actively recently) we've been
>>> exploring ways to read and write to the web, in a standards-based way
>>>
>>> The foundational principle was that the first browser was a
>>> browser/editor and that both reading and writing to the web should be
>>> possible, preferably using a standards-based approach
>>>
>>> Fundamentally, writing is a more difficult problem than reading, because
>>> inevitably you want to be able to control who writes to what, in order to
>>> preserve a degree of continuity
>>>
>>> This has led to the concept of what I'll loosely call web access
>>> control (tho there's also the capability-based approach), which in turn
>>> required work to be done on (web) identity, users, and groups
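>>>
>>> For instance, a single Web Access Control authorization is just a small
>>> RDF graph (a sketch using the acl: vocabulary and rdflib; the example.org
>>> URIs are placeholders):
>>>
>>>     from rdflib import Graph, Namespace, URIRef
>>>     from rdflib.namespace import RDF
>>>
>>>     ACL = Namespace("http://www.w3.org/ns/auth/acl#")
>>>
>>>     g = Graph()
>>>     # One authorization: this agent may read and write this resource
>>>     auth = URIRef("https://example.org/docs/notes.acl#owner")
>>>     g.add((auth, RDF.type, ACL.Authorization))
>>>     g.add((auth, ACL.agent, URIRef("https://example.org/profile#me")))
>>>     g.add((auth, ACL.accessTo, URIRef("https://example.org/docs/notes")))
>>>     g.add((auth, ACL.mode, ACL.Read))
>>>     g.add((auth, ACL.mode, ACL.Write))
>>>     print(g.serialize(format="turtle"))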
>>>
>>> The standards-based approach to declarative data, with different agents
>>> operating on it in a linked way, has started to take some shape, including
>>> with the Solid project, and I think approximates what timbl has branded,
>>> at various times, as web 3.0 / the semantic web
>>>
>>> https://en.wikipedia.org/wiki/Semantic_Web#Web_3.0
>>>
>>> So far, so good
>>>
>>> However Solid, and even the web to a degree, is something of an
>>> ephemeral web, rather than having both a temporal and a spatial aspect to
>>> it.  I suppose this was by design and in line with the so-called "principle
>>> of least power"
>>>
>>> The challenge with building multi-agent systems on a semantic, linked,
>>> access-controlled web is that they lack robustness over time.  This makes
>>> it hard for them to compete with the centralized server.  You run an agent
>>> (for those few of us that have built them) and it'll sit on your desktop,
>>> or a server, or, if you can compile it, on your phone
>>>
>>> And it interacts with the web of linked data, but in a quite ephemeral
>>> way.  Turn your machine off, and the agent is off, soon to be forgotten
>>> except as a missing piece of functionality.  People will forget where each
>>> agent was running, or what it does, and there's nothing to handle operation
>>> in time, leading to race conditions, lack of sync, and even network
>>> infinite loops
>>>
>>> While some temporal aspects are built into web standards, such as ETags
>>> and Memento, as well as various time vocabs and VCSs, I think we'll all
>>> agree that they are hard to work with.  And, in my experience, they also
>>> lack robustness
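>>>
>>> (For reference, both mechanisms are plain HTTP.  A sketch, assuming a
>>> server that honours If-Match and, for the Memento part, acts as a
>>> TimeGate; the URL is a placeholder:)
>>>
>>>     import requests
>>>
>>>     url = "https://example.org/doc"  # placeholder
>>>
>>>     # Read the resource and remember its version tag
>>>     r = requests.get(url)
>>>     etag = r.headers["ETag"]  # assumes the server sends one
>>>
>>>     # Conditional write: the server answers 412 if someone
>>>     # else wrote to the resource in between
>>>     requests.put(url, data="new body", headers={"If-Match": etag})
>>>
>>>     # Memento (RFC 7089): ask for the state at a past moment
>>>     past = requests.get(url, headers={
>>>         "Accept-Datetime": "Fri, 14 May 2021 00:00:00 GMT"})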
>>>
>>> Timbl wrote a very interesting design note on this matter called Paper
>>> Trail
>>>
>>> https://www.w3.org/DesignIssues/PaperTrail
>>>
>>> In it he talks about the evolution of documents over time, through
>>> reading and writing, and how you can keep a paper trail of that.  I think
>>> it's quite a visionary work which anticipates things that came after it,
>>> such as public block chains
>>>
>>> I think the paper trail concept is something that has yet to be fully
>>> (or perhaps even partially) realized
>>>
>>> Now (in 2021) public block chains are an established technology.  In
>>> particular, they act as robust timestamp servers on the internet, which can
>>> provide a heartbeat to web-based systems: sites, servers, or, as described
>>> above, agents, which can then operate over time and have themselves
>>> anchored in external systems that can reasonably be expected to be around
>>> for at least several years.  The more unimpaired ones, at least
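>>>
>>> The timestamping side reduces to hashing a snapshot of an agent's state
>>> and anchoring that digest externally (a sketch; anchor_digest is a
>>> hypothetical stand-in for whatever timestamp service or chain is used):
>>>
>>>     import hashlib, json, time
>>>
>>>     def anchor_digest(digest: str) -> None:
>>>         # Hypothetical: submit the digest to a public timestamp
>>>         # server / block chain, proving the state existed by now
>>>         print(f"anchored {digest} at {time.time()}")
>>>
>>>     state = {"agent": "example", "counter": 42}  # placeholder state
>>>     snapshot = json.dumps(state, sort_keys=True).encode()
>>>     anchor_digest(hashlib.sha256(snapshot).hexdigest())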
>>>
>>> This enables agents not only to develop over the web of data, but also
>>> to evolve in time, at web scale.  It adds a quality of temporal
>>> robustness to multi-agent systems that can operate in both time (history)
>>> and space (data), together with their own source code, which can evolve too
>>>
>>> A functioning read-write web with properly functioning multi-agent
>>> systems seems to me to be an evolution of the (read-write) web, in line
>>> with the design principles that informed the original web, i.e.
>>> universality, decentralization, modularity, simplicity, tolerance, and the
>>> principle of least power
>>>
>>> Since web 3.0 is branded as the semantic web, a temporal RWW would seem
>>> to build on that, and it's what I'm loosely calling "web 4.0": a
>>> backwards-compatible web including semantic agents that are time-aware,
>>> and hence robust enough to run real-world applications and interact with
>>> each other.  I got the idea for this term from neuroscientist and
>>> programmer Dr. Maxim Orlovsky, who is also developing multi-agent
>>> systems within the "RGB" project.  It would seem to be a nice moniker, but
>>> I've cc'd timbl on this in case he disapproves (or approves!)
>>>
>>> I have started working on the first of these agents, and it is going
>>> very well.  Over time I will hopefully share libraries, frameworks, apps
>>> and documentation/specs that will show the exact way in which read-write
>>> agents can evolve in history
>>>
>>> My first system is what I call "web-scale version control" (thanks to
>>> Nathan for that term).  What it will do is allow agents and systems to
>>> commit simultaneously to both a VCS and a block chain, in order to create a
>>> robust "super commit" enabling a number of side-effects such as auditable
>>> history, prevention of race conditions, reconstruction from genesis,
>>> continuously deployed evolution, the ability to run autonomously, and more.
>>> In this way you can commit an agent's code and state without relying on a
>>> centralized service like GitHub, and can easily move, or restart on,
>>> another VCS or server
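>>>
>>> As a rough sketch of the "super commit" (the git calls are standard;
>>> anchor_commit is a hypothetical stand-in for the block chain side):
>>>
>>>     import subprocess
>>>
>>>     def anchor_commit(commit_hash: str) -> None:
>>>         # Hypothetical: record the commit hash in a block chain /
>>>         # timestamp service, making the history externally auditable
>>>         print("anchored", commit_hash)
>>>
>>>     def super_commit(message: str) -> str:
>>>         # Ordinary VCS commit of the agent's code and state...
>>>         subprocess.run(["git", "add", "-A"], check=True)
>>>         subprocess.run(["git", "commit", "-m", message], check=True)
>>>         commit = subprocess.run(
>>>             ["git", "rev-parse", "HEAD"],
>>>             capture_output=True, text=True, check=True).stdout.strip()
>>>         # ...then anchor it externally, so the agent can be
>>>         # reconstructed from genesis without trusting any one host
>>>         anchor_commit(commit)
>>>         return commit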
>>>
>>> This can be used to create robust multi-agent systems that can run and
>>> operate against the RWW as it exists today, thereby creating new
>>> functionality.  I'll be releasing code, docs, and apps over time, and
>>> hopefully a framework, so that others can easily create an agent in what
>>> will be (hopefully) a multi-agent read-write web
>>>
>>> If you've got this far, thanks for reading!
>>>
>>> If you have any thoughts, ideas or use-cases, or wish to collaborate,
>>> feel free to reply, or send a message off-list
>>>
>>> Best
>>> Melvin
>>>
>>> cc: timbl
>>>
>>

Received on Friday, 14 May 2021 21:57:29 UTC