- From: Melvin Carvalho <melvincarvalho@gmail.com>
- Date: Fri, 14 May 2021 15:57:25 +0200
- To: public-rww <public-rww@w3.org>
- Cc: Tim Berners-Lee <timbl@w3.org>, "nathan@webr3.org" <nathan@webr3.org>
- Message-ID: <CAKaEYh+5AzGLnzv68nsoz85zaVGYaP-3Hb+YJRD6JkcYjiEB+Q@mail.gmail.com>
For years in this group (though less actively recently) we've been exploring ways to read and write to the web in a standards-based way. The foundational principle was that the first browser was a browser/editor, and that both reading and writing to the web should be possible, preferably using a standards-based approach.

Fundamentally, writing is a more difficult problem than reading, because inevitably you want to be able to control who writes to what, in order to preserve a degree of continuity. This has led to the concept of what I'll loosely call web access control (though there's also the capability-based approach), which in turn required work to be done on (web) identity, users, and groups.

The standards-based approach to declarative data, with different agents operating on it in a linked way, has started to take some shape, including with the Solid project, and I think approximates what timbl has branded, at various times, as web 3.0 / the semantic web:

https://en.wikipedia.org/wiki/Semantic_Web#Web_3.0

So far, so good. However Solid, and even the web to a degree, is something of an ephemeral web, rather than having both a temporal and a spatial aspect to it. I suppose this was by design, and in line with the so-called "principle of least power".

The challenge with building multi-agent systems on a semantic, linked, access-controlled web is that they lack robustness over time. This makes them hard to compete with centralized servers. You run an agent (for those few of us that have built them), and it will sit on your desktop, or a server, or, if you can compile it, on your phone, and interact with the web of linked data, but in a quite ephemeral way. Turn your machine off and the agent is off, soon to be forgotten except as a missing piece of functionality. People will forget where each agent was running, or what it does, and there's nothing to handle operation in time, leading to race conditions, lack of sync, and even infinite loops on the network.

While some temporal aspects are built into web standards, such as ETags and Memento, as well as various time vocabularies and version control systems, I think we'll all agree that they are hard to work with, and from my experience they also lack robustness. (A small sketch of the kind of ETag-based coordination I mean follows below.)

Timbl wrote a very interesting design note on this matter called Paper Trail:

https://www.w3.org/DesignIssues/PaperTrail

In it he talks about the evolution of documents over time, through reading and writing, and how you can keep a paper trail of that. I think it's quite a visionary work which anticipates things that came after it, such as public blockchains. I think the paper trail concept is something that has yet to be fully (or perhaps even partially) realized.

Now (in 2021) public blockchains are an established technology. In particular they act as robust timestamp servers on the internet, which can provide a heartbeat to web-based systems, whether sites, servers, or, as described above, agents, which can then operate over time and have themselves anchored in external systems that can reasonably be expected to be around for at least several years (the more unimpaired ones, at least). A second sketch below shows what such anchoring might look like. This enables agents to develop not only over the web of data, but also to evolve in time, at web scale.
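To make the lost-update problem concrete, here is a minimal sketch of an ETag-conditional write. The resource URL is hypothetical, and I'm assuming a server that honours If-Match (per RFC 7232); this isn't prescribing any particular API, just illustrating the coordination that ephemeral agents currently have to do by hand:

```python
# Minimal sketch: using HTTP ETags to avoid lost updates on a
# read-write resource. The URL is hypothetical; any server that
# honours If-Match (RFC 7232) behaves this way.
import urllib.request
import urllib.error

RESOURCE = "https://example.org/data/notes.ttl"  # hypothetical resource

# Read the current representation and remember its ETag
# (assumes the server sends one).
with urllib.request.urlopen(RESOURCE) as resp:
    etag = resp.headers.get("ETag")
    body = resp.read()

new_body = body + b"\n# appended by my agent\n"

# Write back only if nobody else changed it in the meantime.
req = urllib.request.Request(
    RESOURCE,
    data=new_body,
    method="PUT",
    headers={"If-Match": etag, "Content-Type": "text/turtle"},
)
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    if e.code == 412:
        # Precondition failed: another agent wrote first.
        # Re-read, merge, retry -- exactly the sync problem
        # described above.
        pass
    else:
        raise
```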
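And here is a sketch of what anchoring an agent's state to a public chain as a timestamp might look like. I'm using the OpenTimestamps client purely as one concrete possibility (it assumes the `ots` CLI is installed); any timestamping scheme with similar properties would do:

```python
# Sketch of "anchoring" an agent's state in a public chain as a
# timestamp. OpenTimestamps is one concrete option, not the only one.
import hashlib
import subprocess

STATE_FILE = "agent-state.json"  # hypothetical agent state

# The digest is what effectively gets committed to the chain.
with open(STATE_FILE, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print("sha256:", digest)

# `ots stamp` writes agent-state.json.ots, a proof that this exact
# state existed no later than the anchoring block's timestamp.
subprocess.run(["ots", "stamp", STATE_FILE], check=True)

# Later, anyone can check the proof against the chain:
#   ots verify agent-state.json.ots
```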
This adds a quality of temporal robustness: multi-agent systems that can operate in both time (history) and space (data), together with their own source code, which can evolve too. A functioning read-write web with properly functioning multi-agent systems seems to me to be an evolution of the (read-write) web, in line with the design principles that informed the original web: universality, decentralization, modularity, simplicity, tolerance, and the principle of least power.

Since web 3.0 is branded as the semantic web, a temporal RWW would seem to build on that, and it's what I'm loosely calling "web 4.0": a backwards-compatible web including semantic agents that are time-aware, and hence robust enough to run real-world applications and interact with each other. I got the idea for this term from neuroscientist and programmer Dr. Maxim Orlovsky, who is also developing multi-agent systems within the "RGB" project. It would seem to be a nice moniker, but I've cc'd timbl on this in case he disapproves (or approves!).

I have started working on the first of these agents, and it is going very well. Over time I will hopefully share libraries, frameworks, apps, and documentation/specs that will show the exact way in which read-write agents can evolve in history.

My first system is what I call "web-scale version control" (thanks to Nathan for that term). What it will do is allow agents and systems to commit simultaneously to both a VCS and a blockchain, in order to create a robust "super commit", allowing a number of side effects such as auditable history, prevention of race conditions, reconstruction from genesis, continuously deployed evolution, the ability to run autonomously, and more. (A rough sketch of how such a commit might look is appended after the signature.) In this way you can commit an agent's code and state without relying on a centralized service like GitHub, and can easily move, or restart, on another VCS or server.

This can be used to create robust multi-agent systems that can run and operate against the RWW as it exists today, thereby creating new functionality. I'll be releasing code, docs, and apps over time, and hopefully a framework, so that others can easily create an agent in what will be (hopefully) a multi-agent read-write web.

If you've got this far, thanks for reading! If you have any thoughts, ideas, or use cases, or wish to collaborate, feel free to reply, or send a message off-list.

Best

Melvin

cc: timbl
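P.S. As promised above, a rough illustration of the "super commit" idea. The details here are my assumptions, not a spec: pair an ordinary git commit with a public-chain timestamp of the commit id (again using OpenTimestamps as just one possible anchoring mechanism, assuming the `ots` CLI), so the history can be verified independently of any one host:

```python
# Hedged sketch of a "super commit": a normal VCS commit paired with
# a public-chain timestamp of the commit id. The mechanism shown is
# an assumption for illustration, not a prescribed design.
import subprocess

def super_commit(message: str) -> str:
    """Commit to git, then anchor the resulting commit id on-chain."""
    # 1. Ordinary VCS commit.
    subprocess.run(["git", "commit", "-am", message], check=True)
    commit_id = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    # 2. Anchor the commit id: write it to a file and timestamp
    #    that file (assumes the `ots` CLI is installed).
    with open("HEAD.commit", "w") as f:
        f.write(commit_id + "\n")
    subprocess.run(["ots", "stamp", "HEAD.commit"], check=True)

    # The pair (commit_id, HEAD.commit.ots) is the "super commit":
    # the repo can move to any host, and the history can still be
    # verified against the chain, independently of e.g. GitHub.
    return commit_id
```

The point of the pairing is that the proof travels with the repository: mirror or move the repo to any VCS host, and the audit trail still checks out against the chain.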
Received on Friday, 14 May 2021 13:57:50 UTC