Re: Evolution of the RWW -- a Temporal Web -- Towards Web 4.0 (?)

Hi Melvin,

Melvin Carvalho wrote:
>
>
> On Sat, 15 May 2021 at 19:03, Miles Fidelman 
> <mfidelman@meetinghouse.net <mailto:mfidelman@meetinghouse.net>> wrote:
>
>     Does not IPFS pretty much create a read/write web?
>
>
> Yes, I would say that it does
>
> But with slightly different properties
>
> The web is more client-server based, with HTTP URIs and all the 
> awesome scalability and network effects they provide
>
> IPFS is content-addressable, so content could potentially last longer, 
> assuming someone is going to pin your file.  I've used IPFS a lot, 
> especially in the early days, but you may find it network- and 
> resource-intensive; it struggled for me as I added more files, in a 
> way that web servers do not.  There are also differences around 
> headers, MIME types, etc., or at least there were when I was using it 
> a lot
>
> IPFS files are by nature immutable, so write once, with no subsequent 
> writes, unless you're writing to a directory structure
>
> It's an interesting approach, but it is still tiny compared to the 
> HTTP web, which we should also use as a build target
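The write-once property Melvin describes falls directly out of content addressing, and can be illustrated with a toy sketch. This is a simplified illustration only - real IPFS CIDs are multihash/multibase encoded, and the in-memory dict stands in for the DHT and pinning layer:

```python
import hashlib

# Toy content-addressed store. The "address" of a blob is a hash of its
# bytes - loosely analogous to an IPFS CID, though real CIDs are
# multihash/multibase encoded.
store = {}

def put(data: bytes) -> str:
    address = hashlib.sha256(data).hexdigest()
    store[address] = data
    return address

def get(address: str) -> bytes:
    return store[address]

v1 = put(b"doc v1")
v2 = put(b"doc v1.1")

# "Write once": changing the content yields a new address, so the old
# version stays retrievable unchanged under its old address.
assert get(v1) == b"doc v1"
assert v1 != v2
```

Mutability then has to be layered on top, e.g. as a pointer (directory entry, IPNS name) that is updated to the latest address.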
The question is still: where to put the files, if not on a server?  There 
are HTTP-to-IPFS gateways, and various approaches to managing mutability 
(personally, I'm looking at OrbitDB right now).  And there are pinning 
services to provide archiving.

Another approach might be to build on either a DVCS (e.g., Git) or a 
replicated database (e.g., CouchDB).  As long as there's at least one 
repository left, the data remains good.  Or something like NNTP, with 
some big archives.

Something I've been thinking about a LOT.  (I tend to think of it as 
"what's the minimum infrastructure needed to 'publish to the cloud' - 
and can you build it all into a single file?"  Fossil-scm comes pretty 
close.  A marriage between TiddlyWiki and PouchDB might as well.)
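The single-file idea is worth sketching: Fossil keeps an entire repository inside one SQLite database file that can be copied or synced as a unit. A toy version of that pattern (illustrative only, nothing like Fossil's actual schema) looks like:

```python
import hashlib
import sqlite3

# A content-addressed store kept entirely in one SQLite file - the
# whole "repository" can be moved, mailed, or mirrored as a single file.
def open_store(path: str) -> sqlite3.Connection:
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS blobs (hash TEXT PRIMARY KEY, data BLOB)"
    )
    return db

def put(db: sqlite3.Connection, data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    db.execute("INSERT OR IGNORE INTO blobs VALUES (?, ?)", (h, data))
    db.commit()
    return h

def get(db: sqlite3.Connection, h: str) -> bytes:
    return db.execute("SELECT data FROM blobs WHERE hash = ?", (h,)).fetchone()[0]

db = open_store(":memory:")  # pass a filename to get an on-disk single file
addr = put(db, b"my page")
assert get(db, addr) == b"my page"
```

Everything needed to "publish" is then the one file plus something to serve or replicate it.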


>
>     Well.. the ability to register a domain and set up a blog behind
>     it has
>     long been the equivalent of owning a printing press.  IPFS takes
>     us to
>     the point of not needing to host anything at all.  And citing IPFS
>     URIs
>     pretty much provides a paper trail.  Now cataloging, search,
>     persistence
>     - all the librarianship issues still need some work, but ...
>
>
> IPFS really does need a host, because you must pin the files, which is 
> a non-trivial task.  The economies of scale of the HTTP web just make 
> the cost negligible
>
> The Domain Name System is an Achilles heel of the web, though.  These 
> are simply trade-offs
>
> IPFS and a paper trail is an interesting thought.  There are two types 
> of data in this regard: "witness" data and "non-witness" data.  Some 
> of this is explained in this article [1].
>
> Basically, you want something that you can store as data and fish 
> out.  IPFS is designed to do that: enter a hash and get back data.  
> But you also need the witness data, which is harder to do robustly.  
> This is where the new breed of public timestamp servers, which 
> operate at web scale, really, IMHO, offers something that we've not 
> had before.  I appreciate I've not fully justified this claim, and 
> it's a bit hand-wavy, but I'm in part laying the ground for further 
> demonstration, here.
>
> The witness data can track the evolution of something, and the actual 
> data need not live on the timestamp server; that is just the witness.  
> So I think you need both to make a compelling, evolving multi-agent 
> web ecosystem, keeping the non-witness data off the timestamp server
>
> Simple example: doc v1 is on IPFS, and now we make doc v1.1.  How do 
> we keep a paper trail?  Because v1 is immutable, how do we know of the 
> existence of doc v1.1?  That's where you need something to witness the 
> evolution, and report it to those that have an interest ...
>
> Hope that made a bit of sense! :)
>
> [1] 
> https://medium.com/@RubenSomsen/snarks-and-the-future-of-blockchains-55b82012452b
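The v1 → v1.1 paper-trail question above can be sketched as a chain of small witness records, where the document bytes live elsewhere (e.g. pinned on IPFS) and only the record is anchored on a timestamp server. The record structure and names here are hypothetical, not any existing spec:

```python
import hashlib
import json
import time

def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical witness record: the doc itself is fetched by its content
# address; the record just attests that this version exists and names
# its predecessor.
def witness(doc: bytes, prev_record_hash):
    return {
        "doc_hash": sha(doc),        # content address of this version
        "prev": prev_record_hash,    # link to the previous version's record
        "ts": int(time.time()),      # stand-in for a real timestamp proof
    }

def record_hash(record) -> str:
    return sha(json.dumps(record, sort_keys=True).encode())

r1 = witness(b"doc v1", None)
r2 = witness(b"doc v1.1", record_hash(r1))  # v1.1 announces itself via the chain

# A reader following the witness chain can discover v1.1, even though
# the v1 object itself is immutable and says nothing about successors.
assert r2["prev"] == record_hash(r1)
```

The timestamp server's only job is to make the records' existence and ordering publicly verifiable; the heavy data stays wherever it is hosted.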
>
>
>     Miles Fidelman
>
>     Melvin Carvalho wrote:
>     > For years in this group (though less actively recently) we've been
>     > exploring ways to read and write to the web, in a
>     standards-based way
>     >
>     > The foundational principle was that the first browser was a
>     > browser/editor and that both reading and writing to the web
>     should be
>     > possible, preferably using a standards-based approach
>     >
>     > Fundamentally writing is a more difficult problem than reading,
>     > because inevitably you want to be able to control who writes to
>     what,
>     > in order to preserve a degree of continuity
>     >
>     > This has led to the concept of what I'll loosely call web access
>     > control (though there's also the capability-based approach), which
>     in turn
>     > required work to be done on (web) identity, users, and groups
>     >
>     > The standards-based approach to declarative data, with different
>     > agents operating on it, in a linked way, has started to take some
>     > shape, including with the Solid project, and I think
>     approximates to
>     > what timbl has branded, at various times, as web 3.0 / the
>     semantic web
>     >
>     > https://en.wikipedia.org/wiki/Semantic_Web#Web_3.0
>     >
>     > So far, so good
>     >
>     > However Solid, and even the web to a degree, is something of an
>     > ephemeral web, rather than having both a temporal and a spatial
>     > aspect to it.  I suppose this was by design and in line with the
>     > so-called "principle of least power"
>     >
>     > The challenge with building multi-agent systems on a semantic,
>     linked,
>     > access-controlled web is that they lack robustness over time.  This
>     makes
>     > them hard to compete with the centralized server.  You run an agent
>     > (for those few of us that have built them) and then it'll sit on
>     > your desktop, or a server, or if you can compile it, on your phone
>     >
>     > And interact with the web of linked data, but in a quite ephemeral
>     > way.  Turn your machine off, and the agent is off, soon to be
>     > forgotten except for a missing piece of functionality.  People will
>     > forget where each agent was running, or what it does, and there's
>     > nothing to handle operation in time, leading to race conditions,
>     lack
>     > of sync, and even network infinite loops
>     >
>     > While some temporal aspects are built into web standards, such as
>     > ETags and Memento, as well as various time vocabs and VCSs, I think
>     > we'll all agree that they are hard to work with, and from my
>     > experience they also lack robustness
>     >
>     > Timbl wrote a very interesting design note on this matter called
>     Paper
>     > Trail
>     >
>     > https://www.w3.org/DesignIssues/PaperTrail
>     >
>     > In it he talks about the evolution of documents over time, through
>     > reading and writing, and how you can keep a paper trail of that.  I
>     > think it's quite a visionary work which anticipates things that
>     came
>     > after it, such as public blockchains
>     >
>     > I think the paper trail concept is something that has yet to be
>     fully
>     > (or perhaps even partially) realized
>     >
>     > Now (in 2021) public blockchains are an established
>     technology.  In
>     > particular they act as robust timestamp servers on the internet,
>     which
>     > can provide a heartbeat to web-based systems, whether sites,
>     servers,
>     > or, as described before, agents, which can then operate over time
>     and
>     > have themselves anchored in external systems which can
>     reasonably be
>     > expected to be around for at least several years.  The more
>     unimpaired
>     > ones, at least
>     >
>     > This enables agents to start to develop over the web of
>     data and
>     > also evolve in time, at web scale.  Adding a quality of temporal
>     > robustness to multi-agent systems that can operate in both time
>     > (history) and space (data), together with their own source code,
>     which
>     > can evolve too
>     >
>     > A functioning read-write web with properly functioning multi-agent
>     > systems seems to me to be an evolution of the (read-write) web, in
>     > line with the design principles that informed the original web, i.e.
>     > universality, decentralization, modularity, simplicity, tolerance,
>     > and the principle of least power
>     >
>     > Since web 3.0 is branded as the semantic web, a temporal RWW would
>     > seem to build on that, and it's what I'm loosely calling "web 4.0":
>     > a backwards-compatible web including semantic agents that are
>     > time-aware, and hence robust enough to run real-world
>     applications,
>     > and interact with each other.  I got the idea for this term from
>     > neuroscientist and programmer Dr. Maxim Orlovsky, who is also
>     > developing multi-agent systems within the "RGB"
>     project.
>     > It would seem to be a nice moniker, but I've cc'd timbl on this in
>     > case he disapproves (or approves!)
>     >
>     > I have started working on the first of these agents, and it is
>     going
>     > very well.  Over time I will hopefully share libraries, frameworks,
>     > apps and documentation/specs that will show the exact way in which
>     > read-write agents can evolve in history
>     >
>     > My first system is what I call "web-scale version control"
>     (thanks to
>     > Nathan for that term).  What it will do is allow agents and
>     systems
>     > to commit simultaneously to both a VCS and a blockchain, in
>     order to
>     > create a robust "super commit", enabling a number of side effects
>     such
>     > as auditable history, prevention of race conditions,
>     reconstruction
>     > from genesis, continuously deployed evolution, the ability to run
>     > autonomously, and more.  In this way you can commit an agent's
>     code and
>     > state without relying on a centralized service like GitHub, and can
>     > easily move, or restart on another VCS or server
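The "super commit" described above could be sketched as pairing a VCS commit id with a public timestamp anchor. Everything here is hypothetical - the record layout, the fake commit id, and `anchor_to_timestamp_server`, which mocks submitting a digest to a real timestamp service:

```python
import hashlib
import time

def anchor_to_timestamp_server(digest: str) -> dict:
    # Mock proof; a real service would return a verifiable attestation
    # that this digest existed at this time.
    return {"digest": digest, "anchored_at": int(time.time())}

# Hypothetical "super commit": binds an agent's code (the VCS commit)
# and its state to a point in time, independent of any hosting service.
def super_commit(vcs_commit_id: str, agent_state: bytes) -> dict:
    state_hash = hashlib.sha256(agent_state).hexdigest()
    combined = hashlib.sha256((vcs_commit_id + state_hash).encode()).hexdigest()
    return {
        "vcs_commit": vcs_commit_id,  # lives in Git or any other VCS
        "state_hash": state_hash,     # agent state, stored wherever convenient
        "proof": anchor_to_timestamp_server(combined),
    }

sc = super_commit("3f8c2ab", b"agent state snapshot")  # fake commit id

# Replaying super commits in order is what would allow reconstruction
# from genesis: each proof fixes code + state at a verifiable moment.
assert sc["proof"]["digest"] == hashlib.sha256(
    (sc["vcs_commit"] + sc["state_hash"]).encode()).hexdigest()
```

Because the anchor is independent of GitHub or any one server, the same chain of records can be re-hosted on another VCS without losing its history.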
>     >
>     > This can be used to create robust multi-agent systems that can
>     run and
>     > operate against the RWW as it exists today, thereby creating new
>     > functionality.  I'll be releasing code, docs, and apps over time,
>     and
>     > hopefully a framework, so that others can easily create an agent in
>     > what will be (hopefully) a multi-agent read-write web
>     >
>     > If you've got this far, thanks for reading!
>     >
>     > If you have any thoughts, ideas or use-cases, or wish to
>     collaborate,
>     > feel free to reply, or send a message off-list
>     >
>     > Best
>     > Melvin
>     >
>     > cc: timbl
>
>
>     -- 
>     In theory, there is no difference between theory and practice.
>     In practice, there is.  .... Yogi Berra
>
>     Theory is when you know everything but nothing works.
>     Practice is when everything works but no one knows why.
>     In our lab, theory and practice are combined:
>     nothing works and no one knows why.  ... unknown
>


-- 
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown

Received on Saturday, 15 May 2021 20:45:58 UTC