Re: Evolution of the RWW -- a Temporal Web -- Towards Web 4.0 (?)

On 5/17/21 4:59 PM, Nathan Rixham wrote:
> On Mon, May 17, 2021 at 8:52 PM Henry Story <henry.story@bblfish.net> wrote:
>
>     > On 17. May 2021, at 20:53, Melvin Carvalho <melvincarvalho@gmail.com> wrote:
>     > On Mon, 17 May 2021 at 19:50, Henry Story <henry.story@bblfish.net> wrote:
>     > > On 17. May 2021, at 03:33, Nathan Rixham <nathan@webr3.org> wrote:
>     > >
>     > >
>     > > A loosely coupled, immutable, timestamped record of state
>     changes allows deployment of resources to be broadly tech- and
>     protocol-agnostic. For example, several previous states of a
>     document could be stored on IPFS, with the stateless protocol HTTP
>     providing the most recent state and a chain exposing timestamped
>     pointers to the previous states.
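
A rough sketch of what such a record might look like -- hypothetical
names, with an IPFS CID standing in for the content address and plain
Scala case classes rather than any particular implementation:

  // Append-only, timestamped record of state changes: HTTP serves the
  // latest state, each previous state lives at a content address
  // (e.g. an IPFS CID), and the record keeps timestamped pointers to them.
  import java.time.Instant

  final case class VersionPointer(
    contentAddress: String, // e.g. an IPFS CID (illustrative)
    recordedAt: Instant     // when this state was witnessed / timestamped
  )

  final case class TemporalRecord(
    latest: String,               // URL of the current state, served over HTTP
    history: List[VersionPointer] // timestamped pointers to previous states
  ) {
    // Never mutate: return a new record with the old state pushed onto history.
    def supersede(oldStateAddress: String, at: Instant, newLatest: String): TemporalRecord =
      copy(latest = newLatest,
           history = VersionPointer(oldStateAddress, at) :: history)
  }
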
>     >
>     > You can also get quite far just by adding links between version
>     states, using Memento for example.
>     > I am trying that out in the new Solid Server I am putting
>     together. There are comments in the code here:
>     >
>     https://github.com/co-operating-systems/Reactive-SoLiD/blob/master/src/main/scala/run/cosy/ldp/fs/Resource.scala#L223
>     >
>     > Thanks Henry
>     >
>     > This looks interesting. I've had a look at the comment you
>     pointed to, though I didn't fully understand it.
>     >
>     > Would it be possible to describe it in a couple of sentences for
>     those who are not intimately familiar with Memento?
>
>     I have not implemented Memento fully myself. It builds on RFC 5829
>     and allows one to link to archives that keep copies of versions,
>     though the server can be its own archive, I believe.
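
Roughly, the idea is that each version's HTTP response carries Link
headers pointing at the original resource, at its predecessor, and at a
TimeMap listing all versions, plus a Memento-Datetime header. A sketch
with made-up URIs and helper names -- not what Reactive-SoLiD actually
emits:

  // Illustrative construction of RFC 5829 / Memento (RFC 7089) style
  // headers for one version of a resource.
  import java.time.format.DateTimeFormatter
  import java.time.{ZoneOffset, ZonedDateTime}
  import java.util.Locale

  object VersionLinks {
    // HTTP-date format used by the Memento-Datetime header
    private val httpDate =
      DateTimeFormatter
        .ofPattern("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US)
        .withZone(ZoneOffset.UTC)

    def headersFor(original: String, timeMap: String,
                   previous: Option[String],
                   created: ZonedDateTime): List[(String, String)] = {
      val base = List(
        // the server acting as its own archive: original doubles as TimeGate
        "Link" -> s"<$original>; rel=\"original timegate\"",
        // RFC 7089: the TimeMap enumerates all known versions
        "Link" -> s"<$timeMap>; rel=\"timemap\"",
        "Memento-Datetime" -> httpDate.format(created)
      )
      // RFC 5829: link each version to the one it supersedes
      base ++ previous.map(p => "Link" -> s"<$p>; rel=\"predecessor-version\"")
    }
  }
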
>
>     >
>     > I can imagine a work stream for a temporal web based on this
>     approach, if there's interest
>     >
>     > The thing that particularly interests me is what I'll term, quite
>     vaguely, a web-scale temporal web.  What I mean by this is that the
>     timestamp operation (aka the witness operation) is global in scope
>     and not local.  Meaning that if any one website goes down, the
>     timestamping record will still be there, allowing a reconstruction
>     of the history and providing resilience.  In 2021, we have
>     specialized timestamping servers (commonly referred to as public
>     blockchains) which can provide these time-travel functions.
>     From your comment, there is time travel in Memento too.  In any
>     case, I'm interested in your findings, if you'd like to share ...
>
>     I think there is a clear need for any read-write web server such
>     as Solid to keep versioning information, just to help restore
>     versions in case a buggy hyper-App writes bad data.  That use case
>     does not require global consensus on version states, so I think
>     local versioning will clearly be the first priority. Trellis-LDP
>     implements something like this too, so I am following in its
>     footsteps to get a better understanding of it.
>
>
> Agree, that's the trouble with using a stateless protocol to manage state!
>
> Exposing versions via headers is a neat solution. I guess you can
> implement ETags with If-Match on write requests to avoid updating a
> resource for which you have stale state / which has already changed
> without you knowing. That is probably good enough for most cases.
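
The usual spelling of that guard is an If-Match header carrying the
ETag you last read (If-None-Match: * being the companion for "only
create if the resource does not exist yet"). A sketch against an
illustrative URL and payload, using nothing beyond java.net.http:

  // Read a resource, remember its ETag, and write back conditionally:
  // the PUT succeeds only if the ETag still matches; otherwise the
  // server replies 412 Precondition Failed and we know our copy is stale.
  import java.net.URI
  import java.net.http.{HttpClient, HttpRequest, HttpResponse}

  object ConditionalWrite {
    def main(args: Array[String]): Unit = {
      val client = HttpClient.newHttpClient()
      val uri    = URI.create("https://example.org/container/card.ttl") // illustrative

      val get  = HttpRequest.newBuilder(uri).GET().build()
      val got  = client.send(get, HttpResponse.BodyHandlers.ofString())
      val etag = got.headers().firstValue("ETag").orElse("")

      val put = HttpRequest.newBuilder(uri)
        .header("If-Match", etag)
        .header("Content-Type", "text/turtle")
        .PUT(HttpRequest.BodyPublishers.ofString("<#i> a <#Agent> ."))
        .build()
      val resp = client.send(put, HttpResponse.BodyHandlers.ofString())
      if (resp.statusCode() == 412)
        println("Stale state: the resource changed since we last read it.")
    }
  }
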
>
> I guess chains become a factor when you need reliable timestamps and a
> provably tamper-proof, immutable chain of previous states.


Yes!
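
For what it's worth, the tamper-evidence part is the cheap bit: each
entry just needs to commit to the hash of its predecessor, so any
rewrite of history is detectable. A toy sketch with made-up names --
it deliberately says nothing about where a trustworthy, globally
witnessed timestamp would come from, which is exactly where the chains
(and the economics below) come in:

  // Tamper-evident chain of previous states: each entry hashes its
  // predecessor, so altering any past entry breaks every later link.
  import java.nio.charset.StandardCharsets.UTF_8
  import java.security.MessageDigest

  final case class Entry(stateAddress: String, timestamp: Long, prevHash: String) {
    def hash: String =
      MessageDigest.getInstance("SHA-256")
        .digest(s"$stateAddress|$timestamp|$prevHash".getBytes(UTF_8))
        .map("%02x".format(_)).mkString
  }

  object Chain {
    def append(chain: List[Entry], stateAddress: String, timestamp: Long): List[Entry] =
      Entry(stateAddress, timestamp,
            chain.headOption.map(_.hash).getOrElse("genesis")) :: chain

    // Every entry must still commit to the hash of the one before it.
    def verify(chain: List[Entry]): Boolean =
      chain.zip(chain.drop(1)).forall { case (newer, older) =>
        newer.prevHash == older.hash
      }
  }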

Throughput (and the associated economics) remains a challenge (as far as
I know), since blockchains are still database management systems, albeit
a specialized variety whose transaction logs are maintained by various
consensus protocols.

Imagine trying to track the evolution of the descriptions of various
entities in DBpedia since 2007; I don't see how that wouldn't hit every
scalability threshold, in terms of both throughput and $$$.


>
> It would be nice if various approaches worked together.


Yes, interoperability is vital.

-- 
Regards,

Kingsley Idehen       
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
              http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
        : http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this
