Re: tracking state changes in a temporal read-write web

On Sat, 22 May 2021 at 00:35, Martynas Jusevičius <martynas@atomgraph.com>
wrote:

> Hi all,
>
> Why do these discussions immediately explode in all possible
> directions?


It's not in 'all possible directions'; I never asked what you had for
breakfast ;)

Notwithstanding an important point being made...

I guess the question is: what should be 'in scope'?

Personally (whether it relates to a deficit or otherwise), I see work I was
involved with being deployed upon what appears to be the entirety of
humanity, given both the pretext and the reality that part of my family
(the relatively wealthy part) has a heritage in pathology. Here's a link:
https://www.youtube.com/watch?v=EjJzK2YrgQk&list=PL_voXEIX5Xhvo-4N-Wg7rFuG7JwY8AOHp

But my digest opinion is that stuff is going on that's not altogether
'healthy'...  IMO...

Today we have 'information systems' (not knowledge systems) that can be
changed / updated / modified without any consequential notifications, in
order to stimulate particular responses in persons, whilst a new technology
(namely credentials) gets rolled out as a 'human identity solution'
(a strange, hyper-complex barcode?) that'll act to restrict and/or define
rules upon 'things' that are part of our biosphere, our natural world...

So - how can we better simplify that problem? Temporally?

Moral vs. financial vs. compliance?  IDK generally...  I'm struggling with
it, personally...  I don't see how these works help the human rights of
kids, which was the reason for sacrificing so much of 'life' by spending so
much time learning how to W3C...  & linked 'things'...




> Temporal integrity, blockchain (agnostic), hashing... How
> about we figure out the very *very* basics first, and start building
> from that?
>
> I have a suggestion for a simple decentralized use case:
>
>     There are 2 agents running instances of the same application,
> where the instances are peers since the application includes both a
> server (with an RDF storage backend) and a client and can communicate
> both ways.
>     One of the agents accesses (dereferences) an RDF document on the
> peer application, and stores that data in its own application.
>
> And that's it, to begin with. The intention is that now the agent can
> cross-reference the new data with the rest of the data in its
> application, e.g. using SPARQL if the storage supports it.
> Authentication, authorization are of course also in this picture, but
> they are orthogonal, so for the sake of simplicity we can skip them
> for now.
>
> Is that too simplistic? Then please show me an RDF-based app that can
> do this out-of-the-box.
> After this, we can proceed to add more requirements to this scenario...


IMO: really useful, but why place an (artificial) limitation on scope?
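
That said, for the basics you describe, something like this minimal rdflib
sketch seems close (the URLs and filenames are placeholders, and rdflib
here is only a stand-in for 'an application with an RDF storage backend'):

    from rdflib import Graph

    local = Graph()
    local.parse("local-data.ttl", format="turtle")      # the agent's own data

    # dereference an RDF document on the peer application and store it locally
    local.parse("https://peer.example/app/data/document")

    # cross-reference the fetched data with what was already there
    for row in local.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"):
        print(row)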

with kindness, at the heart of it (& easily ignored :),

Timothy Holborn

>
>
> Martynas
>
> On Fri, May 21, 2021 at 4:14 PM Kingsley Idehen <kidehen@openlinksw.com>
> wrote:
> >
> > On 5/21/21 7:34 AM, Melvin Carvalho wrote:
> >
> > this is the outline of a strategy to track state changes in a temporal
> read-write web
> >
> > by no means the only strategy, but an aim to generalize some of the
> recent discussions
> >
> > 1. Data as a declarative state machine
> >
> > The data can be considered as a declarative state machine, offering
> state transitions
> >
> > The simple case is one document, but it's useful to have multiple
> > documents, over a set of quads (linked data) or a directory tree (file system)
> >
> > It seems standard practice to track this data using a hash function, so
> > the first step would be to hash the document, tree or knowledge base into
> > a chain of hashes.  Git and other VCS systems do this; similarly, with single
> > documents you could take a SHA-2 hash, for example, and maintain a chain of
> > hashes that way
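
A rough Python sketch of the chaining idea, purely illustrative (git uses
its own object format; the helper below is just a stand-in):

    import hashlib

    def chain_hash(prev_hash_hex: str, document: bytes) -> str:
        # Each new state hash commits to the previous hash as well as the
        # new content, so the whole history is fixed by the latest hash.
        h = hashlib.sha256()
        h.update(bytes.fromhex(prev_hash_hex))
        h.update(document)
        return h.hexdigest()

    hash0 = hashlib.sha256(b"").hexdigest()          # a definitive start
    hash1 = chain_hash(hash0, b"<s> <p> <o> .")      # state after change 1
    hash2 = chain_hash(hash1, b"<s> <p> <o2> .")     # state after change 2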
> >
> > 2. Bootstrapping a timestamp server to witness hashes
> >
> > Robust global timestamp servers have existed for over a decade,
> > popularized by the bitcoin project. They are often referred to as block
> > chains because data is tied to those timestamps in the form of 'blocks' of data.
> > Users compete for space in those blocks on an auction basis, as block space
> > is a finite resource, which makes them spam resistant
> >
> > The chain of hashes described in (1) can be tracked on the blocks of the
> timestamp server, which tend to have a common transaction format.
> >
> > What is needed is hash1,hash2,hash3...hashn to be sequenced in time from
> a definitive start, or genesis.  That genesis can become an identifier for
> the chain of linked data which we wish to securely witness.
> >
> > Block chains typically follow a transaction in time from spent ->
> unspent.  The terminology is that of inputs and outputs.  This can be
> thought of as source and destination.
> >
> > The transactions are identified by cryptographic hashes, and carry an array
> > of outputs.  In order for a timestamp server to track a chain of linked
> > data, we need to construct a URI for the linked data hashes (hash1,2...n)
> > and for the block chain transactions (tx1,2...n), with the first tx being a
> > genesis identifier
> >
> > Gaps that need to be filled: create URIs for hash1,2...n, and create URIs
> > for tx1,2...n
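
As a strawman only (neither scheme below exists; that's exactly the gap
being named), the two URI families might look something like:

    # Strawman URI constructors; 'webhash:' and 'tx:' are made-up,
    # unregistered schemes, shown only to make the two families tangible.
    def hash_uri(genesis_txid: str, n: int, digest_hex: str) -> str:
        # the nth state hash in the web chain rooted at the genesis tx
        return f"webhash:{genesis_txid}/{n}/{digest_hex}"

    def tx_uri(chain: str, txid: str) -> str:
        # a timestamp-server transaction, kept blockchain-agnostic
        return f"tx:{chain}:{txid}"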
> >
> > 3. Two-way links between state machines
> >
> > Two-way links between those state machines ensure strong coupling
> > between the two systems, providing a bootstrap.  So on the linked data
> > side, you need a pointer to the transaction URI, and from the block
> > chain you need a pointer to the hash URI.
> >
> > From a block chain there are a couple of ways to do this. One is the
> > so-called OP_RETURN, which allows you to embed data in a transaction.  The
> > other is known as 'tweaking' a public key in order to add a hash
> > (hash1,2...n in the web chain)
> >
> > Linking from linked data to a transaction, once you have a URI, can be
> > done in a number of ways.  As linked data is designed to link to other
> > URIs, it's quite doable by putting the link into the data structure.  Another
> > technique, for example in a VCS, is to put a link in the commit message, as
> > commit messages are part of the chained tree
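
For the linked-data half of the coupling, a minimal rdflib sketch (the
'witnessedBy' predicate is made up, any agreed vocabulary could play that
role, and the URIs are placeholders):

    from rdflib import Graph, URIRef

    g = Graph()
    state = URIRef("webhash:GENESISTX/2/abc123")    # placeholder hash URI
    tx    = URIRef("tx:btc:deadbeef")               # placeholder tx URI

    # one half of the two-way link: document state -> witnessing transaction;
    # the other half (tx -> hash) lives on-chain via OP_RETURN or a tweaked key
    g.add((state, URIRef("http://example.org/vocab#witnessedBy"), tx))
    print(g.serialize(format="turtle"))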
> >
> > 4. Ensuring Temporal Integrity
> >
> > Once (1), (2) and (3) are in place, changes can be made to the state
> > machine and new hashes generated.  With the example of git, we can commit
> > hashes to a file system, or to a centralized server such as GitHub
> >
> > But if we want to commit at web scale, we can do so as follows:
> >
> > Firstly, generate a hash of the new state.  Then move the transaction in
> > the block chain along to point to this new state.  The transaction itself
> > has PKI-based ownership rights, with a variety of ways to manage and
> > transfer ownership, including so-called "multi-sig" ownership where any N of
> > a given M actors need to agree on a transition
> >
> > Finally, point the web chain back to this new transaction once it is
> confirmed
> >
> > This will progress the web chain in time and mirror it on the underlying
> time stamp server
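
Pulling (1)-(3) together, the commit loop might read something like this
hand-wavy sketch (build_tx_spending, broadcast_tx and link_state_to_tx are
hypothetical stand-ins for whatever wallet / node / store API is actually
used; chain_hash is the helper sketched under (1)):

    def commit_new_state(prev_hash, prev_txid, new_document):
        # step 1: hash the new state, chained to the previous hash
        new_hash = chain_hash(prev_hash, new_document)

        # step 2: move the chain transaction along, embedding the new hash
        # (e.g. in an OP_RETURN output, or by tweaking the next key)
        tx = build_tx_spending(prev_txid, op_return=bytes.fromhex(new_hash))
        new_txid = broadcast_tx(tx)          # then wait for confirmation

        # step 3: point the web chain back at the confirmed transaction
        link_state_to_tx(new_hash, new_txid)
        return new_hash, new_txid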
> >
> > The resulting system creates a temporal read-write web state machine
> > anchored to the strong assurances of an underlying timestamp server
> >
> > This is a sketch outline of something that could be turned into a
> > prototype or MVP, and it also illustrates the gaps in technology that we
> > need to fill, namely: two URI schemes, a way to hash web state, and a way
> > to describe state transitions, for data and for agents
> >
> > Appreciate this is a sketch outline right now, feedback welcome!
> >
> >
> > Great explanation!
> >
> > Challenge:
> >
> > Persisting this in a form that is available for easy recall.
> >
> > Suggestions:
> >
> > 1. Documentation using RDF sentences in a document
> >
> > 2. A visual diagram to complement -- e.g., using http://draw.io
> >
> > Example:
> >
> > 1.
> http://www.openlinksw.com/data/turtle/general/knowledge-graph-manifestation-turtle-jsonld.html
> -- I constructed that for explaining Hypertext, Hyperdata, Hypermedia etc.,
> in relation to Knowledge Graphs; that all started from a draw.io diagram.
> >
> >
> > --
> > Regards,
> >
> > Kingsley Idehen
> > Founder & CEO
> > OpenLink Software
> > Home Page: http://www.openlinksw.com
> > Community Support: https://community.openlinksw.com
> > Weblogs (Blogs):
> > Company Blog: https://medium.com/openlink-software-blog
> > Virtuoso Blog: https://medium.com/virtuoso-blog
> > Data Access Drivers Blog:
> https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers
> >
> > Personal Weblogs (Blogs):
> > Medium Blog: https://medium.com/@kidehen
> > Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
> >               http://kidehen.blogspot.com
> >
> > Profile Pages:
> > Pinterest: https://www.pinterest.com/kidehen/
> > Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
> > Twitter: https://twitter.com/kidehen
> > Google+: https://plus.google.com/+KingsleyIdehen/about
> > LinkedIn: http://www.linkedin.com/in/kidehen
> >
> > Web Identities (WebID):
> > Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
> >         :
> http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this
> >
>
>

Received on Friday, 21 May 2021 14:53:33 UTC