Re: Working draft (revisited)

I think I've found a way of performing functional / declarative C(R)UD by
means of query-model resolution patterns (inspired by a previous blog post
on this list: "Paradigm shifts for the decentralized Web") for the approach
to 'decentralizing' model integrations I'm trying to build.

Sorry again for the lack of 'code'. I'm just trying to 'dump' what I'd like
to do as an analysis document while I gather the resources to build
something functional:

Basically, the idea is to functionally 'homogenize' data sources: their
data, schemas, and (inferred) 'behaviors' (flows / transforms / processes).
By means of semantic aggregation into layers, plus three 'alignment' models
of sources (identity of records / entities without common keys via class /
metaclass abstractions, resolution of missing relations / attributes, and
contextual 'sorting', for example cause / effect in a process-events
context), one should obtain:

1. A homogeneous (functional) metamodel of data and schema (Resources),
like an XML document.

2. A homogeneous (functional) metamodel of entailed 'processes'
(transforms / flows) and the behaviors they entail (integration of, for
example, action / flow A in origin X entailing action / flow B in origin
Y). This resembles an XSL transform.

The beauty is that these metamodels are obtained, aggregated, and aligned
from raw data (an RDF dump of a database, for example); the inferred
Template 'code' is then parsed simultaneously with, and applied to, its
corresponding 'data' Resources to obtain the resulting metamodels.

So: aggregate and align diverse data sources into homogeneous structures
that provide Templates (code / flows / transforms) which apply over
Resources (data) when those Resources have a determinate structure or
'shape'.
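In lieu of real code, here is a toy sketch of what I mean by Resources,
Templates, and shape-conditioned application (plain Python, triples as
tuples; every name and mapping here is a hypothetical illustration, not a
proposed API):

```python
# Minimal sketch: aggregate SPO triples from two 'sources' into one
# homogeneous set (Resources), then apply a Template -- a transform that
# fires only when a resource matches a given 'shape' (set of predicates).

def aggregate(*sources):
    """Union raw SPO triples from several sources into one graph."""
    graph = set()
    for triples in sources:
        graph |= set(triples)
    return graph

def shape_of(graph, subject):
    """The 'shape' of a resource: the set of predicates it carries."""
    return {p for s, p, o in graph if s == subject}

def apply_template(graph, required_shape, transform):
    """Apply `transform` to every resource whose shape covers the
    template's required predicates; return the entailed triples
    (the 'XSL output')."""
    out = set()
    for subject in {s for s, _, _ in graph}:
        if required_shape <= shape_of(graph, subject):
            out |= transform(graph, subject)
    return out

# Source X and source Y, already lowered to raw SPO form:
src_x = {("order:1", "hasItem", "item:9"), ("order:1", "status", "paid")}
src_y = {("item:9", "stockedAt", "depot:4")}

graph = aggregate(src_x, src_y)

# Template: a 'paid' order with an item entails a shipping flow
# (action in origin X entailing a flow in origin Y).
def ship_entailment(g, order):
    items = {o for s, p, o in g if s == order and p == "hasItem"}
    return {(order, "entails", "flow:ship/" + i) for i in items}

entailed = apply_template(graph, {"hasItem", "status"}, ship_entailment)
```

The point of the sketch is only the analogy: the aggregated triple set
plays the role of the XML document, and the shape-guarded transform plays
the role of an XSL template.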

Apologies for the quality of the attached documents. Regards,


On Dec 21, 2017 7:38 PM, "Sebastian Samaruga" <> wrote:

I really apologize for my vague approach to trying to specify something.
Scarcity of resources and time has prevented me from writing any code
until now.

I'm basically trying to state that an XML/XSL-like approach is also viable
for Semantic Web driven integration. I also emphasize that the alignment
models (incomplete in the document), via aggregation and inference, could
allow for schema / identity / ontology matching, entailment of links and
attributes, and a 'contextual' order alignment of entities, so that it
becomes possible to 'compare' meaningful things, such as cause / effect
relations.
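To make the identity-matching idea concrete, here is a toy sketch of
aligning entities from two sources that share no common key, by abstracting
each record to a class-level signature of attribute/value pairs (the names,
data, and threshold are illustrative assumptions, not part of any document):

```python
# Sketch: identity alignment without common keys. Records are abstracted
# to their (attribute, value) signature, and best overlaps are paired.

def signature(record):
    """Class/metaclass abstraction: the set of (attribute, value) pairs."""
    return set(record.items())

def align(source_a, source_b, threshold=0.5):
    """Pair each record in A with the B record sharing the largest
    fraction of attribute/value pairs (Jaccard), if above `threshold`."""
    pairs = []
    for ida, ra in source_a.items():
        best, best_score = None, 0.0
        for idb, rb in source_b.items():
            sa, sb = signature(ra), signature(rb)
            score = len(sa & sb) / len(sa | sb) if sa | sb else 0.0
            if score > best_score:
                best, best_score = idb, score
        if best is not None and best_score >= threshold:
            pairs.append((ida, best, best_score))
    return pairs

# Two origins describing the same customer under different local keys:
crm = {"c1": {"name": "ACME", "city": "Lima"}}
erp = {"e7": {"name": "ACME", "city": "Lima", "vat": "PE123"},
       "e8": {"name": "Globex", "city": "Oslo"}}

matches = align(crm, erp)
```

Real ontology matching would of course use much richer evidence than value
overlap; the sketch only shows where a key-less identity model would slot
into the pipeline.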

I also think all of this could be provided by means of simple / complex
aggregation of basic SPO RDF data sources, perhaps the ones at hand coming
from any database or service. Updated link and document:
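By 'basic SPO datasources from any database or service' I mean something
like this toy lowering, where a relational row and a flat service payload
both end up in the same SPO form (all mappings and names are hypothetical):

```python
# Sketch: lower two heterogeneous origins -- a relational row and a
# flat JSON-like service payload -- to the same basic SPO form, so that
# both can feed one aggregation / alignment pipeline.

def row_to_spo(table, pk, row):
    """One relational row -> SPO triples keyed by table:primary-key."""
    subject = f"{table}:{row[pk]}"
    return {(subject, col, str(val))
            for col, val in row.items() if col != pk}

def json_to_spo(subject, payload):
    """One flat service payload -> SPO triples about `subject`."""
    return {(subject, key, str(val)) for key, val in payload.items()}

db_triples = row_to_spo("customer", "id", {"id": 42, "name": "ACME"})
svc_triples = json_to_spo("customer:42", {"segment": "wholesale"})

# Both origins now describe the same subject in one homogeneous graph.
spo = db_triples | svc_triples
```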

Sorry again. Although I recognize you are right, I don't yet have the
means to start coding. Regards,


On Dec 21, 2017 1:35 AM, "Peter Brooks" <> wrote:

> The usefulness of coding it up is that it gives clarity. It's easy to
> posit things in a vaguely convincing (to yourself, to begin with) way,
> but coding the detail, even for a hypothetical example, exposes the
> bits you've not thought of, enabling clarity of thought.
> On 21 December 2017 at 00:59, Sebastian Samaruga <>
> wrote:
> > Hi all,
> >
> > I know I've been publishing a series of posts that perhaps lack
> > academic formality or shape / coherence. My intention is not to spam
> > the lists with content that may not deserve the subscribers' reading
> > time. That's why I warn everybody that I'm just a Semantic Web
> > 'hobbyist' dumping thoughts in a bottle.
> >
> > Feel free to skip reading my messages. Although I sometimes receive
> > very valuable feedback, sometimes people seem bothered by what I
> > publish.
> >
> > My attempt now is finally to start writing code. I've added an
> > appendix to the document which explains details I hadn't realized in
> > the past and which are fundamental for the deployment of an
> > implementation (WIP, fuzzy draft). I've also suffered paralysis in an
> > 'analysis' phase in which, because of lack of knowledge and work
> > responsibilities, I couldn't code much.
> >
> > The idea is to write a very small functional API library for the
> > processing of streams (bus) of models of knowledge (resources) and
> > their contexts (templates), from which to apply aggregation and
> > alignment / inference. Then, given opportune APIs, provide
> > implementations of connectors and 'deployer' wrapper endpoints.
> >
> > Again, this is a very fuzzy early draft of a WIP document which
> > doesn't yet qualify even as an analysis or specification document.
> > But it's useful for me to post it, even as a reminder of the progress
> > I've been making.
> >
> >
> ment.pdf?raw=true
> >
> > Thanks,
> > Sebastián.
> >
> --
> Peter Brooks
> Mobile: +27 82 717 6404
> Direct:  +27 21 447 9752
> Skype:  Fustbariclation
> Twitter: Fustbariclation
> Google+: Fustbariclation
> Author Page:

Received on Saturday, 23 December 2017 13:43:48 UTC