Re: Trip report on Dagstuhl seminar Big Graph Data Processing

Some further comments:

You wrote:

> • How do we create mappings between different data models? 
> • Or should we create a dragon data model that rules them all, such that all data models can be mapped to the dragon data model? If so, what are all the abstract features that a data model should support? 

This corresponds to the pros and cons of upper ontologies vs peer-to-peer mappings. Which answer is best depends on the context, and on which approach proves cheaper, more robust, and so forth.
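
To make the trade-off concrete, here is a minimal Python sketch (all names and numbers are mine, not from the seminar) contrasting the number of mappings each wiring needs, and showing how an upper ontology lets pairwise mappings be recovered by composing through the hub:

    def pairwise_count(n):
        # Directed peer-to-peer mappings among n data models.
        return n * (n - 1)

    def hub_count(n):
        # Mappings needed when every model maps to and from
        # a single upper ontology.
        return 2 * n

    def compose(to_hub, from_hub):
        # Compose a model-A -> hub term mapping with a
        # hub -> model-B one to get a direct A -> B mapping.
        return {src: from_hub[mid] for src, mid in to_hub.items()
                if mid in from_hub}

    print(pairwise_count(10), hub_count(10))   # 90 vs 20 mappings
    a_to_hub = {"person": "Agent", "org": "Agent"}
    hub_to_b = {"Agent": "party"}
    print(compose(a_to_hub, hub_to_b))  # {'person': 'party', 'org': 'party'}

The hub wins on mapping count as the number of models grows, at the cost of forcing everything through one shared vocabulary, which is where the robustness and cheapness questions bite.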

> • What is the formalism to represent mappings? Logic? Algebra? Category Theory? 

Do we really need such formalisms? An alternative is to treat this as learning how to map between graphs from the statistics of a set of training examples, e.g. as Google Translate maps text in one human language to text in another. Rather than manually developing mapping rules, we would instead focus on curating examples and counter-examples, and on scoring mappings on a scale from good to bad. Is this blend of graphs and statistics in scope for the Semantic Web?
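
As a toy illustration of what example-driven scoring could look like (everything here is hypothetical, not a proposal from the seminar), a candidate mapping can be graded by the fraction of curated examples it handles correctly:

    def score_mapping(mapping, positives, negatives):
        # Fraction of curated examples the candidate mapping handles
        # correctly -- a crude stand-in for a learned quality score.
        positives, negatives = list(positives), list(negatives)
        hits = sum(mapping(src) == tgt for src, tgt in positives)
        hits += sum(mapping(src) != tgt for src, tgt in negatives)
        return hits / (len(positives) + len(negatives))

    rename = {"name": "label"}            # candidate predicate mapping

    def candidate(triple):
        s, p, o = triple
        return (s, rename.get(p, p), o)

    positives = [((":a", "name", "Ada"), (":a", "label", "Ada"))]
    negatives = [((":a", "age", "36"), (":a", "label", "36"))]
    print(score_mapping(candidate, positives, negatives))   # 1.0

A real system would learn the mapping itself from such examples, but the curation and scoring loop is the same shape.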

> • What are the properties that mappings should have? Information, Query and Semantics preserving, composability, etc.

I would emphasise machine learnability!
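
With learned mappings we may not be able to prove properties like composability or semantics preservation algebraically, but we can test them empirically on held-out examples. A toy sketch (names and data hypothetical):

    def composes(map_ab, map_bc, examples):
        # Check on curated (source, expected-target) pairs that
        # chaining A -> B and B -> C agrees with a directly curated
        # A -> C expectation: a statistical analogue of a proof.
        return all(map_bc(map_ab(src)) == expected
                   for src, expected in examples)

    upper = str.upper                 # toy A -> B mapping
    exclaim = lambda s: s + "!"       # toy B -> C mapping
    print(composes(upper, exclaim, [("node", "NODE!")]))   # True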

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of Things

Received on Saturday, 14 December 2019 15:20:37 UTC