Re: Relational inductive biases, deep learning, and graph networks

Category theory can provide the tools to define such graph networks, and in fact BICA (biologically inspired cognitive architectures) research programs around the world are already using such tools.
There is a growing body of scientific literature across BICA, cognitive science and neuroscience that explores category theory to do just that.
Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development 

On Thursday, August 8, 2019, 07:25:40 AM ADT, Dave Raggett <dsr@w3.org> wrote:

Hi Martynas,
Thanks for the pointer.  Comments below.


On 7 Aug 2019, at 22:17, Martynas Jusevičius <martynas@atomgraph.com> wrote:
Hi,

has anyone read this paper? https://arxiv.org/abs/1806.01261
Authors: DeepMind; Google Brain; MIT; University of Edinburgh

I was surprised not to find any mentions of it in my inbox.

The authors conclude:

"[...] Here we explored flexible learning-based approaches which
implement strong relational inductive biases to capitalize on
explicitly structured representations and computations, and presented
a framework called graph networks, which generalize and extend various
recent approaches for neural networks applied to graphs. Graph
networks are designed to promote building complex architectures using
customizable graph-to-graph building blocks, and their relational
inductive biases promote combinatorial generalization and improved
sample efficiency over other standard machine learning building
blocks. [...]"

I have very limited knowledge of ML, but it seems to me that they say
that an RDF-like directed graph structure is conducive to
next-generation ML approaches.

Does anyone have any ideas on what the implications could be for
Linked Data and Knowledge Graphs?
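
For context, the paper's core building block (the GN block) updates edge, node and global attributes in turn. Below is a rough, untrained NumPy sketch of what one such block computes; the linear update functions and the sum aggregation are placeholder choices for illustration, not the authors' implementation.

    # Sketch of one GN block in the spirit of arXiv:1806.01261.
    # phi_e, phi_v, phi_u are stand-in linear maps with random, untrained
    # weights; the paper treats them as learnable functions, and sum is
    # just one of the permitted aggregations.
    import numpy as np

    rng = np.random.default_rng(0)

    def linear(in_dim, out_dim):
        W = rng.normal(scale=0.1, size=(in_dim, out_dim))
        return lambda x: x @ W

    def gn_block(V, E, senders, receivers, u, dim=8):
        # V: (num_nodes, dv), E: (num_edges, de), u: (du,) global attribute
        phi_e = linear(E.shape[1] + 2 * V.shape[1] + u.shape[0], dim)
        phi_v = linear(dim + V.shape[1] + u.shape[0], dim)
        phi_u = linear(dim + dim + u.shape[0], dim)

        # 1. per-edge update: each edge sees its attribute, both endpoints, and u
        u_rep = np.repeat(u[None, :], E.shape[0], axis=0)
        E_new = phi_e(np.concatenate([E, V[senders], V[receivers], u_rep], axis=1))

        # 2. aggregate incoming edges per node, then per-node update
        agg_e = np.zeros((V.shape[0], dim))
        np.add.at(agg_e, receivers, E_new)
        u_rep_v = np.repeat(u[None, :], V.shape[0], axis=0)
        V_new = phi_v(np.concatenate([agg_e, V, u_rep_v], axis=1))

        # 3. global update from aggregated edges, aggregated nodes, and u
        u_new = phi_u(np.concatenate([E_new.sum(axis=0), V_new.sum(axis=0), u]))
        return V_new, E_new, u_new

    # Toy usage with made-up sizes
    V = rng.normal(size=(4, 3))
    E = rng.normal(size=(5, 2))
    senders = np.array([0, 1, 2, 3, 0])
    receivers = np.array([1, 2, 3, 0, 2])
    u = rng.normal(size=(6,))
    V2, E2, u2 = gn_block(V, E, senders, receivers, u)

One relevant point for the Linked Data question is that nodes, edges and the graph as a whole all carry attribute vectors here, so much of the work lies in deciding how RDF terms and literals would be encoded into, and decoded from, such vectors.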


There is a lot we can learn from Cognitive Psychology and Neuroscience with respect to requirements and architecture. To give an example, the hippocampus supports short-term memory whilst the cortex focuses on long-term memory. You need detailed information about the recent past, but when it comes to inductive learning in the presence of noise, you don't want the most recent events to unduly bias learning from past events.
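As a toy numeric illustration of that last point (made-up numbers, not from any of the work cited here): with noisy observations of a fixed quantity, a plain running mean weights all past events equally, whereas a heavily recency-weighted estimate lets the last few noisy samples dominate.

    # Hypothetical example: noisy observations of a constant signal.
    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 10.0
    obs = true_value + rng.normal(scale=2.0, size=200)

    running_mean = obs.mean()        # all past events weighted equally

    ewma = obs[0]
    alpha = 0.5                      # strong recency weighting
    for x in obs[1:]:
        ewma = alpha * x + (1 - alpha) * ewma

    print(abs(running_mean - true_value))   # typically small
    print(abs(ewma - true_value))           # typically larger and noisier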
Another example concerns the role of rules and graphs. The basal ganglia and thalamus are widely connected to different parts of the cortex and act as a rule engine, transforming inputs into outputs that query and update memories and invoke motor actions via delegation to the cerebellum. The rules don't act directly on the cortex; instead they send queries and updates and act on the responses.
This suggests that we need production rule languages that behave similarly, with rule actions invoking queries and updates in potentially remote graph databases, and the responses used to match rule conditions. For efficiency with large datasets, graph algorithms (including graph queries) should be executed locally within the graph database. Moreover, declarative descriptions of behaviour are over time compiled into procedural descriptions with dramatic speed-ups. This suggests using graphs to describe rules as a means of facilitating such adaptation.
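A minimal Python sketch of that pattern: rule actions issue queries against a graph store, and the responses come back as facts that later rule conditions can match. The fact format, the toy in-memory store and the rule representation are all invented for illustration; nothing here is an existing rule language or database API.

    from collections import deque

    # stand-in for a (possibly remote) graph database
    graph_store = {
        ("room1", "temperature"): 31,
        ("room1", "setpoint"): 22,
    }

    def query(subject, prop):
        # in a real system this would be an asynchronous query to the store
        return graph_store.get((subject, prop))

    facts = deque([("check", "room1")])
    working_memory = set()

    rules = [
        # (condition, action) pairs; actions may query the store and assert new facts
        (lambda f: f[0] == "check",
         lambda f: facts.append(("temperature", f[1], query(f[1], "temperature")))),
        (lambda f: f[0] == "temperature" and f[2] is not None and f[2] > 25,
         lambda f: facts.append(("cool", f[1]))),
        (lambda f: f[0] == "cool",
         lambda f: print("action: start cooling", f[1])),
    ]

    # naive forward-chaining loop: pop a fact, fire every rule whose condition matches
    while facts:
        fact = facts.popleft()
        if fact in working_memory:
            continue
        working_memory.add(fact)
        for condition, action in rules:
            if condition(fact):
                action(fact)

In a fuller system the queries would be asynchronous and the heavier graph algorithms would run inside the store itself, as noted above; the point of the sketch is only the loop in which rule actions query graphs and the responses feed back into condition matching.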
Machine learning is needed to scale up to large vocabularies and rulesets that would be impractical to maintain manually, given the inevitable evolution of requirements as a consequence of constantly changing business conditions. This is likely to require a synthesis of symbolic approaches with computational statistics, where we can draw upon decades of work in Cognitive Science and related disciplines.
Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things

Received on Thursday, 8 August 2019 17:27:48 UTC