Relational inductive biases, deep learning, and graph networks

Hi,

has anyone read this paper? https://arxiv.org/abs/1806.01261
Authors: DeepMind; Google Brain; MIT; University of Edinburgh

I was surprised not to find any mentions of it in my inbox.

The authors conclude:

"[...] Here we explored flexible learning-based approaches which
implement strong relational inductive biases to capitalize on
explicitly structured representations and computations, and presented
a framework called graph networks, which generalize and extend various
recent approaches for neural networks applied to graphs. Graph
networks are designed to promote building complex architectures using
customizable graph-to-graph building blocks, and their relational
inductive biases promote combinatorial generalization and improved
sample efficiency over other standard machine learning building
blocks. [...]"

I have very limited knowledge of ML, but it seems to me that they are
saying that an RDF-like directed graph structure is conducive to
next-generation ML approaches.

Does anyone have any ideas on what the implications could be for
Linked Data and Knowledge Graphs?

There is also an iterative algorithm given, which computes and updates
edge, node, and whole-graph attributes in turn. I wonder if this could
be implemented using SPARQL? Not necessarily efficiently, but as a
proof of concept.
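For reference, the algorithm in question (the paper's "GN block") can be
sketched in a few lines of plain Python. This is only a toy illustration:
the phi_* update functions below are trivial sums standing in for what the
paper assumes are learned neural networks, and the sum-based aggregations
are just one of the choices the paper allows.

```python
# Minimal sketch of one Graph Network (GN) block update, loosely following
# the paper's Algorithm 1. The phi_* functions here are toy stand-ins
# (plain sums) for learned update functions.

def gn_block(nodes, edges, u, phi_e, phi_v, phi_u):
    """nodes: {id: attr}, edges: [(sender, receiver, attr)], u: global attr."""
    # 1. Update each edge from its attribute, endpoint attributes, and u.
    new_edges = [(s, r, phi_e(e, nodes[s], nodes[r], u)) for s, r, e in edges]

    # 2. Aggregate updated edges per receiver node, then update each node.
    new_nodes = {}
    for i, v in nodes.items():
        incoming = [e for s, r, e in new_edges if r == i]
        e_bar = sum(incoming)  # edge-to-node aggregation (here: sum)
        new_nodes[i] = phi_v(e_bar, v, u)

    # 3. Aggregate all edges and all nodes, then update the global attribute.
    e_bar_all = sum(e for _, _, e in new_edges)
    v_bar_all = sum(new_nodes.values())
    new_u = phi_u(e_bar_all, v_bar_all, u)
    return new_nodes, new_edges, new_u

# Toy scalar attributes and a trivial "sum everything" update function:
nodes = {0: 1.0, 1: 2.0}
edges = [(0, 1, 0.5)]
phi = lambda *args: sum(args)
new_nodes, new_edges, new_u = gn_block(nodes, edges, 0.0, phi, phi, phi)
```

The point is just that one pass touches edges first, then nodes, then the
graph-level attribute, which maps naturally onto "properties of statements,
resources, and the graph" in RDF terms.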
For example, a program that walks all resources in an RDF graph and
executes an INSERT/DELETE/WHERE for each of them (with some variable
like ?this bound to the current resource) to compute/update property
values would be fairly easy to implement in Jena or RDF4J. But would
it make any sense? :) Maybe something like this already exists?
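To make the idea concrete, here is an in-memory toy analogue of that loop
in plain Python: triples as a set of (s, p, o) tuples, and a per-resource
update rule playing the role of the INSERT/DELETE/WHERE with ?this bound.
The ex:degree property and the rule itself are made up for illustration; a
real implementation would issue actual SPARQL updates via Jena or RDF4J.

```python
# Toy analogue of "walk every subject resource and apply an update with
# ?this bound to it". Triples are (s, p, o) tuples; a rule returns the
# sets of triples to insert and delete, like INSERT/DELETE/WHERE.

def update_each_resource(triples, rule):
    """triples: set of (s, p, o); rule(this, triples) -> (inserts, deletes)."""
    subjects = {s for s, p, o in triples}
    for this in subjects:
        inserts, deletes = rule(this, triples)
        triples = (triples - deletes) | inserts
    return triples

# Hypothetical rule: (re)compute ex:degree as the number of outgoing
# edges, mimicking
#   DELETE { ?this ex:degree ?old } INSERT { ?this ex:degree ?n } WHERE ...
def degree_rule(this, triples):
    old = {(s, p, o) for s, p, o in triples if s == this and p == "ex:degree"}
    n = sum(1 for s, p, o in triples if s == this and p != "ex:degree")
    return {(this, "ex:degree", n)}, old

g = {("a", "ex:knows", "b"), ("a", "ex:knows", "c"), ("b", "ex:knows", "c")}
g = update_each_resource(g, degree_rule)
```

Iterating this until the graph stops changing would give the fixed-point
flavour of the paper's repeated GN-block application, though of course
without any learned components.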

Martynas
atomgraph.com

Received on Wednesday, 7 August 2019 21:18:07 UTC