Re: Relational inductive biases, deep learning, and graph networks

> I have very limited knowledge of ML, but it seems to me that they say that an
> RDF-like directed graph structure is conducive for next-generation ML
> approaches.

Kind of. RDF graphs aren't plain directed graphs: the edge labels (predicates)
are IRIs that can themselves appear as nodes elsewhere in the graph. You would
need to figure out a different encoding, along the lines they suggest, that can
represent what RDF calls graphs analogously to the way they handle ordinary
directed graphs. An easy way to do it would be to reify everything -- turn each
triple into a statement node with subject, predicate and object links -- which
makes the encoding simple, but then you lose the graph structure, which is the
interesting bit.
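To make the reification idea concrete, here's a toy sketch (names and the
blank-node scheme are just illustrative, not any standard encoding): each
triple becomes a statement node wired to its three terms with a fixed set of
three edge labels, so predicates become ordinary nodes.

```python
# Hedged sketch: one way to flatten RDF-style triples into a plain
# labelled directed graph by reifying each statement. Purely illustrative.

def reify(triples):
    """Turn (s, p, o) triples into (nodes, edges): each triple becomes a
    statement node linked to its subject, predicate and object."""
    nodes = set()
    edges = []  # (source, label, target) with only three fixed labels
    for i, (s, p, o) in enumerate(triples):
        stmt = f"_:stmt{i}"          # hypothetical blank-node naming
        nodes.update([stmt, s, p, o])
        edges.append((stmt, "subject", s))
        edges.append((stmt, "predicate", p))
        edges.append((stmt, "object", o))
    return nodes, edges

nodes, edges = reify([("alice", "knows", "bob"), ("bob", "age", "42")])
```

Note how this dodges the "edge labels are nodes" problem (predicates like
"knows" are now just nodes), at the cost that the original subject-to-object
connectivity is only reachable through the statement nodes.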

> Does anyone have any ideas on what the implications could be for
> Linked Data and Knowledge Graphs?

In principle, maybe you could train such a beast on a bunch of data together
with its entailments under some expensive inference rules, and then have it
generate new facts in response to new data in a slightly error-prone but
cheaper way. I suspect you'd need to fix a vocabulary of terms at the outset,
though. It's hard to grow these models in an open-ended way, because that means
increasing the dimensionality of the node/edge vector spaces and of all the
matrices that are so expensive to train for the neural network stuff. Changing
that on the fly is hard.

> There is also an iterative algorithm given, which computes and updates
> either edge or node or whole graph attributes. I wonder if this could
> be implemented using SPARQL? Not necessarily efficiently, but as a
> proof of concept.

Do we have SPARQL on the GPU yet?
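Not that I know of. For anyone who wants to play with the iterative algorithm
outside SPARQL first: the per-step update the paper describes (edges, then
nodes, then the whole-graph attribute) fits in a few lines of numpy. The
additions and means below are toy stand-ins for the learned phi functions and
aggregators; only the structure of the loop follows the paper.

```python
import numpy as np

# Rough sketch of one graph-network update step: edge update, node
# update, then global-attribute update. The arithmetic is a placeholder
# for learned functions; the data flow is the point.

def gn_step(V, E, senders, receivers, u):
    # V: (n_nodes, d) node attrs; E: (n_edges, d) edge attrs;
    # senders/receivers: per-edge node indices; u: (d,) global attr.
    # 1. update each edge from its endpoint nodes and the global attr
    E = E + V[senders] + V[receivers] + u        # toy phi_e
    # 2. update each node from its aggregated incoming edges
    agg = np.zeros_like(V)
    np.add.at(agg, receivers, E)                 # sum edges per receiver
    V = V + agg + u                              # toy phi_v
    # 3. update the global attr from aggregated edges and nodes
    u = u + E.mean(axis=0) + V.mean(axis=0)      # toy phi_u
    return V, E, u

# Tiny example: 3 nodes, 2 edges (0 -> 1, 1 -> 2), 2-dim attributes.
V, E, u = gn_step(np.ones((3, 2)), np.ones((2, 2)),
                  np.array([0, 1]), np.array([1, 2]), np.zeros(2))
```

A SPARQL proof of concept would presumably express each of the three steps as
an UPDATE over attribute triples, which seems doable, just not fast.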

Cheers,
-w

Received on Thursday, 8 August 2019 10:20:50 UTC