Re: Relational inductive biases, deep learning, and graph networks

Thank you for this pointer, Martynas.

My ML knowledge is also only superficial, but one of my goals has been to
make sense of the logic behind these approaches, and that brought me back
to ontology and KR.

*The following is part position paper, part review, and part unification.
We argue that combinatorial generalization must be a top priority for AI to
achieve human-like abilities, and that structured representations and
computations are key to realizing this objective.*
(choir singing)

The graph network looks to me like an ontology. People like to give new
names to old concepts to show that they are doing something new, but to me
it looks like they have just started doing some homework and finally the
penny dropped, as they say.

This looks like good news to me :-)

The research is still just in the realm of ideas. It's in the praxis of
bridging fragmentation where it matters most (reasoning on the web, for
example) that I'd like to see some results.

discuss?
https://www.mdpi.com/journal/systems/special_issues/Artificial_Intelligence_Knowledge_Representation


Pdm

On Thu, Aug 8, 2019 at 5:24 AM Martynas Jusevičius <martynas@atomgraph.com>
wrote:

> Hi,
>
> Has anyone read this paper? https://arxiv.org/abs/1806.01261
> Authors: DeepMind; Google Brain; MIT; University of Edinburgh
>
> I was surprised not to find any mentions of it in my inbox.
>
> The authors conclude:
>
> "[...] Here we explored flexible learning-based approaches which
> implement strong relational inductive biases to capitalize on
> explicitly structured representations and computations, and presented
> a framework called graph networks, which generalize and extend various
> recent approaches for neural networks applied to graphs. Graph
> networks are designed to promote building complex architectures using
> customizable graph-to-graph building blocks, and their relational
> inductive biases promote combinatorial generalization and improved
> sample efficiency over other standard machine learning building
> blocks. [...]"
>
> I have very limited knowledge of ML, but it seems to me that they say
> that an RDF-like directed graph structure is conducive for
> next-generation ML approaches.
>
> Does anyone have any ideas on what the implications could be for
> Linked Data and Knowledge Graphs?
>
> There is also an iterative algorithm given, which computes and updates
> either edge or node or whole graph attributes. I wonder if this could
> be implemented using SPARQL? Not necessarily efficiently, but as a
> proof of concept.
> For example, a program that walks all resources in an RDF graph and
> executes an INSERT/DELETE/WHERE for each of them (with some variable
> like ?this bound to current resource) to compute/update property
> values would be fairly easy to implement in Jena or RDF4J. But would
> it make any sense? :) Maybe something like this already exists?
>
> Martynas
> atomgraph.com
>
>
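The per-resource update loop sketched above can be prototyped even without a
SPARQL engine. Below is a minimal, self-contained Python sketch: plain
(subject, predicate, object) tuples stand in for an RDF graph, and a toy
"sum of neighbour values" rule stands in for the paper's learned update
functions. The `ex:value`/`ex:linksTo`-style names and the aggregation rule
are purely illustrative assumptions, not anything from the paper, Jena, or
RDF4J.

```python
# Sketch of a graph-network-style iterative node update over an
# RDF-like triple set. All URIs and the update rule are made up
# for illustration.

EX = "http://example.org/"

# (subject, predicate, object) triples describing a small cycle.
triples = {
    (EX + "a", EX + "linksTo", EX + "b"),
    (EX + "b", EX + "linksTo", EX + "c"),
    (EX + "c", EX + "linksTo", EX + "a"),
}

# Per-node attribute values (would be literals in RDF).
values = {EX + "a": 1, EX + "b": 2, EX + "c": 3}

def step(triples, values):
    """One synchronous pass: each node's new value adds in the values
    of the nodes it links to. In SPARQL terms this is one
    DELETE/INSERT update per resource, with ?this bound to that
    resource."""
    new_values = {}
    for this in values:
        neighbours = [o for (s, p, o) in triples
                      if s == this and p == EX + "linksTo"]
        new_values[this] = values[this] + sum(values[n] for n in neighbours)
    return new_values

for _ in range(2):  # a couple of iterations of the update
    values = step(triples, values)
print(values)
```

A driver in Jena or RDF4J would play the role of the `for` loop here,
walking the resources and firing a parameterized update for each; the
synchronous "compute all new values, then swap" discipline in `step` is one
design choice that avoids order-dependence within a pass.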

Received on Thursday, 8 August 2019 03:53:51 UTC