Re: Graph-based Transformers and Embedding Vectors

Semantic Web Interest Group,

For those interested in these topics, here is an excellent resource: http://rdf2vec.org/ !
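
For anyone who wants a concrete feel for the approach, below is a minimal sketch of the RDF2Vec idea – random walks over an RDF graph fed to Word2Vec, yielding one embedding vector per entity. It assumes a local Turtle file named example.ttl and the rdflib and gensim packages; it is only an illustration, not the reference implementation from rdf2vec.org.

import random
from rdflib import Graph, URIRef
from gensim.models import Word2Vec

# Load an RDF graph (example.ttl is a placeholder file name).
g = Graph().parse("example.ttl", format="turtle")

def random_walk(start, depth=4):
    """Walk outgoing triples from `start`, recording URI tokens along the way."""
    walk, node = [str(start)], start
    for _ in range(depth):
        triples = list(g.triples((node, None, None)))
        if not triples:
            break
        _, p, o = random.choice(triples)
        walk.extend([str(p), str(o)])
        if not isinstance(o, URIRef):
            break
        node = o
    return walk

entities = [s for s in set(g.subjects()) if isinstance(s, URIRef)]
walks = [random_walk(e) for e in entities for _ in range(10)]

# Each walk is treated as a "sentence"; Word2Vec learns one vector per URI token.
model = Word2Vec(sentences=walks, vector_size=128, window=5, min_count=1, epochs=10)
vector = model.wv[str(entities[0])]  # embedding vector for one entity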


Best regards,
Adam Sobieski
http://www.phoster.com

________________________________
From: Adam Sobieski <adamsobieski@hotmail.com>
Sent: Monday, September 18, 2023 8:00 AM
To: semantic-web@w3.org <semantic-web@w3.org>
Subject: Re: Graph-based Transformers and Embedding Vectors

Semantic Web Interest Group,

Also, towards processing large corpora of historical and contemporary texts – and the concepts in them – into knowledge graphs, here are (1) a 2022 paper presenting a novel end-to-end multi-stage knowledge graph generation system from textual input (text-to-graph), and (2) a 2022 paper indicating how story comprehension by large language models can be enhanced with knowledge graphs.


Knowledge Graph Generation From Text (2022) (https://arxiv.org/abs/2211.10511)
Igor Melnyk, Pierre Dognin, Payel Das

In this work we propose a novel end-to-end multi-stage Knowledge Graph (KG) generation system from textual inputs, separating the overall process into two stages. The graph nodes are generated first using pretrained language model, followed by a simple edge construction head, enabling efficient KG extraction from the text. For each stage we consider several architectural choices that can be used depending on the available training resources. We evaluated the model on a recent WebNLG 2020 Challenge dataset, matching the state-of-the-art performance on text-to-RDF generation task, as well as on New York Times (NYT) and a large-scale TekGen datasets, showing strong overall performance, outperforming the existing baselines. We believe that the proposed system can serve as a viable KG construction alternative to the existing linearization or sampling-based graph generation approaches. Our code can be found at https://github.com/IBM/Grapher .

Enhanced Story Comprehension for Large Language Models through Dynamic Document-based Knowledge Graphs (2022) (https://ojs.aaai.org/index.php/AAAI/article/view/21286)
Berkeley R. Andrus, Yeganeh Nasiri, Shilong Cui, Benjamin Cullen, and Nancy Fulda

Large transformer-based language models have achieved incredible success at various tasks which require narrative comprehension, including story completion, answering questions about stories, and generating stories ex nihilo. However, due to the limitations of finite context windows, these language models struggle to produce or understand stories longer than several thousand tokens. In order to mitigate the document length limitations that come with finite context windows, we introduce a novel architecture that augments story processing with an external dynamic knowledge graph. In contrast to static commonsense knowledge graphs which hold information about the real world, these dynamic knowledge graphs reflect facts extracted from the story being processed. Our architecture uses these knowledge graphs to create information-rich prompts which better facilitate story comprehension than prompts composed only of story text. We apply our architecture to the tasks of question answering and story completion. To complement this line of research, we introduce two long-form question answering tasks, LF-SQuAD and LF-QUOREF, in which the document length exceeds the size of the language model’s context window, and introduce a story completion evaluation method that bypasses the stochastic nature of language model generation. We demonstrate broad improvement over typical prompt formulation methods for both question answering and story completion using GPT-2, GPT-3 and XLNet.


Best regards,
Adam Sobieski
http://www.phoster.com

________________________________
From: Adam Sobieski <adamsobieski@hotmail.com>
Sent: Saturday, September 16, 2023 4:00 AM
To: semantic-web@w3.org <semantic-web@w3.org>
Subject: Re: Graph-based Transformers and Embedding Vectors

Semantic Web Interest Group,

Introduction
Just as users today can provide descriptive text to obtain AI-generated images, users may soon be able to provide natural-language descriptions of concepts to obtain focal nodes in semantic graphs.

In this regard, the following subtopics are relevant (a rough interface sketch follows below):

  1.  encoding focal nodes occurring in vanilla, fuzzy, or neutrosophic semantic graphs into embedding vectors
  2.  decoding embedding vectors back into focal nodes occurring in vanilla, fuzzy, or neutrosophic semantic graphs
  3.  exploring how focal nodes occurring in vanilla, fuzzy, or neutrosophic semantic graphs could be mapped to natural-language descriptions of concepts
  4.  exploring how natural-language descriptions of concepts could be mapped to focal nodes occurring in vanilla, fuzzy, or neutrosophic semantic graphs

Above, "natural-language description" is intended to mean one or more sentences of text describing a historical or contemporary concept, e.g.:

  1.  "air" as it was understood in England in the middle of the 18th century
  2.  "air" as it was understood shortly after the discovery of oxygen
  3.  "mass" in the Newtonian model
  4.  "mass" in the Einsteinian model

Multiple Scalars and/or Tuples on Nodes and Edges
Nodes and edges may additionally have relevance scores, scalars between 0.0 and 1.0. Graphs with such relevance scores could be more readily visualized: as envisioned, focal nodes would have the maximum relevance score of 1.0, and visualizations would initially be centered on them. Placing relevance scores on nodes and edges might also be useful when generating natural language from semantic graphs.

Purposes for placing scalars or neutrosophic tuples on nodes and edges include, but are not limited to:

  1.  indicating a node's or edge's degree of membership in a graph
  2.  indicating a node's or edge's degree of membership in another set of existing things or true relations
  3.  indicating a node's or edge's contextual relevance, e.g., for visualization or natural-language generation

These purposes are not mutually exclusive. Nodes and edges could have multiple scalars and/or neutrosophic tuples.
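
As a small illustration of such multiple annotations, the sketch below (assuming the networkx package) attaches both a relevance scalar and a neutrosophic (truth, indeterminacy, falsity) tuple to nodes and edges; the attribute names and the example concepts are illustrative only.

import networkx as nx

G = nx.DiGraph()
# Each node and edge carries a relevance scalar in [0.0, 1.0] and a
# neutrosophic (truth, indeterminacy, falsity) membership tuple.
G.add_node("air_1750", relevance=1.0, membership=(0.9, 0.1, 0.0))    # focal node
G.add_node("phlogiston", relevance=0.7, membership=(0.6, 0.3, 0.4))
G.add_edge("air_1750", "phlogiston", label="contains",
           relevance=0.7, membership=(0.5, 0.4, 0.3))

# A visualization could center on the focal node (relevance == 1.0) and scale
# node sizes and edge widths by relevance.
focal_nodes = [n for n, d in G.nodes(data=True) if d["relevance"] == 1.0]
node_sizes = [300 * d["relevance"] for _, d in G.nodes(data=True)]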

Design Constraints on Generative AI Outputs
Generative semantic graph design might involve hybrid AI systems that apply rule-based constraints to the outputs of generative AI systems. For example, generated semantic graphs could be constrained to be logically possible, plausible, or valid.
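
As one concrete form such a rule-based constraint could take, a candidate graph emitted by a generative model (serialized as RDF) might be validated against SHACL shapes before being accepted. The sketch below assumes the rdflib and pyshacl packages; the shapes and the generated triples are toy placeholders.

from rdflib import Graph
from pyshacl import validate

# Toy stand-in for output produced by a generative model.
generated_turtle = """
@prefix ex: <http://example.org/> .
ex:air_1750 a ex:Concept ; ex:contains ex:phlogiston .
"""

# Toy hand-authored constraints: every ex:Concept must have at least one ex:contains edge.
shapes_turtle = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:ConceptShape a sh:NodeShape ;
    sh:targetClass ex:Concept ;
    sh:property [ sh:path ex:contains ; sh:minCount 1 ] .
"""

candidate = Graph().parse(data=generated_turtle, format="turtle")
shapes = Graph().parse(data=shapes_turtle, format="turtle")

conforms, _, report_text = validate(candidate, shacl_graph=shapes, inference="rdfs")
if not conforms:
    # Reject or repair: e.g., feed the validation report back to the generator
    # as part of the next prompt, or drop the offending triples.
    print(report_text)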

Semantics as a Modality
Might multimodal embedding spaces (see also: https://github.com/facebookresearch/multimodal) come to include semantics as a modality? Were they to, AI research could better explore bidirectional mappings between historical and contemporary language and semantic graphs. Also interesting to explore are bidirectional mappings between the modalities of semantic graphs and visual imagery and imagination (see also: https://cs.stanford.edu/people/ranjaykrishna/sgrl/ , https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00325/full).
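
One way semantics could be treated as a modality is a CLIP-style contrastive alignment between a graph encoder and a text encoder projected into a shared embedding space. The PyTorch sketch below is purely illustrative: the "encoders" are stand-in linear layers (a real system might pair a graph neural network with a language model), and it does not reflect any particular project's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

graph_proj = nn.Linear(128, 64)   # projects graph-encoder outputs into the shared space
text_proj = nn.Linear(768, 64)    # projects text-encoder outputs into the shared space

def contrastive_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (graph, text) pairs."""
    g = F.normalize(graph_proj(graph_emb), dim=-1)
    t = F.normalize(text_proj(text_emb), dim=-1)
    logits = g @ t.T / temperature
    targets = torch.arange(len(g))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Usage with random stand-in features for a batch of 8 matched (graph, text) pairs.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 768))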

Conclusion
Thank you. Any comments, feedback, or suggestions with respect to these ideas?


Best regards,
Adam Sobieski
http://www.phoster.com


P.S.: Here are some more relevant publications:

Machine Learning on Graphs: A Model and Comprehensive Taxonomy (https://arxiv.org/abs/2005.03675)
Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy

There has been a surge of recent interest in learning representations for graph-structured data. Graph representation learning methods have generally fallen into three main categories, based on the availability of labeled data. The first, network embedding (such as shallow graph embedding or graph auto-encoders), focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, leverages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. However, despite the popularity of these areas there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between graph neural networks, network embedding and graph regularization models. We propose a comprehensive taxonomy of representation learning methods for graph-structured data, aiming to unify several disparate bodies of work. Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs (e.g. GraphSage, Graph Convolutional Networks, Graph Attention Networks), and unsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc) into a single consistent approach. To illustrate the generality of this approach, we fit over thirty existing methods into this framework. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods, and enables future research in the area.

Dynamic Network Embedding Survey (https://arxiv.org/abs/2103.15447)
Guotong Xue, Ming Zhong, Jianxin Li, Jia Chen, Chengshuai Zhai, Ruochen Kong

Since many real world networks are evolving over time, such as social networks and user-item networks, there are increasing research efforts on dynamic network embedding in recent years. They learn node representations from a sequence of evolving graphs but not only the latest network, for preserving both structural and temporal information from the dynamic networks. Due to the lack of comprehensive investigation of them, we give a survey of dynamic network embedding in this paper. Our survey inspects the data model, representation learning technique, evaluation and application of current related works and derives common patterns from them. Specifically, we present two basic data models, namely, discrete model and continuous model for dynamic networks. Correspondingly, we summarize two major categories of dynamic network embedding techniques, namely, structural-first and temporal-first that are adopted by most related works. Then we build a taxonomy that refines the category hierarchy by typical learning models. The popular experimental data sets and applications are also summarized. Lastly, we have a discussion of several distinct research topics in dynamic network embedding.
