Graph-based Transformers and Embedding Vectors

Semantic Web Interest Group,

Hello. I would like to share some ideas for discussion pertaining to graph-based transformers and embedding vectors. Relevant background includes a survey of diachronic word embeddings and semantic shifts [1], work on dynamic contextualized word embeddings [2], and an overview of transformer architectures for graphs [3].

The idea is that historical concepts, and changes to them, could be formally modeled as graphs, and that these graph-based models, together with changes to them, could be transformed into embedding vectors.

For instance, one could: (1) formally model the concept of "air" from before the discovery of oxygen in 1772, (2) obtain an embedding vector for the historical concept, (3) update the model to reflect the discovery of oxygen, and (4) obtain an updated embedding vector for the updated historical concept.

I am in the midst of writing an article about artificial intelligence and intellectual history and am rather interested in these technology topics. I wonder what others here might think about these topics, and whether you might recommend any other relevant publications?


Best regards,
Adam Sobieski
http://www.phoster.com

[1] Kutuzov, Andrey, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. "Diachronic word embeddings and semantic shifts: A survey." arXiv preprint arXiv:1806.03537 (2018).

[2] Hofmann, Valentin, Janet B. Pierrehumbert, and Hinrich Schütze. "Dynamic contextualized word embeddings." arXiv preprint arXiv:2010.12684 (2020).

[3] Min, Erxue, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, and Yu Rong. "Transformer for graphs: An overview from architecture perspective." arXiv preprint arXiv:2202.08455 (2022).

Received on Thursday, 7 September 2023 01:39:48 UTC