Neural Networks and Interpreting Natural Language into Semantics

Semantic Web Interest Group,

Hello. I was brainstorming and had some ideas that I would like to share about the potential applicability of sequence-to-sequence models to natural-language interpretation. Such models could translate input sequences of natural language into output sequences of virtual machine instructions with which to construct semantic representations.
We can envision a person sitting in a room, processing an input natural-language sentence in an online manner, one word at a time, while producing output on a piece of paper: a set of predicate calculus expressions, a semantic graph, or content in another knowledge representation format. This process could be described as a cognitive task, a procedure, a workflow, or as a virtual machine with its own set of simpler procedures or instructions.
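To make the virtual-machine framing concrete, here is a minimal, purely illustrative sketch in Python. The instruction set (NODE, EDGE), the identifiers, and the example program are all invented for this sketch; the actual instruction set would be a design decision:

# A minimal sketch. The instruction set ("NODE", "EDGE") and the example
# program are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class SemanticVM:
    """Executes simple instructions that incrementally build a semantic graph."""
    nodes: dict = field(default_factory=dict)   # node id -> label
    triples: set = field(default_factory=set)   # (subject, predicate, object)

    def execute(self, instruction):
        op, *args = instruction
        if op == "NODE":            # NODE <id> <label>: introduce an entity or event
            node_id, label = args
            self.nodes[node_id] = label
        elif op == "EDGE":          # EDGE <subj> <pred> <obj>: assert a relation
            subj, pred, obj = args
            self.triples.add((subj, pred, obj))
        else:
            raise ValueError(f"unknown instruction: {op}")

    def run(self, program):
        for instruction in program:
            self.execute(instruction)
        return self.triples

# "The cat chased the mouse" might correspond to a program such as:
program = [
    ("NODE", "e1", "cat"),
    ("NODE", "e2", "mouse"),
    ("NODE", "ev1", "chase"),
    ("EDGE", "ev1", "agent", "e1"),
    ("EDGE", "ev1", "patient", "e2"),
]
print(SemanticVM().run(program))

Richer instruction sets could, of course, cover quantification, negation, coreference, and so on.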
Neural networks could transform input sequences of natural language into output sequences of virtual machine instructions such that these instructions, when performed or executed, would construct sets of predicate calculus expressions, semantic graphs, or content in other knowledge representation formats.
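Continuing the sketch, a sequence-to-sequence model would emit a flat sequence of instruction tokens, which would then be grouped back into instructions and executed. The model itself is stubbed out below; interpret() is only a placeholder for a trained encoder-decoder, and the token vocabulary and arities are assumptions carried over from the sketch above:

# Assumes the SemanticVM class from the previous sketch.
def parse_program(tokens):
    """Group a flat decoder output back into executable instructions."""
    arity = {"NODE": 2, "EDGE": 3}     # arguments expected after each opcode
    program, i = [], 0
    while i < len(tokens):
        op = tokens[i]
        n = arity[op]
        program.append((op, *tokens[i + 1:i + 1 + n]))
        i += 1 + n
    return program

def interpret(sentence):
    """Placeholder for a trained sequence-to-sequence model (an assumption)."""
    # A real model would encode the sentence and decode instruction tokens.
    return ["NODE", "e1", "cat", "NODE", "e2", "mouse", "NODE", "ev1", "chase",
            "EDGE", "ev1", "agent", "e1", "EDGE", "ev1", "patient", "e2"]

tokens = interpret("The cat chased the mouse.")
print(SemanticVM().run(parse_program(tokens)))

Training data would then consist of natural-language sentences paired with linearized instruction sequences.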
In the event of linguistic ambiguity, a desirable feature would be corresponding ambiguity in the output instructions: multiple output sequences could be retrieved, each of which, when performed or executed, would construct a distinct semantic representation, one per interpretation, as expected.
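For ambiguity, a decoding strategy that returns several candidate sequences (e.g., beam search or sampling) could yield one instruction program per reading; each program, when executed, would construct its own semantic graph. The k_best_interpretations() function below is again only a placeholder, with two hand-written readings of a familiar attachment-ambiguity example standing in for real model output:

# Assumes the SemanticVM class and parse_program() helper from the sketches above.
def k_best_interpretations(sentence, k=2):
    """Placeholder returning k candidate instruction-token sequences."""
    # Two readings of "I saw the man with the telescope.":
    seeing_with_telescope = [
        "NODE", "e1", "man", "NODE", "e2", "telescope", "NODE", "ev1", "see",
        "EDGE", "ev1", "patient", "e1", "EDGE", "ev1", "instrument", "e2"]
    man_has_telescope = [
        "NODE", "e1", "man", "NODE", "e2", "telescope", "NODE", "ev1", "see",
        "EDGE", "ev1", "patient", "e1", "EDGE", "e1", "has", "e2"]
    return [seeing_with_telescope, man_has_telescope][:k]

for tokens in k_best_interpretations("I saw the man with the telescope."):
    print(SemanticVM().run(parse_program(tokens)))   # one graph per interpretation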
Thank you. Any thoughts on these ideas, or on other ways that neural networks could interpret natural language into knowledge representation formats, e.g., knowledge graphs?

Best regards,
Adam Sobieski

Received on Friday, 8 April 2022 18:05:39 UTC