Re: ChatGPT, ontologies and SPARQL

> On 30 Jan 2023, at 09:13, Nicolas Chauvat <nicolas.chauvat@logilab.fr> wrote:
> 
> Hi Xavier,
> 
> On Tue, Jan 24, 2023 at 05:01:31PM +0100, Contact - Cogsonomy wrote:
>> In short, is there a layer of reasoning, or does the language model derive
>> from reasoning? And while we're at it, doesn't our brain do the same?
> 
> As far as I understand, there is zero reasoning in large language
> models, only statistics about the chaining of words (actually, parts
> of words, aka tokens).

Large language models (LLMs) can support chains of reasoning, aka "chain of thought", e.g. via reinforcement learning; Google AI's Minerva, for instance, solves undergraduate science problems. This relies on working memory (network-layer activation vectors) to keep track of goals and sub-goals such as mathematical transformations and arithmetic. ChatGPT can likewise be shown to support plausible inferences, implications, reverse implications, analogies and more, along with supporting rationalisations. Steven Darby's 2022 PhD thesis [1] shows that the lower to middle network layers in LLMs capture information about syntactic structure, whilst the upper layers model the topics and semantics relevant to what text is likely to come next. Further work is needed to better understand how LLMs implement reasoning.
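
To make "chain of thought" concrete, here is a minimal Python sketch of the prompting pattern (a hypothetical illustration, not from Minerva or ChatGPT; the prompt text and the model_complete() placeholder are my own assumptions, not a real API):

    def model_complete(prompt: str) -> str:
        """Placeholder for a call to any LLM text-completion endpoint."""
        raise NotImplementedError

    # Chain-of-thought prompting: the worked example shows the model how to
    # write out intermediate steps, which act as an external working memory
    # for sub-goals and arithmetic.
    prompt = (
        "Q: Pens cost 2 euros for a pack of 3. How much do 12 pens cost?\n"
        "A: Let's think step by step. 12 pens is 12 / 3 = 4 packs. "
        "4 packs at 2 euros each is 4 * 2 = 8 euros. The answer is 8 euros.\n"
        "Q: Apples cost 5 euros for a bag of 4. How much do 20 apples cost?\n"
        "A: Let's think step by step."
    )
    # The model is expected to continue with the intermediate steps
    # (20 / 4 = 5 bags, 5 * 5 = 25 euros) before stating the final answer.
    # answer = model_complete(prompt)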

[1] https://pure.qub.ac.uk/en/studentTheses/interpretable-semantic-representations-from-neural-language-model

Dave Raggett <dsr@w3.org>

Received on Monday, 30 January 2023 15:51:09 UTC