Re: ChatGPT, ontologies and SPARQL

Dropping back to AIKR ...

> Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence raises the question of whether additional scaling could potentially further expand the range of capabilities of language models.
(Wei et al., “Emergent Abilities of Large Language Models”, arXiv:2206.07682)

A deeper understanding of how ChatGPT generates its results should allow us to devise smaller and more climate-friendly systems. Practical applications don’t need the vast breadth of knowledge that ChatGPT acquired by scraping most of the web.

A deeper understanding will also facilitate research on fixing major limitations of large language models, e.g. continual learning, integration of explicit domain knowledge, metacognition, introspection, and better explanations that cite provenance.
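To make the last two points concrete, and to connect back to the SPARQL theme of this thread, here is a minimal Python sketch of one possible approach: retrieve explicit domain knowledge from a SPARQL endpoint (Wikidata’s public endpoint here) and pack the returned statements, with their entity URIs as provenance, into a prompt for a language model. The query, endpoint choice, and function names are illustrative assumptions on my part, not an agreed design.

import requests

# Illustrative sketch (query and names are assumptions, not an agreed
# design): ground a language model's answer in explicit domain
# knowledge fetched over SPARQL, keeping the entity URIs so the final
# explanation can cite its provenance.

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q11344 .        # instances of "chemical element"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 5
"""

def fetch_facts(query: str) -> list[dict]:
    """Run a SPARQL query and return the JSON result bindings."""
    resp = requests.get(
        ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "aikr-provenance-demo/0.1"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

def build_prompt(question: str, facts: list[dict]) -> str:
    """Prefix the question with retrieved facts and their URIs, asking
    the model to answer only from those cited statements."""
    fact_lines = "\n".join(
        f"- {f['itemLabel']['value']} <{f['item']['value']}>"
        for f in facts
    )
    return (
        "Answer using only the facts below, citing their URIs:\n"
        f"{fact_lines}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    prompt = build_prompt("Name some chemical elements.",
                          fetch_facts(QUERY))
    print(prompt)  # pass this prompt to a (small) language model

The point of the sketch is that the breadth of knowledge lives in the knowledge graph rather than in the model’s weights, so a much smaller model could supply the fluency while the graph supplies the facts and the provenance.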

Dave Raggett <dsr@w3.org>

Received on Tuesday, 24 January 2023 09:19:08 UTC