Re: ChatGPT, ontologies and SPARQL

Hello,

The same could be done by taking the dataset dump and putting it in a
database, then applying a data2text interface layer over it, with an ELK
gateway for data transparency. When the database does not have the info,
it can fall back to a canned response: [insert script here]. That is not so
different from what ChatGPT does, which is quite a bit like a database of
information. Nothing impressive, and a lot like standard development.
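A very rough sketch of the pipeline described above, purely for illustration. The `facts` table, its schema, and the canned fallback string are all hypothetical stand-ins (the original leaves the script unspecified), and the ELK gateway is omitted; in practice each query and response would also be logged to Elasticsearch for transparency.

```python
import sqlite3

# Hypothetical canned response standing in for "[insert script here]".
FALLBACK = "Sorry, I have no information about that."

def answer(conn, subject, predicate):
    """Look up a fact and render it as text (a trivial data2text layer)."""
    row = conn.execute(
        "SELECT object FROM facts WHERE subject = ? AND predicate = ?",
        (subject, predicate),
    ).fetchone()
    if row is None:
        return FALLBACK  # the database lacks the info
    # Templated natural-language rendering of the stored triple.
    return f"The {predicate} of {subject} is {row[0]}."

# Example: load a (toy) dataset dump into an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (subject TEXT, predicate TEXT, object TEXT)")
conn.execute("INSERT INTO facts VALUES ('W3C', 'founding year', '1994')")

print(answer(conn, "W3C", "founding year"))  # templated answer from the data
print(answer(conn, "W3C", "headquarters"))   # falls back to the canned reply
```

The point of the sketch is only that a lookup-plus-templating layer already gives "answer if known, scripted reply if not", which is the behaviour being compared to ChatGPT here.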

Thanks,

Adeel



On Tue, 24 Jan 2023 at 09:19, Dave Raggett <dsr@w3.org> wrote:

> Dropping back to AIKR ...
>
> Scaling up language models has been shown to predictably improve
> performance and sample efficiency on a wide range of downstream tasks. This
> paper instead discusses an unpredictable phenomenon that we refer to as
> emergent abilities of large language models. We consider an ability to be
> emergent if it is not present in smaller models but is present in larger
> models. Thus, emergent abilities cannot be predicted simply by
> extrapolating the performance of smaller models. The existence of such
> emergence raises the question of whether additional scaling could
> potentially further expand the range of capabilities of language models.
>
>
> A deeper understanding of how ChatGPT is able to generate its results
> should allow us to devise smaller and more climate friendly systems.
> Practical applications don’t need the vast breadth of knowledge that
> ChatGPT got from scraping most of the web.
>
> A deeper understanding will also facilitate research on fixing major
> limitations of large language models, e.g. continuous learning, integration
> of explicit domain knowledge, metacognition, introspection and better
> explanations that cite provenance, etc.
>
> Dave Raggett <dsr@w3.org>
>

Received on Tuesday, 24 January 2023 10:23:21 UTC