Re: Generative AI and incremental learning

Hello everyone,

Thank you, Mr. Raggett, for keeping these essential topics alive beyond the
trends, where hype can sometimes obscure the need for maturity and for a real
understanding of the semantics behind them.

In my view, the market's new lenses are pointed straight at the billions,
turning whatever growth is afloat into slogans for new marketing, without
even reading the missing subtitles of the plot.

I'm currently working on a self-generative axiomatic system intended to learn
by questioning itself, developing its inferences step by step. It is still in
the formalization phase, but I'm aiming to start from the smallest possible
set of axioms, preferably a single one. For example, a machine that discovers
the number "1" and the possibility of "adding" could, by asking itself whether
it can add "1" to another "1", discover the possibility of "2", and so on,
brick by brick, until it can visualize a network of numbers, later translated
into concepts and then into semantics and words.
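
To make that loop concrete, here is a minimal Python sketch of the idea as I
currently picture it; all names and structures are purely illustrative and
not the actual formalization:

    from dataclasses import dataclass, field

    @dataclass
    class SemanticNetwork:
        nodes: set = field(default_factory=set)
        edges: list = field(default_factory=list)   # (source, relation, target)

        def add_fact(self, source, relation, target):
            self.nodes.update({source, target})
            self.edges.append((source, relation, target))

    def bootstrap_numbers(steps: int = 5) -> SemanticNetwork:
        net = SemanticNetwork()
        net.add_fact("1", "is_a", "number")          # the single seed axiom
        known = [1]
        for _ in range(steps):
            # The machine questions itself: "can I add 1 to the largest
            # number I currently know, and what do I get?"
            current = known[-1]
            successor = current + 1                  # the "adding" possibility
            net.add_fact(str(current), "plus_1_gives", str(successor))
            net.add_fact(str(successor), "is_a", "number")
            known.append(successor)
        return net

    if __name__ == "__main__":
        for fact in bootstrap_numbers().edges:
            print(fact)   # ('1', 'plus_1_gives', '2'), ('2', 'is_a', 'number'), ...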

As an admirer of your work, I'm looking forward to your next reports.

On Thu, 16 May 2024 at 12:37, Dave Raggett <dsr@w3.org> wrote:

> Today’s generative AI is amazing and improving rapidly, but despite the
> hype it still faces major challenges:  a lack of incremental learning, a
> propensity for hallucinations (plausible guesses), it is easily distracted,
> and often weak on semantic consistency.
>
> It is fruitless to try to directly compete with the large well-funded AI
> market leaders such as OpenAI. Much better is to work on research
> challenges they aren’t addressing.  That includes incremental learning and
> deliberative reasoning.
>
> Understanding and mimicking how humans learn incrementally is really tough
> and needs to be broken down into smaller more achievable challenges.  It
> looks like a more complex architecture will be required compared to that
> used for large language models (LLMs).
>
> One of the hurdles for this work is to identify a means to bootstrap
> learning, as the more you know the easier it is to learn new things. LLMs
> learn about everything all at the same time. Incremental learning requires
> a more structured approach, including the means to try things out. Basic
> numeracy and simple math look like a promising domain for this as it
> minimises the dependency on common sense knowledge.
>
> I am still thinking about some suitable initial steps. These could include
> work on single-shot learning to recognise sequences and to generalise
> across them, as well as preliminary work on sequential language processing
> and short term memory.
>
> To ground this a little, imagine watching a drama series with subtitles.
> If you step away and return minutes or even an hour or two later to a point
> before where you paused the video, you will recognise that you’ve already
> seen the subtitle text. Your ability to do so depends on the time interval
> and what you were doing in the interim. Your ability to generalise depends
> on grounding your understanding. This applies to learning a human language
> and likewise to learning basic math.
>
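A toy, reader-side illustration of the recognition effect described in the
paragraph above (not part of Dave's proposal): assume recognition confidence
simply decays exponentially with the time elapsed since a subtitle line was
last seen; the half-life and threshold below are invented parameters.

    import math
    import time

    class SubtitleMemory:
        def __init__(self, half_life_s: float = 1800.0, threshold: float = 0.3):
            self.half_life_s = half_life_s    # how quickly the memory fades
            self.threshold = threshold        # minimum confidence to "recognise"
            self.seen = {}                    # subtitle text -> time last seen

        def watch(self, text: str, now: float):
            self.seen[text] = now

        def recognises(self, text: str, now: float) -> bool:
            last = self.seen.get(text)
            if last is None:
                return False
            elapsed = now - last
            confidence = math.exp(-math.log(2) * elapsed / self.half_life_s)
            return confidence >= self.threshold

    # A line seen 20 minutes ago is still recognised; after 3 hours it is not.
    mem = SubtitleMemory()
    t0 = time.time()
    mem.watch("I have already seen this line", t0)
    print(mem.recognises("I have already seen this line", t0 + 20 * 60))   # True
    print(mem.recognises("I have already seen this line", t0 + 3 * 3600))  # False
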
> I hope to be able to report further progress over the next few months.
>
> All the best,
>
> Dave Raggett <dsr@w3.org>
>

-- 
Gabriel Lopes
*Interoperability as Jam sessions!*
*Each system emanating the music that crosses itself, instrumentalizing
scores and ranges...*
*... of Resonance, vibrations, information, data, symbols, ..., Notes.*

*How interoperable are we with the Music the World continuously offers to
our senses?*
*Maybe it depends on our foundations...?*

Received on Sunday, 19 May 2024 10:23:17 UTC