Re: AI KR, foundation models explained (talking about slippery things)

What an amazing day to be out in such a discussion!

Thank you, Paola and Dave, for bringing this to light in the middle of
all this hype.

During a harvest, the interest of most will sometimes converge
on the next harvest, as this one is already achieved, while other
groups will be more interested in using such a success to carve their names
into the ground. How many years did AI need in order to evolve
from expert systems, semantic networks, artificial neurons, knowledge
representation, ontologies, ..., to knowledge graphs?

While Foundation Ontologies are more interested in a solid representation
of grounding knowledge, in robust axiomatic systems that are sound and
correct, the term 'Foundation' has recently been transported to other,
more hyped areas that involve symbolic language and incorporate deep
learning. As the right mix of parameters, learning, and data is not
easy to achieve, when it is achieved, such a model will certainly be used
in the construction of others, thus serving as their foundation. Even so,
behind the scenes of every learning model there is the need for a *somehow
robust* representation of the knowledge involved.

The most interesting part of this evolutionary trajectory, in my humble
opinion, is how the technology being developed comes closer and closer
to representations of our perceptions of ourselves. Maybe in the 1940s it
wasn't so clear to everyone how our mind was divided, perhaps into a
short-term memory, a more durable one, and a unit of processing.
Maybe it was clear to those seekers who studied Freud or other sources of
knowledge about the Conscience, but, for sure, it was absolutely not
clear how to transpose such knowledge into a mathematical system, and then
toward materialization as hardware.

If we are living through this hype of million-parameter systems trained to
remember what a cup looks like after having figured out the patterns of ten
thousand other cups, what will it be like when a machine learns what a cup
is without ever seeing one?
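To give a rough feel for how layers, heads, and parameters multiply into those millions, here is a minimal back-of-the-envelope sketch. The formula and the configuration numbers are illustrative assumptions for a generic transformer-style model, not those of any real system:

```python
# Rough parameter count for a generic transformer-style model.
# All numbers below are illustrative assumptions, not a real model's config.

def transformer_param_count(vocab_size, d_model, n_layers, d_ff):
    """Approximate weight count: token embeddings plus, per layer,
    the four attention projections (Q, K, V, output) and the two
    feed-forward matrices. Biases and layer norms are ignored."""
    embeddings = vocab_size * d_model
    attention = 4 * d_model * d_model   # Q, K, V, and output projections
    feed_forward = 2 * d_model * d_ff   # up- and down-projection
    return embeddings + n_layers * (attention + feed_forward)

# Even a toy configuration reaches the tens of millions quickly.
# Note: the number of attention heads only partitions d_model
# (d_head = d_model / n_heads), so it does not change this count.
total = transformer_param_count(vocab_size=32_000, d_model=512,
                                n_layers=8, d_ff=2048)
print(total)  # 16,384,000 + 8 * (1,048,576 + 2,097,152) = 41,549,824
```

Scaling d_model or n_layers in this sketch shows how quickly the counts grow from millions toward billions.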




On Sat, Jun 8, 2024 at 02:08, Paola Di Maio <paola.dimaio@gmail.com>
wrote:

> Okay, folks, I have been a bit AWOL; I got lost in the dense forest of
> understanding while following the AI KR path.
> In related discussions, what are foundation models?
>
> If you ask Google (exercise), the answer points to FM in ML, starting with
> Stanford in 2018, etc.
> https://hai.stanford.edu/news/what-foundation-model-explainer-non-experts
> Great resources are to be found online, all pointing to ML, and nobody
> actually showing you the FM in a tangible form (I remember this happened
> a lot with the SW). Apparently,
> FMs are actually not a tangible thing; they are not there at all.
> They are like a dynamic neural network architecture (no wonder they have
> been slippery all along) which is built by ingesting
> data from the internet.
>
> *Foundation models are massive neural network-based architectures designed
> to process and generate human-like text. They are pre-trained on a
> substantial corpus of text data from the internet, allowing them to learn
> the intricacies of language, grammar, context, and patterns.*
>
> They are made of layers, heads and parameters
>
>
> Coming from systems engineering, you know, with a bit of an existential
> background, I am making the case
> that foundation models without an ontological basis are actually the cause
> of much risk in AI.
>
> In case you people were wondering what I am up to and would like to
> contribute to this work,
> please pitch in.
>
> Paola
>


-- 
Gabriel Lopes
*Interoperability as jam sessions!*
*Each system emanating the music that crosses itself, instrumentalizing
scores and ranges...*
*... of Resonance, vibrations, information, data, symbols, ..., Notes.*

*How interoperable are we with the Music the World continuously offers to
our senses?*
*Maybe it depends on our foundations...?*

Received on Friday, 14 June 2024 03:48:34 UTC