AI KR, foundation models explained (talking about slippery things)

Okay, folks, I have been a bit AWOL: I got lost in the dense forest of
understanding while following the AI KR path.
In related discussions, what are foundation models?

If you ask Google (try it as an exercise), the answer points to FMs in ML,
with the term coined at Stanford in 2021, etc.
https://hai.stanford.edu/news/what-foundation-model-explainer-non-experts
There are great resources to be found online, all pointing to ML, but
nobody actually shows you the FM in a tangible form (I remember this
happening a lot with the Semantic Web).
Apparently FMs are not actually a thing at all; they are not "there" in
the way a concrete artifact is. They are more like a dynamic neural
network architecture (no wonder they have been slippery all along) built
by ingesting data from the internet.

*Foundation models are massive neural network-based architectures designed
to process and generate human-like text. They are pre-trained on a
substantial corpus of text data from the internet, allowing them to learn
the intricacies of language, grammar, context, and patterns.*
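
To pin down what "pre-trained on a substantial corpus" means in practice,
here is a minimal sketch of the training objective itself, next-token
prediction. It assumes the Hugging Face transformers library and the
public GPT-2 checkpoint (my choice of illustration, not something from
the explainer):

from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-2 here is just a small, public stand-in for a foundation model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Foundation models are", return_tensors="pt")
# Passing the input ids as labels makes the model compute the causal
# language-modelling loss: how well it predicts each next token.
outputs = model(**inputs, labels=inputs["input_ids"])
print("next-token prediction loss:", outputs.loss.item())

Pre-training is, in essence, minimising that loss over an internet-scale
corpus.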

They are made of layers, attention heads, and parameters.
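
And those layers, heads, and parameters are inspectable. A minimal sketch,
again assuming the Hugging Face transformers library and GPT-2: the
"tangible form" of an FM turns out to be a downloadable configuration plus
a large file of learned weights, whose parts you can count:

from transformers import AutoModel

# Download the checkpoint; GPT-2 stands in for a foundation model.
model = AutoModel.from_pretrained("gpt2")

config = model.config
print("layers:", config.n_layer)           # 12 transformer blocks
print("heads per layer:", config.n_head)   # 12 attention heads
print("parameters:", sum(p.numel() for p in model.parameters()))  # ~124M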


Coming from systems engineering, you know, with a bit of an existential
background, I am making the case that foundation models without an
ontological basis are actually the cause of much of the risk in AI.

In case you were wondering what I am up to and would like to contribute
to this work, please pitch in.

Paola
