- From: Bob Natale <RNATALE@mitre.org>
- Date: Mon, 10 Jun 2024 13:08:24 +0000
- To: Owen Ambur <owen.ambur@verizon.net>, Dave Raggett <dsr@w3.org>, Milton Ponson <rwiciamsd@gmail.com>
- CC: "paoladimaio10@googlemail.com" <paoladimaio10@googlemail.com>, W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <MN2PR09MB471667BBD647C0A852857918A8C62@MN2PR09MB4716.namprd09.prod.outlook.com>
Those AI responses sound exactly like what I’d say to have humans lower their guard if I were a deceitful or deceitfully programmed or trained AI! 😊

Warily,
BobN

From: Owen Ambur <owen.ambur@verizon.net>
Sent: Sunday, June 9, 2024 11:31 PM
To: Dave Raggett <dsr@w3.org>; Milton Ponson <rwiciamsd@gmail.com>
Cc: paoladimaio10@googlemail.com; W3C AIKR CG <public-aikr@w3.org>
Subject: [EXT] Re: AI KR , foundation models explained (talking about slippery things

The second article concludes with this Q&A:

Are mathematicians wasting a lot of time?

Oh, very much so. So much knowledge is somehow trapped in the head of individual mathematicians. And only a tiny fraction is made explicit. But the more we formalize, the more of our implicit knowledge becomes explicit. So there’ll be unexpected benefits from that.

Aren't we all wasting a lot of time stumbling around trying to figure out what we're trying to accomplish? Moreover, if we do think we've figured that out, don't we also waste a lot of time and effort: a) trying to comprehend what is required to achieve our objectives and then b) inefficiently and ineffectively trying to marshal the resources to do so?

The irony is that, by alleviating those two issues, AI will give us even more time to waste. Hopefully, it can also help us figure out how to spend it more wisely in pursuit of our personal and communal objectives.

Those thoughts prompted me to pose these questions to Claude.ai and ChatGPT: Since AI will give us more free time, can it also help us figure out how best to spend it in support of our personal and community values? Might it be in the nature of human beings not to spend our time as efficiently and effectively as we might?

Claude.ai concludes: "... while AI could certainly enhance our ability to utilize free time more purposefully, we shouldn't underestimate the very human factors that could limit perfect adherence or efficiency. Perhaps the ideal is for AI to play more of a supportive, customizable role - expanding our choices, but allowing humans to make the ultimate decisions aligning with their innate tendencies as well. Managing expectations appropriately is key." https://claude.ai/chat/6f3fde24-ffbe-4d7a-b6cd-0869c984271f

Similarly, ChatGPT says: "Ultimately, while AI can significantly assist in optimizing how we spend our time, it is also important to embrace the less efficient aspects of human nature. These aspects often contribute to creativity, relaxation, and personal growth. The goal is not to become perfectly efficient machines but to find a balance that allows for both productivity and the enjoyment of life's spontaneity and imperfections."

I especially like this part of ChatGPT's response: "Connecting Like-Minded Individuals: AI can facilitate networking by connecting you with others who share similar values and interests, fostering collaboration and community building." https://chatgpt.com/share/866e2295-c476-4a28-bcc4-1a5320fbde79

Lord knows, there is lots of room for improvement along those lines. My efforts are documented at https://connectedcommunity.net/ & https://aboutthem.info/.
At the latter, a query reveals that only five of the >5.8K plans currently in the StratML collection explicitly cite "mathematicians" as stakeholders, but 263 reference "mathematics" somewhere in their full text.

Owen Ambur
https://www.linkedin.com/in/owenambur/

On Sunday, June 9, 2024 at 01:24:25 PM EDT, Milton Ponson <rwiciamsd@gmail.com> wrote:

Thanks Paola and Dave for bringing this issue up again.

The argument brought forward by Dave, which I have also mentioned several times in the past, is that we humans are taught three things (at least one may assume so) in school: (1) information about subjects, which may be everyday common themes or knowledge, delivered in carefully crafted packages about domains of discourse and organised into lessons or lectures; (2) methodologies and processes for acquiring, analyzing and synthesizing data and turning these into information and/or knowledge; and (3) at an academic or professional level, structuring that knowledge for internal use or for interdisciplinary use.

It seems that, with all the hype about AI and how to create AGI, we are forgetting the importance of the latter two, which are key to making sure we use as little data as possible to arrive at information or knowledge. Any formal modeling should take this into consideration.

I read an interesting article about expander graphs (https://www.quantamagazine.org/in-highly-connected-networks-theres-always-a-loop-20240607/), which could loosely be used to model how we expand our knowledge and its underlying structures. An article in Scientific American draws attention to what seems an inevitable future in the field of mathematics, i.e. AI becoming a "co-pilot" for proving and structuring knowledge (https://www.scientificamerican.com/article/ai-will-become-mathematicians-co-pilot/). Which again makes the case for formally capturing knowledge in expander graphs that represent ontology-based structuring of knowledge. I am sure I am stating nothing new, but the mathematics behind expander graphs presents a novel approach to knowledge representation.

Milton Ponson
Rainbow Warriors Core Foundation
CIAMSD Institute-ICT4D Program
+2977459312
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean

On Sat, Jun 8, 2024 at 5:00 AM Dave Raggett <dsr@w3.org> wrote:

Training foundation models for LLMs is kind of like getting them to learn about everything all at once, all mixed up. This works thanks to the magic of gradient descent and back propagation, and addresses the challenge that understanding everyday sentences requires a good grasp of common sense knowledge, creating a chicken-and-egg problem.

Back propagation is very slow (look at typical values for the learning rate) and very different from how humans learn. We are able to learn from single examples, and get by on a very tiny fraction of the data that LLMs require for foundation models. Chomsky referred to this as the "poverty of the stimulus". During childhood, our schooling introduces knowledge in a carefully organised way, with new knowledge layered on top of previously learnt knowledge. Our grasp of common sense comes from a blend of everyday experience and what we are taught in school.

In the last ten years AI has come a long way, but we have yet to figure out how to mimic the economies of human learning. I am searching for the means for neural networks to memorise and generalise from sequences using single-shot learning. This means stepping away from back propagation to consider other, more biologically plausible approaches.
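To make that contrast concrete, here is a purely illustrative NumPy sketch (not drawn from the paper mentioned below, with all names and sizes invented): it counts how many full-batch gradient steps a typical small learning rate needs on a toy regression task, then stores a single pattern in a Hopfield-style associative memory with one outer-product "write" and recalls it in a single step.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Back-propagation-style learning: many small gradient steps ---
    X = rng.normal(size=(1000, 16))            # 1000 training examples, 16 features
    true_w = rng.normal(size=16)
    y = X @ true_w                             # targets for a toy linear task
    w = np.zeros(16)
    lr = 1e-3                                  # a typical "small" learning rate
    for steps in range(1, 10001):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the mean squared error
        w -= lr * grad
        if np.linalg.norm(w - true_w) < 0.1:
            break
    print("full-batch gradient steps needed:", steps)

    # --- Single-shot associative memory: one write, one recall ---
    key = rng.choice([-1.0, 1.0], size=64)     # a single +/-1 pattern to memorise
    M = np.outer(key, key)                     # Hebbian outer-product "write"
    np.fill_diagonal(M, 0)
    noisy = key.copy()
    noisy[:8] *= -1                            # corrupt a few entries
    recalled = np.sign(M @ noisy)              # one-step recall
    print("bits recovered:", int((recalled == key).sum()), "of", key.size)

On this toy problem the gradient-descent loop typically needs well over a thousand passes over the data, whereas the associative memory recovers the corrupted pattern after a single write and a single recall step.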
One paper that caught my eye combines slow learning for learning to learn with fast learning for single-shot learning. In essence, this trains the network to learn quickly on a limited set of tasks.

Tomorrow’s AI will be very different from today’s as we gradually master quick learning and deliberative (Type 2) reasoning. Moreover, it will use a fraction of the power consumed by today’s energy-hungry GPUs/TPUs. Sparse spiking neural networks implemented with neuromorphic hardware will mimic the efficiency of the brain. This is also likely to trigger a move away from back propagation.

There is a lot to look forward to.

Cheers,
Dave

On 8 Jun 2024, at 06:03, Paola Di Maio <paola.dimaio@gmail.com> wrote:

Okay, folks, I have been a bit AWOL, got lost in the dense forest of understanding following the AI KR path.

In related discussions, what are foundation models? If you ask Google (exercise), the answer points to FMs in ML, starting with Stanford in 2018, etc. etc.
https://hai.stanford.edu/news/what-foundation-model-explainer-non-experts

Great resources are to be found online, all pointing to ML, and nobody actually shows you what an FM is in tangible form (I remember this happened a lot with SW). Apparently FMs are not actually a thing at all; they are more like a dynamic neural network architecture (no wonder they have been slippery all along) that is built by ingesting data from the internet.

Foundation models are massive neural network-based architectures designed to process and generate human-like text. They are pre-trained on a substantial corpus of text data from the internet, allowing them to learn the intricacies of language, grammar, context, and patterns. They are made of layers, heads and parameters.

Coming from systems engineering, you know, with a bit of an existential background, I am making the case that foundational models without an ontological basis are actually the cause of much risk in AI.

In case you people were wondering what I am up to, and would like to contribute to this work, please pitch in.

Paola

Dave Raggett <dsr@w3.org>
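As a rough illustration of the "layers, heads and parameters" Paola describes, the following NumPy sketch implements one multi-head self-attention layer of the kind that foundation models stack many times over. The dimensions are toy values chosen only for illustration; the code is a hypothetical sketch, not taken from any of the cited articles.

    import numpy as np

    d_model, n_heads, seq_len = 64, 4, 10      # toy sizes; real FMs use thousands
    d_head = d_model // n_heads

    rng = np.random.default_rng(0)
    # Parameters of this layer: one projection matrix each for queries, keys,
    # values and the output recombination.
    Wq, Wk, Wv, Wo = (rng.normal(scale=0.02, size=(d_model, d_model)) for _ in range(4))

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def attention_layer(x):
        """One layer: split queries/keys/values across heads, attend, recombine."""
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        outputs = []
        for h in range(n_heads):                       # each head attends separately
            sl = slice(h * d_head, (h + 1) * d_head)
            scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_head)
            outputs.append(softmax(scores) @ v[:, sl])
        return np.concatenate(outputs, axis=-1) @ Wo   # recombine the heads

    x = rng.normal(size=(seq_len, d_model))            # stand-in for token embeddings
    print("output shape:", attention_layer(x).shape)
    print("parameters in this one layer:", sum(w.size for w in (Wq, Wk, Wv, Wo)))

A real foundation model stacks dozens of such layers, interleaved with feed-forward blocks, which is how the parameter count climbs into the billions.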
Received on Monday, 10 June 2024 13:08:49 UTC