- From: Mike Bergman <mike@mkbergman.com>
- Date: Tue, 8 Nov 2022 12:56:39 -0600
- To: Dave Raggett <dsr@w3.org>
- Cc: W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <4b483319-9e45-758a-9914-908c106b01dd@mkbergman.com>
Hi Dave,

The results of a pre-trained GPT-3 are indeed an index. While you are correct that during the training phase generative models such as these may approach Turing-machine levels of computation, once the vectors and weights have been trained they act essentially as a look-up table of prediction values. They can be fine-tuned for specific tasks, but they still retain the brittleness and the lack of causal grounding and explainability that I mentioned before.

In a review of this topic for Nature [1], the authors summarized:

"GPT-3 is not an artificial general intelligence. It will not, and cannot (for now at least), replace a human interaction that requires humanness. Although GPT-3 performed well on free-form conversation assessments demonstrating reading comprehension, it performed worst on a dataset meant to mimic the dynamic give-and-take of student-teacher interactions, and it also did not score well on multiple choice questions from middle and high school examinations. This makes sense because it does not “know” anything. One of the major limitations of GPT-3 is that it repeats itself semantically, loses coherence over long conversations, and contradicts itself. It would be unrealistic to consider GPT-3 as a stand-in for a healthcare provider or as a proxy in high-stakes interactions, such as a health emergency or an emotionally fraught interaction.

There is compelling promise and serious hype in AI applications that generate natural language. Some of that promise is realistic. Routinizing tedious work for providers could productively improve their work satisfaction and reduce time interacting with computer systems, a well-documented concern. AI NLP applications could navigate complex electronic health record (EHR) systems, automate documentation with human review, prepare orders, or automate other routine tasks."

I also note you refer to 'conventional logic'. That is important, because Peirce was a huge proponent of abductive logic (in addition to deductive and inductive logic), was the first to formulate a trivalent logic and truth tables, and did not always accept the law of excluded middle. There are better logics and knowledge representations that should be considered in combination.
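To make that trivalent point concrete, here is a toy sketch of a Kleene-style three-valued logic, in the spirit of Peirce's trivalent truth tables (the encoding on [0, 1] and the names T/U/F are mine, purely illustrative, not Peirce's notation):

    # Three truth values: T (true), F (false), U (unknown/indeterminate),
    # encoded on [0, 1] so that min, max, and 1-x give the strong Kleene
    # connectives.
    T, U, F = 1.0, 0.5, 0.0

    def NOT(p):
        return 1.0 - p

    def AND(p, q):
        return min(p, q)

    def OR(p, q):
        return max(p, q)

    # The law of excluded middle, p OR NOT p, is no longer a tautology:
    for name, p in [("T", T), ("U", U), ("F", F)]:
        print(f"p={name}: p OR NOT p = {OR(p, NOT(p))}")
    # prints 1.0 for T and F, but 0.5 for U; the middle is not excluded.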
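And to make my earlier 'look-up' point concrete, here is a toy sketch in plain numpy (nothing like GPT-3's real architecture; every name and dimension here is a stand-in) of the sense in which a model with frozen weights is a fixed mapping from context to prediction:

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, DIM = 50, 16
    E = rng.normal(size=(VOCAB, DIM))  # frozen token embeddings
    W = rng.normal(size=(DIM, VOCAB))  # frozen output projection

    def next_token_distribution(context_ids):
        # Fixed weights map a given context to one fixed distribution.
        h = E[context_ids].mean(axis=0)      # toy context encoder (mean pooling)
        logits = h @ W
        exp = np.exp(logits - logits.max())  # numerically stable softmax
        return exp / exp.sum()

    ctx = [3, 14, 7]
    p1 = next_token_distribution(ctx)
    p2 = next_token_distribution(ctx)
    assert np.array_equal(p1, p2)  # same context in, same prediction out
    print("most likely next token id:", int(p1.argmax()))

Fine-tuning changes the table, so to speak, but at inference time the mapping stays fixed and offers no causal account of its outputs.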
Look, I'm not trying to diss GPT or generative models matched with artificial neural networks. Those are choices, and well-performing ones in recent years for certain defined tasks or task scopes.

To go back to my first comment on this thread, I was trying to make two points. First, AI is a subset of KR, because it depends on the representational choices made for conducting the AI. Second, broader understandings of proper logics and representations, as informed by Peirce, offer new options for breaking current ceilings in AI applied to natural language.

Thanks, Mike

[1] Korngiebel, D.M., Mooney, S.D. Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. npj Digit. Med. 4, 93 (2021). https://www.nature.com/articles/s41746-021-00464-x

On 11/8/2022 3:12 AM, Dave Raggett wrote:
> Hi Mike,
>
>> On 7 Nov 2022, at 17:39, Mike Bergman <mike@mkbergman.com> wrote:
>>
>> When we do AI using something like GPT-3 we are making an active
>> choice of how we will represent our knowledge to the computer. For
>> GPT-3 and all massive data statistical models, *that choice limits us
>> to indexes*.
>
> That is not true, as artificial neural networks are equivalent to
> Turing machines in the sense of being able to do whatever computations
> we design them to do, including the ability to store, recall and
> transform information in a vast variety of ways.
>
> A more interesting question is whether vector space representations
> are better suited to dealing with imprecise and imperfect knowledge
> than conventional symbolic logic. This is very likely to be the case
> for systems designed to devise their own knowledge representations as
> they learn from training materials. Emergent knowledge will often be
> far from crisp until it matures, with the need to cast aside half-baked
> ideas in favour of ideas that fare better against the training tasks.
>
> It has long been recognised that intuition often precedes analytical
> progress in mathematics; see, e.g., Henri Poincaré’s "Intuition and
> Logic in Mathematics" from 1905. It makes sense to work on techniques
> to mimic human intuition and System 1 thinking as complementary to
> deliberative, analytical System 2 thinking. You could think of logic
> as the tip of a very large iceberg that is submerged below the surface
> of the sea.
>
> Dave Raggett <dsr@w3.org>

--
__________________________________________
Michael K. Bergman
319.621.5225
http://mkbergman.com
http://www.linkedin.com/in/mkbergman
__________________________________________
Received on Tuesday, 8 November 2022 18:56:57 UTC