- From: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Date: Sun, 31 Dec 2023 02:52:41 +0000 (UTC)
- To: Dave Raggett <dsr@w3.org>
- Cc: W3C AIKR CG <public-aikr@w3.org>, "paoladimaio10@googlemail.com" <paoladimaio10@googlemail.com>
- Message-ID: <1478637360.1551467.1703991161438@mail.yahoo.com>
Dave,

You mentioned that dealing with the issues I raised is just a matter of sufficient training data for agents. You assume that all of this training data can and will be scraped, donated, or otherwise made freely available. This is an erroneous assumption: the European Union AI Act, together with the growing number of linguistic stakeholder groups challenging the use of online data on grounds of cultural, national, linguistic and historical ownership of oral, textual, graphical and audiovisual data and information, will make this training data increasingly bound to usage fees, thus de facto setting off a massive extinction of lesser-spoken languages in technology.

Big AI Tech is starting to resemble Big Pharma, which favors putting money only into R&D that produces products for the biggest possible market segments in a one-size-fits-all format. There are some hopeful signs that this will be addressed, but we know from the current debate about English being the predominant language of both science and science publishing that these linguistic issues will not be easily resolved.

That is why large language models are also likely to fade away, particularly now that their energy and environmental footprints (e.g. water usage for cooling) are being questioned. You cannot simply look at the software models being used and the underlying mathematical modeling. The technology assessment of AI is just getting off the ground and will soon also enter the discussion about regulation of AI.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

On Saturday, December 30, 2023 at 02:46:08 PM AST, Dave Raggett <dsr@w3.org> wrote:

The good news is that the current concept of prompt engineering is likely to fade away as agents get better at understanding the context in which questions are asked, and hence what kinds of responses will be most useful. I am at an early stage of an investigation into how to give cognitive agents a memory of the past, present and future, along with continual learning and explicit sequential deliberative reasoning. This will enable agents to adapt to individual users and tasks, to be effective partners in human-machine collaboration.

On Netflix it is now commonplace to hear mixed-language dialogues. Generative AI will no doubt soon be able to handle this effectively, as it is mainly just a matter of sufficient training data.

One way to deal with hallucinations is the proposer-critic pattern, where one agent critiques the output of another. This would start with deliberative reasoning and over time be "compiled" as the proposer learns from the critic's feedback.

On 30 Dec 2023, at 17:55, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:

I take issue with the term "prompt engineering" because it somehow implies creating a "well formed query" that "prompts" a "well formed input format", leading to an output within the range of scope and intention of the well formed query. But natural language is tricky, and as a polyglot I can assure you that you can make any chatbot hallucinate by language blending.
I remember from my university days how, as a mathematician, I had conversations with philosophers and language students about this language blending, which is, in short, combining common grammatical constructs in one language while switching to the idiomatic style of another, changing tonality and in some cases word order, not unlike in poetry. Current literature on polyglots shows they have cognitive skills that help them better cope with bias and reason more rationally. Unfortunately, Big Internet and AI tech is monolingual and does not want to address these and other linguistic issues.

Prompt engineering is what we would normally consider part of human-computer interaction, and the vast body of scientific literature shows that between computational linguistics and generative AI using large language models lies a field of categories of statistical natural language modeling with inherent biases. We are still decades away from having a C-3PO robot versed in all 7,000-plus human languages. Natural language is sensitive to multiple contexts, and IMHO the current state of the art in generative AI doesn't come close to dealing with this; hence the term "prompt engineering" is catchy but technically nonsense.

Milton Ponson
GSM: +297 747 8280
PO Box 1154, Oranjestad
Aruba, Dutch Caribbean
Project Paradigm: Bringing the ICT tools for sustainable development to all stakeholders worldwide through collaborative research on applied mathematics, advanced modeling, software and standards development

On Saturday, December 30, 2023 at 05:16:20 AM AST, Paola Di Maio <paola.dimaio@gmail.com> wrote:

We received fun intelligent (pseudo-intelligent?) generative demos on this list (by Dave R) that show output but do not describe the prompts. I asked about the prompt and received no reply (recursive empty prompt vector?).

Prompt Engineering is a thing (but it is not new). Good article: https://www.zdnet.com/article/how-to-write-better-chatgpt-prompts/ READ AND DISCUSS

There is however a new emphasis on Generative AI and Natural Language that moves the field on from SQL and the like, which is interesting and, dare I say, important. I may be able to share some lecture notes.

Happiest possible year given the sodden circumstances the world is in.

PDM
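As a rough illustration of the proposer-critic pattern Dave mentions above, the following minimal Python sketch assumes a generic llm(prompt) helper standing in for whatever chat-completion API an agent would use; the function names, loop structure and "APPROVED" convention are illustrative assumptions, not a description of Dave's actual approach.

    # Minimal sketch of a proposer-critic loop (illustrative; assumes a
    # generic llm(prompt) wrapper around whatever chat model is available).

    def llm(prompt: str) -> str:
        # Placeholder for a call to a language model API; assumed, not specified.
        raise NotImplementedError("plug in a chat-completion call here")

    def proposer_critic(task: str, max_rounds: int = 3) -> str:
        # Proposer drafts an answer.
        draft = llm(f"Answer the following task:\n{task}")
        for _ in range(max_rounds):
            # Critic reviews the draft for errors or hallucinations.
            critique = llm(
                "Critique the answer below for factual errors or hallucinations. "
                "Reply with APPROVED if it is acceptable.\n\n"
                f"Task: {task}\nAnswer: {draft}"
            )
            if "APPROVED" in critique:
                break
            # Proposer revises using the critic's feedback.
            draft = llm(
                "Revise the answer using the critique.\n\n"
                f"Task: {task}\nAnswer: {draft}\nCritique: {critique}"
            )
        return draft

Over time, as Dave suggests, the proposer could learn from the accumulated critiques so that the deliberative loop is effectively "compiled" into the model rather than run explicitly each time.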
Received on Sunday, 31 December 2023 02:52:53 UTC