- From: Dave Raggett <dsr@w3.org>
- Date: Tue, 1 Nov 2022 09:22:16 +0000
- To: ProjectParadigm-ICT-Program <metadataportals@yahoo.com>
- Cc: W3C AIKR CG <public-aikr@w3.org>, Paola Di Maio <paoladimaio10@gmail.com>
- Message-Id: <EC17BFA7-9853-4C88-9D88-4BF6709873B8@w3.org>
> On 31 Oct 2022, at 22:21, ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
>
> Let me start with drawing the background for my observations.

Thanks for doing that.

> https://www.zdnet.com/article/ai-true-goal-may-no-longer-be-intelligence/

Which notes that the people whose job it is to apply today's AI to business needs don't want to ask hard questions; they merely want things to run smoothly. No great surprise there, as they would be penalised for working on research rather than on business applications.

LeCun said "I think it's entirely possible that we'll have Level 5 autonomous cars without common sense", which I very much disagree with; see the Bloomberg assessment:

https://www.bloomberg.com/news/features/2022-10-06/even-after-100-billion-self-driving-cars-are-going-nowhere

I believe that:

- Handcrafted knowledge doesn't scale and is brittle when it comes to the unexpected
- Deep learning scales, but is also brittle, and requires huge datasets for training
- Humans are very good at generalising from few examples by seeking causal explanations based upon prior knowledge
- Humans are also good at reasoning using chains of plausible inferences, along with metacognition
- We need research focussed on extending artificial neural networks to support human-like learning and reasoning
- At the same time, we should also explore the scalability of machine learning for symbolic representations of knowledge

> I am well aware that RDF, predicate calculus and higher dimensional vector spaces provide forms of knowledge representation, but there are many more forms possible, and the problem is that so many different fields of research are converging on knowledge representation, each with their own paradigms and theoretical models, that we have yet to arrive at a common ground and a way to interchange these, and standardize these.

I would avoid premature standardisation; likewise, pressure to agree on a common ground will discourage independent thinking.

> knowledge representation that combines natural language, artificial languages of mathematics and logic, semiotics, information theory, representation theory and formal theories of observation, perception, reasoning and decision making.

That sounds like warm words and handwaving rather than a scientific argument. I think it is more productive to describe measurable aims in relation to perception, cognition and action, as ultimately we care more about capabilities than about implementations.

Is anyone on this list interested in concrete questions for knowledge engineering? As an example, I am currently working on reasoning over natural language semantics in relation to a person's age.

Dave Raggett <dsr@w3.org>
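To make the age example concrete: one way to frame it is as defeasible rules that map natural-language statements about a person to an estimated age or age range. The sketch below is purely illustrative, not Dave's actual system; all facts, rule names and the reference year are invented for the example.

```python
# Hypothetical sketch: chaining plausible (defeasible) inferences
# from natural-language statements to an age range.
# All names and rules below are invented for illustration.

CURRENT_YEAR = 2022  # assumed reference year for the example

# Toy knowledge base of statements extracted from text.
facts = {
    "Alice": {"born": 1990},       # "Alice was born in 1990"
    "Bob": {"is_a": "teenager"},   # "Bob is a teenager"
}

def infer_age(person):
    """Return a plausible (min, max) age range for a person,
    or None if no rule applies."""
    f = facts.get(person, {})
    if "born" in f:
        # Exact rule: age follows from birth year.
        age = CURRENT_YEAR - f["born"]
        return (age, age)
    if f.get("is_a") == "teenager":
        # Defeasible rule: "teenager" typically means 13-19.
        return (13, 19)
    return None

print(infer_age("Alice"))  # (32, 32)
print(infer_age("Bob"))    # (13, 19)
```

Note that the second rule is plausible rather than certain: it yields a default range that further evidence could narrow or override, which is the essence of the chains of plausible inference mentioned above.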
Received on Tuesday, 1 November 2022 09:22:21 UTC