Re: KR for Cogai/gentle reminder

Hello,

No, that is not true. KR is a subset of AI.

See the Norvig book, which is used in many foundational AI courses and which
treats KR as a subset of AI.

Norvig <https://zoo.cs.yale.edu/classes/cs470/materials/aima2010.pdf>

Thanks,

Adeel

On Sun, 30 Oct 2022 at 01:33, Paola Di Maio <paoladimaio10@gmail.com> wrote:

> Milton
> Please note that AI is a subset of KR, not vice versa.
> Please also be reminded that I have often posted topics from other W3C
> lists where I spotted an overlap with KR (it's all in the archive).
> That said, if you would like to start by auditing all other CGs and WGs
> for KR-relevant issues/problems that we could at least take into account
> here, that would be most welcome and most useful.
> If you do a knowledge audit of KR topics/open questions across W3C
> communities, I will personally award you a prize and even a plaque that
> you can hang on your wall.
> Keeping in mind that things change all the time, you could limit it by
> time frame (say, the last ten years or less?)
> PDM
>
> On Sun, Oct 30, 2022 at 2:57 AM ProjectParadigm-ICT-Program <
> metadataportals@yahoo.com> wrote:
>
>> I would like to point out that KR is one of the central themes of the
>> entire field commonly known as artificial intelligence.
>>
>> What is a Knowledge Representation?
>> A perspective from the MIT AI Lab, Symbolics, Inc. and the MIT Lab for
>> Computer Science
>> http://groups.csail.mit.edu/medg/people/psz/ftp/k-rep.html
>>
>> So what we are doing in the AIKR W3C CG is basically a SUBSET of every
>> other AI CG in the W3C Community Groups.
>>
>> Now, a basic tenet of scientific dialogue is the possibility of
>> disagreeing on terminology, scope, findings, results and even theories.
>>
>> The biggest problem in AI today is that we cannot even agree on what AI
>> actually is, what it should be and what its main characteristics are, and
>> unfortunately this also applies to knowledge representation.
>>
>> But because every field of scientific endeavor and engineering nowadays
>> utilizes AI, and every field has its own knowledge that needs formal
>> representation, AIKR is at the core of all of this.
>>
>> I sense that the CogAI CG focuses on the cognitive processes involved in
>> the creation of knowledge and how best to capture this in formal
>> representation, based on its description of objectives.
>>
>> So Paola is PARTIALLY right in trying to separate the work being done.
>>
>> But let's not waste the possible synergies to be gained. We could
>> TOGETHER produce deliverables (reports, articles) on the central role of
>> KR in AI, and how this relates to the cognitive processes that are also
>> central to all AI.
>>
>> Let's define this common ground, the possible common objectives and the
>> potential deliverables. Because, to quote the European Union, objectives
>> for open, inclusive, explainable and ethical AI also presuppose open,
>> inclusive, explainable and ethical knowledge, and consequently the
>> cognitive processes and underlying architectures for it.
>>
>> I have tasked myself with providing an overview of what AI is, using a
>> timeline, with a concise summary of the academic fields involved and how
>> the EU objectives can be achieved.
>>
>> Anyone willing to collaborate is welcome to contact me.
>>
>> I have a vested personal interest in using AI for the common good as
>> defined in the UN's sustainable development guidelines as well, because
>> AI could be instrumental in tackling seemingly insurmountable problems
>> like climate change and other global issues plaguing our modern world.
>>
>> Let's agree that we may disagree, but not let that stand in the way of
>> collaboration.
>>
>> Milton Ponson
>> GSM: +297 747 8280
>> PO Box 1154, Oranjestad
>> Aruba, Dutch Caribbean
>> Project Paradigm: Bringing the ICT tools for sustainable development to
>> all stakeholders worldwide through collaborative research on applied
>> mathematics, advanced modeling, software and standards development
>>
>>
>> On Friday, October 28, 2022 at 11:28:23 PM AST, Adeel <
>> aahmad1811@gmail.com> wrote:
>>
>>
>> Hello,
>>
>> An extract from the book:
>>
>> "Show that minimizing abnormality will work if we add the assertion
>>
>>     All Québecois are abnormal Canadians,
>>
>> but will not work if we only add
>>
>>     Québecois are typically abnormal Canadians."
>>
>>
>> That's harsh... LOL
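>>
>> For context, a minimal first-order sketch of that exercise, assuming the
>> book's running Canadians/Québecois defaults (the predicate names $Ab_1$,
>> $Ab_2$, $Ab_3$ and the individual $rene$ are my own labels, not the
>> book's):
>>
>>   $\forall x\,(Canadian(x) \land \lnot Ab_1(x) \supset \lnot Francophone(x))$
>>   $\forall x\,(Quebecois(x) \land \lnot Ab_2(x) \supset Francophone(x))$
>>   $\forall x\,(Quebecois(x) \supset Canadian(x))$, plus $Quebecois(rene)$
>>
>> Adding the hard assertion $\forall x\,(Quebecois(x) \supset Ab_1(x))$
>> forces $Ab_1(rene)$, so every minimal model falsifies $Ab_2(rene)$ and
>> $Francophone(rene)$ follows. Adding only the default
>> $\forall x\,(Quebecois(x) \land \lnot Ab_3(x) \supset Ab_1(x))$ leaves two
>> incomparable minimal models, one with $Ab_1 = \{rene\}$ and one with
>> $Ab_2 = Ab_3 = \{rene\}$, so the intended conclusion is not entailed.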
>>
>> On Sat, 29 Oct 2022 at 03:32, Adeel <aahmad1811@gmail.com> wrote:
>>
>> Hello,
>>
>> Perhaps Paola is referring to the theory in this book: Brachman and
>> Levesque
>> <https://www.cin.ufpe.br/~mtcfa/files/in1122/Knowledge%20Representation%20and%20Reasoning.pdf>
>>
>> Thanks,
>>
>> Adeel
>>
>> On Sat, 29 Oct 2022 at 03:06, Timothy Holborn <timothy.holborn@gmail.com>
>> wrote:
>>
>> Noted.
>>
>> https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
>>
>> In terms of knowledge representation, for humanity, my thinking has been
>> that it is about people's ability to represent the evidence of a
>> circumstance in a court of law. That is a circumstance no one should ever
>> want, but supporting it is desirable for peace.
>>
>> If solutions fail to support being used in those circumstances, to
>> successfully represent knowledge that can be relied upon in a court of
>> law, then, I guess, I'd be confused about the purposeful definition, or
>> the useful purpose, of any such tools being produced and their
>> relationship, by design, to concepts like natural justice.
>>
>> https://en.wikipedia.org/wiki/Natural_justice
>>
>> Let me know if I am actually "off topic" per the intended design outcomes.
>>
>> Regards,
>>
>> Timothy Holborn.
>>
>> On Sat, 29 Oct 2022, 11:55 am Paola Di Maio, <paoladimaio10@gmail.com>
>> wrote:
>>
>>
>> Just as a reminder, this list is about sharing knowledge, research and
>> practice in AI KR. The intersection of KR and CogAI may also be relevant
>> here (and is of interest to me).
>>
>> If people want to discuss CogAI not in relation to KR, please use the
>> CogAI CG list. What I mean is: if KR is not of interest/relevance to a
>> post, then why post it here?
>>
>> What KR is, and what its relevance and limitations are, is a vast topic,
>> written about in many scholarly books, and even these books do not cover
>> the topic adequately. In that sense, the topic of KR itself, without
>> further qualification, is too vast to be discussed without narrowing it
>> down to a specific problem/question.
>> KR in relation to CogAI has been the subject of study for many of us for
>> many years, and it is difficult to discuss/comprehend/relate to for those
>> who do not share the background. I do not think this list can fill the
>> huge gap left by academia; however, there are great books freely
>> available online that give some introduction.
>> When it comes to the application of KR to new prototypes, we need to
>> understand what these prototypes are doing, why and how. Unfortunately,
>> NNs fall short of general intelligence and intelligibility for humans.
>>
>> Adeel, thank you for sharing the paper 40 Years of Cognitive
>> Architectures. I am not sure you were on the list back then, but I
>> distributed the resource as a working reference for this list and anyone
>> interested in February 2021, and have used it as the basis for my
>> research on the AI KR/CogAI intersection since:
>> https://lists.w3.org/Archives/Public/public-aikr/2021Feb/0017.html
>>
>> Dave: the topics of KR, AI, CogAI and consciousness, replicability,
>> reliability, and all the issues brought up in the many posts in this
>> thread and other threads are too vast to be discussed meaningfully in a
>> single thread.
>>
>> May I encourage the breaking down of topics/issues, making sure the
>> perspective and focus of KR (including its limitations) are not lost in
>> the long threads.
>>
>> Thank you
>> (Chair hat on)
>>
>> On Fri, Oct 28, 2022 at 6:23 PM Adeel <aahmad1811@gmail.com> wrote:
>>
>> Hello,
>>
>> To start with, it might be useful to explore 'society of mind
>> <http://aurellem.org/society-of-mind/index.html>' and 'Soar' as points of
>> extension.
>>
>> 40 years of cognitive architectures
>> <https://link.springer.com/content/pdf/10.1007/s10462-018-9646-y.pdf>
>>
>> Recently, Project Debater
>> <https://research.ibm.com/interactive/project-debater/> also came onto
>> the scene, although it is not quite as rigorous in Cog or KR terms.
>>
>> Thanks,
>>
>> Adeel
>>
>> On Fri, 28 Oct 2022 at 02:05, Paola Di Maio <paoladimaio10@gmail.com>
>> wrote:
>>
>> Thank you all for contributing to the discussion
>>
>> The topic is too vast. Dave, I am not worried about whether we agree or
>> not; the universe is big enough.
>>
>> To start with, I am concerned about whether we are talking about the same
>> thing at all. The expression 'human level intelligence' is often used to
>> describe neural networks, but that is quite a ridiculous comparison. If a
>> neural network is supposed to mimic human level intelligence, then we
>> should be able to ask it: how many fingers do humans have?
>> But this machine is not designed to answer questions, nor to have that
>> level of knowledge about human anatomy. A neural network is not AI in
>> that sense: it fetches some images and mixes them without any
>> understanding of what they are, and the process of which images it has
>> used, why, and what rationale was followed for the mixing is not even
>> described; it's probabilistic. Go figure.
>>
>> Hey, I am not trying to diminish the greatness of the creative neural
>> network; it is great work and it is great fun. But a) it is not an
>> artist: it does not create something from scratch; b) it is not really
>> intelligent, honestly. Try to have a conversation with a NN.
>>
>> This is what KR does: it helps us to understand what things are and how
>> they work. It also helps us to understand whether something is being
>> passed off as what it is not (evaluation).
>> This is why even neural networks require KR: without it, we don't know
>> what a system is supposed to do, why and how, or whether it does what it
>> is supposed to do.
>>
>> NNs still have a role to play in some computations.
>>
>> DR: Knowledge representation in neural networks is not transparent.
>> PDM: I'd say that it is either lacking or completely random.
>>
>>
>> DR: Neural networks definitely capture knowledge, as is evidenced by
>> their capabilities, so I would disagree with you there.
>>
>>
>> PDM: Capturing knowledge is not knowledge representation. In AI,
>> capturing knowledge is only one step; the categorization of knowledge is
>> necessary for reasoning.
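>>
>> (A minimal sketch of that distinction, with purely illustrative names: a
>> classifier that outputs "cat: 0.93" for an image has captured knowledge
>> in its weights, but only an explicit representation such as
>> $Cat \sqsubseteq Mammal$ together with $Cat(felix)$ licenses the
>> inference $Mammal(felix)$, and can be inspected, audited and reasoned
>> over.)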
>>
>> DR: We are used to assessing human knowledge via examinations, and I
>> don't see why we can't adapt this to assessing artificial minds.
>> PDM: Because assessment is very expensive, has varying degrees of
>> effectiveness, and requires skills and a process; it may not be feasible
>> to test/evaluate the AI when it is embedded.
>>
>>
>> We will develop the assessment framework as we evolve and depend upon AI
>> systems. For instance, we would want to test a vision system to see if it
>> can robustly perceive its target environment in a wide variety of
>> conditions. We aren’t there yet for the vision systems in self-driving cars!
>>
>> Where I think we agree is that a level of transparency of reasoning is
>> needed for systems that make decisions that we want to rely on. Cognitive
>> agents should be able to explain themselves in ways that make sense to
>> their users; for instance, a self-driving car braked suddenly when it
>> perceived a child running out from behind a parked car. We are less
>> interested in the pixel processing involved, and more interested in
>> whether the perception is robust, i.e. whether the car can reliably
>> distinguish a real child from a piece of newspaper blowing across the
>> road where the newspaper is showing a picture of a child.
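>>
>> For instance, a minimal condition-stratified check might look like the
>> sketch below (the function, class labels and threshold are purely
>> illustrative assumptions, not an actual test suite):
>>
>> # Toy robustness check: require a minimum per-condition recall for the
>> # safety-critical "child" class before trusting a vision model.
>> from collections import defaultdict
>>
>> def robust_enough(results, min_recall=0.99):
>>     """results: iterable of (condition, true_label, predicted_label)."""
>>     hits = defaultdict(int)    # correct "child" detections per condition
>>     totals = defaultdict(int)  # all "child" cases per condition
>>     for condition, truth, pred in results:
>>         if truth == "child":
>>             totals[condition] += 1
>>             if pred == "child":
>>                 hits[condition] += 1
>>     # The weakest condition (night, rain, glare, ...) decides the verdict.
>>     return all(hits[c] / totals[c] >= min_recall for c in totals)
>>
>> # Example: fails because recall under "glare" is only 1/2.
>> sample = [("clear", "child", "child"), ("glare", "child", "newspaper"),
>>           ("glare", "child", "child"), ("rain", "child", "child")]
>> print(robust_enough(sample))  # False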
>>
>> It would be a huge mistake to deploy AI when the assessment framework
>> isn’t sufficiently mature.
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>
>>

Received on Sunday, 30 October 2022 00:40:23 UTC