Re: KR for Cogai/gentle reminder

Carl, and all
We can have many points of view, even within the same community.

What people study, what and how much they read, and what life and work
experiences they have all contribute to forming different opinions.

This AI KR CG was started to advance the state of the art and fill its gaps,
to identify and address the issues at hand,
and as an invitation to share research, experiments or thoughts
on the subject.
So we look forward to hearing what everyone is up to: how do people and
machines reason and carry out inferences without KR?


On Fri, Nov 4, 2022 at 6:07 AM carl mattocks <carlmattocks@gmail.com> wrote:

> Paola
> Indeed .. having two or more W3C communities enables more than two
> points of view
>
> Carl
> It was a pleasure to clarify
>
>
> On Thu, Nov 3, 2022 at 5:25 PM Paola Di Maio <paoladimaio10@gmail.com>
> wrote:
>
>> Gabriel and all
>> Brachman - and others - wrote the Bible of KR
>> but not many have studied it
>>
>> On my shelf, the Bible of KR and the Bibles of AI stand side by side,
>> with the former on top, but everyone decides how to organize their own
>> stack according to their priorities.
>>
>> We become researchers because we do not take things at face value
>> and because we like to add new chapters to old bibles
>>
>> Considering KR a subset of AI may be useful in some contexts, to some
>> extent,
>> but it is the root of many shortcomings that are causing widespread
>> concern.
>> People have started to realize that AI without KR is a kind of nonsense
>> (it is not AI).
>> Brachman wrote explicitly that AI cannot be separated from KR.
>> In systems and software engineering, we design the KR
>> BEFORE the AI is implemented, because AI is nothing more than the
>> execution of KR.
>>
>> In the same way that a software program (the logical representation of
>> what the software does and how it does it) can be written on paper, or even
>> just theorized, and then implemented using different languages and
>> programming structures - that is, the same function can be reproduced by
>> manipulating and rendering the logical design in a variety of ways -
>> so AI can be generated using different algorithms.
>> KR is ultimately the language in which the AI algorithm is written.
>> It needs to be written up BEFORE it can be executed, to make sure it runs
>> as intended.
>>
>> Experimentally, though - for example with genetic algorithms - a program
>> can run without being written. These are interesting to study, and surely
>> offer advantages, but they have shortcomings. We cannot use a genetic
>> algorithm to support policies (the set of organisational, behavioural
>> rules that algorithms must adhere to).
>> Ethics, reliability, etc. of AI are all implemented via policies.
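>>
>> For illustration, a minimal sketch of the point above (a toy genetic
>> algorithm in Python; the fitness function and parameters are arbitrary
>> choices for this sketch, not taken from any particular system). The working
>> behaviour emerges from selection and mutation, with no explicit,
>> inspectable representation of the rules being followed - which is exactly
>> why it is hard to bind such a system to a policy:
>>
>> import random
>>
>> # Toy genetic algorithm: evolve a bit string that maximizes the number of
>> # ones. The solution is never written down by hand; it emerges from
>> # selection, crossover and mutation.
>> TARGET_LEN = 20
>> POP_SIZE = 30
>> GENERATIONS = 50
>> MUTATION_RATE = 0.02
>>
>> def fitness(individual):
>>     return sum(individual)  # count of ones
>>
>> def mutate(individual):
>>     return [bit ^ 1 if random.random() < MUTATION_RATE else bit
>>             for bit in individual]
>>
>> def crossover(a, b):
>>     cut = random.randint(1, TARGET_LEN - 1)
>>     return a[:cut] + b[cut:]
>>
>> population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
>>               for _ in range(POP_SIZE)]
>>
>> for generation in range(GENERATIONS):
>>     population.sort(key=fitness, reverse=True)
>>     parents = population[: POP_SIZE // 2]  # keep the fitter half
>>     children = [mutate(crossover(random.choice(parents),
>>                                  random.choice(parents)))
>>                 for _ in range(POP_SIZE - len(parents))]
>>     population = parents + children
>>
>> best = max(population, key=fitness)
>> print("best fitness:", fitness(best), "out of", TARGET_LEN)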
>>
>> It has been noted that KR is neither practiced nor taught correctly,
>> especially in education (Morgenstern); however, thirty years after the
>> problem was lucidly identified and put to the AI community, nothing has
>> been done to fill this gap.
>>
>> AI can be very powerful. So is knowledge.
>>
>> When people have knowledge, and the mechanisms to leverage knowledge
>> to produce intelligence, they cannot be as easily manipulated
>>
>> Ultimately, the entire education system, the media machinery, the
>> scientific establishment, and the technology that serves to fuel conflict
>> rather than resolve it
>> are all tied into correct and adequate knowledge representation.
>>
>> Technology (AI) is a subset of Mind (cognition, knowledge, reasoning).
>> Academic institutions can control what is said about technology and
>> mind,
>> but not free thinking itself, which is the only thing we have left.
>>
>>  @carl mattocks <carlmattocks@gmail.com>   nobody has to agree with a
>> single point of view.
>> I started this AI KR CG to share state-of-the-art thinking and research,
>> and others are welcome to do the same. I share many of the hundreds of
>> papers I have to read to be able to make a novel contribution and advance
>> the state of the art.
>> I appreciate that the industry is trying to control it.
>>
>>
>>
>>
>> On Tue, Nov 1, 2022 at 11:14 PM Gabriel Lopes <gabriellopes9102@gmail.com>
>> wrote:
>>
>>> Hello everyone!
>>>
>>> It is really amazing to have the opportunity for discussions like these,
>>> where fundamental concepts used worldwide, even across generations of
>>> thinkers and specialist practitioners in related fields, are dissected and
>>> analysed.
>>>
>>> Thank you, @Paola Di Maio <paoladimaio10@gmail.com> , for bringing your
>>> disruptive point of view, especially when the *Bible* of AI explicitly
>>> says the opposite.
>>> And +1 for seeing it as a *gift* that books such as Norvig's are
>>> available online.
>>>
>>> If I got your point, *knowledge* becomes a super-entity of materialized
>>> and conceptual entities, such as circuits and deductions, while
>>> *representation* is the manifested form amenable to human perception,
>>> discussion, and understanding, such as diagrams, words, and OWL classes.
>>>
>>> Is that more or less it?
>>>
>>> So, with *AI* being an object of human interpretation of the concepts
>>> *artificial* and *intelligence* - what isn't 'natural' (i.e., what was
>>> already there) and the capacity for inferring, deducing, perceiving, and
>>> realizing, to cite a few, respectively - *KR*, as a super-entity of
>>> *concept* itself, intuitively becomes a superset of Artificial
>>> Intelligence, as the representation of knowledge would surpass our notions
>>> of what is artificial and what is intelligence.
>>>
>>> --
>>> That said, I would also partially agree with Adeel.
>>>
>>> I used Norvig in AI classes about KR some years ago, and even if he
>>> used the new hype term - partly due to the cognition and psychology
>>> revolution of the 70s and 80s, boosted by IntelliCorp at the time - the
>>> discussion of KR in the book is mostly about logical relationships
>>> among concepts, terms, and knowledge.
>>>
>>> But, as Paola stated, things are changing all the time, and, with the
>>> virtual revolution of recent years, our notions of knowledge,
>>> representation, natural, artificial, and intelligence itself may
>>> undergo some modifications...
>>>
>>> Well, in any case, I'm hoping to be here for the next few years to see
>>> how this super interesting discussion evolves ;-)!!
>>>
>>> best regards,
>>>
>>> On Sun, Oct 30, 2022 at 5:50 PM ProjectParadigm-ICT-Program <
>>> metadataportals@yahoo.com> wrote:
>>>
>>>> Thank you, Adeel, for pointing out that KR is a subset of AI. And not
>>>> only computer scientists would agree, but basically most computational
>>>> linguists, mathematicians, and philosophers too.
>>>>
>>>> Milton Ponson
>>>> GSM: +297 747 8280
>>>> PO Box 1154, Oranjestad
>>>> Aruba, Dutch Caribbean
>>>> Project Paradigm: Bringing the ICT tools for sustainable development
>>>> to all stakeholders worldwide through collaborative research on applied
>>>> mathematics, advanced modeling, software and standards development
>>>>
>>>>
>>>> On Saturday, October 29, 2022 at 09:19:39 PM AST, Paola Di Maio <
>>>> paoladimaio10@gmail.com> wrote:
>>>>
>>>>
>>>> Adeel,
>>>>
>>>> Thank you for giving more info about your background.
>>>> I apologise, since many posts were exploratory about KR.
>>>> It is amazing how someone can be a graduate of CS and still be learning
>>>> about KR.
>>>> That CS curricula have treated KR as a separate topic is regrettable.
>>>>
>>>> It is also well documented that KR is only taught in a limited way in
>>>> traditional curricula,
>>>> a topic I have already discussed and published on.
>>>>
>>>> Brachman wrote that AI and KR cannot be separated - it must have been
>>>> fifty years ago?
>>>> But the AI field has evolved in a very funny way, resulting in the
>>>> current problems
>>>> (I have also written and talked about that extensively).
>>>>
>>>> KR, however, is a bigger topic beyond AI. The diagram shared yesterday
>>>> makes it so clear (this is why it is one of my favourites).
>>>>
>>>> What I have already extensively posted about, and written on,
>>>> is that because AI is now becoming relevant to all other fields of
>>>> practice (see the diagrams posted yesterday), KR needs to be designed
>>>> accordingly.
>>>> Finally, despite much talk of general intelligence in recent years,
>>>> the field of AI has developed in rather narrow ways.
>>>>
>>>> The work I do, and share here in snippets, is precisely about taking
>>>> into account the dynamic context in which everything is heading.
>>>>
>>>> I apologize if I cannot reply to every post, especially where
>>>> the questions and issues brought up have been extensively addressed
>>>> in several years of posts, publications and talks, which I have done my
>>>> best to share here.
>>>>
>>>>
>>>> On Sun, Oct 30, 2022 at 8:59 AM Adeel <aahmad1811@gmail.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Well, I come from a CS background. I read those books 20 years ago. I am
>>>> not a newbie! LOL
>>>> And AI is a sub-field of CS, while KRR is often considered a sub-field
>>>> of AI.
>>>> Literally every CS department treats it as a separate research area
>>>> within AI.
>>>>
>>>> Thanks,
>>>>
>>>> Adeel
>>>>
>>>> On Sun, 30 Oct 2022 at 01:54, Paola Di Maio <paoladimaio10@gmail.com>
>>>> wrote:
>>>>
>>>> Adeel, it is really good that you are reading the books.
>>>> Norvig's book is a great resource, and the free copy online is a gift to
>>>> humanity.
>>>> But we must keep in mind that everything is relative:
>>>> Norvig's point of view on KR is relative to his field of practice.
>>>>
>>>> Based on the diagram shared yesterday, AI is one of the fields of
>>>> application for KR.
>>>>
>>>> From a systems viewpoint, AI is a type of system.
>>>> If you place AI at the top of your conceptual hierarchy, everything
>>>> will be a subset of it
>>>> (including creativity, intelligence, knowledge, etc.).
>>>> I think clarifying this top-level category is fundamental. (Was it you
>>>> who brought up THING in OWL, or someone else?)
>>>>
>>>> This is why we need to define our questions during dialogue.
>>>> In my ontology, THING is knowledge.
>>>>
>>>> I consider AI a subset of KR because my top-level category is
>>>> general knowledge/cognition. AI is a subset of (a type of system based
>>>> on) natural intelligence.
>>>> It is regrettable that intelligent processes are considered a subset
>>>> of AI in the CS literature.
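>>>>
>>>> For illustration, a minimal sketch of that top-level hierarchy (assuming
>>>> Python with the rdflib library; the namespace and class names below are
>>>> placeholders for this sketch, not an agreed vocabulary): Knowledge sits
>>>> at the top, KR is modelled as a kind of knowledge, and AI as a kind of
>>>> KR.
>>>>
>>>> from rdflib import Graph, Namespace, RDF, RDFS, OWL
>>>>
>>>> # Hypothetical namespace, used only for this sketch
>>>> EX = Namespace("http://example.org/aikr#")
>>>> g = Graph()
>>>> g.bind("ex", EX)
>>>>
>>>> # Knowledge as the top-level category (playing the role of THING here)
>>>> g.add((EX.Knowledge, RDF.type, OWL.Class))
>>>>
>>>> # KR as a subclass of Knowledge, and AI as a subclass of KR
>>>> g.add((EX.KnowledgeRepresentation, RDF.type, OWL.Class))
>>>> g.add((EX.KnowledgeRepresentation, RDFS.subClassOf, EX.Knowledge))
>>>> g.add((EX.ArtificialIntelligence, RDF.type, OWL.Class))
>>>> g.add((EX.ArtificialIntelligence, RDFS.subClassOf,
>>>>        EX.KnowledgeRepresentation))
>>>>
>>>> print(g.serialize(format="turtle"))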
>>>>
>>>> PDM
>>>>
>>>> On Sun, Oct 30, 2022 at 8:40 AM Adeel <aahmad1811@gmail.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> No, that is not true. KR is a subset of AI.
>>>>
>>>> See the Norvig book, which is used in many foundational AI courses and
>>>> which teaches that KR is a subset of AI.
>>>>
>>>> Norvig <https://zoo.cs.yale.edu/classes/cs470/materials/aima2010.pdf>
>>>>
>>>> Thanks,
>>>>
>>>> Adeel
>>>>
>>>> On Sun, 30 Oct 2022 at 01:33, Paola Di Maio <paoladimaio10@gmail.com>
>>>> wrote:
>>>>
>>>> Milton
>>>> Please note that AI is a subset of KR, not vice versa.
>>>> Please also be reminded that I have often posted topics from other W3C
>>>> lists
>>>> where I spotted an overlap with KR (it's all in the archive).
>>>> That said, if you would like to start by auditing all the other CGs and
>>>> WGs for KR-relevant issues/problems that we could at least take into
>>>> account here, that would be
>>>> most welcome and most useful.
>>>> If you do a knowledge audit of KR topics/open questions across W3C
>>>> communities, I will personally award you a prize and even a plaque that
>>>> you can hang on your wall.
>>>> Keeping in mind that things change all the time, you could limit it by
>>>> time frame
>>>> (say, the last ten years or less?).
>>>> PDM
>>>>
>>>> On Sun, Oct 30, 2022 at 2:57 AM ProjectParadigm-ICT-Program <
>>>> metadataportals@yahoo.com> wrote:
>>>>
>>>> I would like to point out that KR is one of the central themes for the
>>>> entire field commonly known as artificial intelligence.
>>>>
>>>> What is a Knowledge Representation?
>>>> A perspective from the MIT AI Lab, MIT AI Lab and Symbolics, Inc. and
>>>> MIT Lab for Computer Science
>>>> http://groups.csail.mit.edu/medg/people/psz/ftp/k-rep.html
>>>>
>>>> So what we are doing in the AIKR W3C CG is basically a SUBSET of every
>>>> other AI CG among the W3C Community Groups.
>>>>
>>>> Now, a basic tenet of scientific dialogue is the possibility of
>>>> disagreeing on terminology, scope, findings, results, and even theories.
>>>>
>>>> The biggest problem in AI today is that we cannot even agree upon what
>>>> AI actually is, what it should be, and what its main characteristics are,
>>>> and unfortunately this also applies to knowledge representation.
>>>>
>>>> But because every field of scientific endeavor and engineering nowadays
>>>> utilizes AI, and every field has its own knowledge that needs formal
>>>> representation, AIKR is at the core of all of this.
>>>>
>>>> I sense that CogAI focuses on the cognitive processes involved in
>>>> the creation of knowledge and on how best to capture them in formal
>>>> representation, based on their description of objectives.
>>>>
>>>> So Paola is PARTIALLY right in trying to separate the work being done.
>>>>
>>>> But let's not waste the possible synergies to be gained. We could
>>>> TOGETHER produce deliverables (reports, articles) on the central role of
>>>> KR in AI, and on how this relates to cognitive processes that are also
>>>> central to all AI.
>>>>
>>>> Let's define this common ground and define the possible common
>>>> objectives and potential deliverables. Because, to quote the European
>>>> Union, objectives for open, inclusive, explainable and ethical AI also
>>>> presuppose open, inclusive, explainable and ethical knowledge, and
>>>> consequently cognitive processes and underlying architectures for such.
>>>>
>>>> I have tasked myself with providing an overview of what AI is, using a
>>>> timeline, with a concise summary of the academic fields involved and how
>>>> the EU objectives can be achieved.
>>>>
>>>> Anyone willing to collaborate is welcome to contact me.
>>>>
>>>> I have a vested personal interest in utilizing AI for the common good as
>>>> defined in the UN sustainable development guidelines as well, because AI
>>>> could be instrumental in tackling seemingly insurmountable problems like
>>>> climate change and other global issues plaguing our modern world.
>>>>
>>>> Let's agree to disagree, but not let that stand in the way of
>>>> collaborating.
>>>>
>>>> Milton Ponson
>>>> GSM: +297 747 8280
>>>> PO Box 1154, Oranjestad
>>>> Aruba, Dutch Caribbean
>>>> Project Paradigm: Bringing the ICT tools for sustainable development
>>>> to all stakeholders worldwide through collaborative research on applied
>>>> mathematics, advanced modeling, software and standards development
>>>>
>>>>
>>>> On Friday, October 28, 2022 at 11:28:23 PM AST, Adeel <
>>>> aahmad1811@gmail.com> wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> extract from the book:
>>>>
>>>> "
>>>>
>>>> Show that minimizing abnormality will work if we add the
>>>>
>>>> assertion
>>>>
>>>>
>>>> *All Québecois are abnormal Canadians,*
>>>>
>>>> but will not work if we only add
>>>>
>>>>
>>>>
>>>> *Québecois are typically abnormal Canadians.*
>>>>
>>>> "
>>>>
>>>>
>>>> That's harsh... LOL
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, 29 Oct 2022 at 03:32, Adeel <aahmad1811@gmail.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Perhaps, Paola is referring to the theory in this book -> Brachman and
>>>> Levesque
>>>> <https://www.cin.ufpe.br/~mtcfa/files/in1122/Knowledge%20Representation%20and%20Reasoning.pdf>
>>>>
>>>> Thanks,
>>>>
>>>> Adeel
>>>>
>>>> On Sat, 29 Oct 2022 at 03:06, Timothy Holborn <
>>>> timothy.holborn@gmail.com> wrote:
>>>>
>>>> Noted.
>>>>
>>>> https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
>>>>
>>>> In terms of knowledge representation, for humanity, my thought has been
>>>> that it's about the ability for people to represent the evidence of a
>>>> circumstance in a court of law. If solutions fail to support being used
>>>> in these circumstances - to successfully represent knowledge that can be
>>>> relied upon in a court of law, a circumstance that should never be
>>>> wanted, but that is desirable to support peace -
>>>>
>>>> then, I guess, I'd be confused about the purposeful definition, or the
>>>> useful purpose, of any such tools being produced and their relationship,
>>>> by design, to concepts like natural justice.
>>>>
>>>> https://en.wikipedia.org/wiki/Natural_justice
>>>>
>>>> Let me know if I am actually "off topic" per the intended design
>>>> outcomes.
>>>>
>>>> Regards,
>>>>
>>>> Timothy Holborn.
>>>>
>>>> On Sat, 29 Oct 2022, 11:55 am Paola Di Maio, <paoladimaio10@gmail.com>
>>>> wrote:
>>>>
>>>>
>>>> Just as a reminder, this list is about sharing knowledge, research and
>>>> practice in AI KR. The intersection of KR and CogAI may also be relevant
>>>> here (and is of interest to me).
>>>>
>>>> If people want to discuss CogAI not in relation to KR, please use the
>>>> CogAI CG list.
>>>> What I mean is: if KR is not of interest/relevance to a post, then
>>>> why post it here?
>>>>
>>>> What KR is, and its relevance and limitations, is a vast topic, written
>>>> about in many scholarly books, but even these books do not adequately
>>>> cover it. In that sense, the topic of KR itself, without further
>>>> qualification, is too vast to be discussed without narrowing it down to a
>>>> specific problem/question.
>>>> KR in relation to CogAI has been a subject of study for many of us
>>>> for many years, and it is difficult to discuss/comprehend/relate to for
>>>> those who do not share the background. I do not think this list can fill
>>>> the huge gap left by academia; however, there are great books freely
>>>> available online that give some introduction.
>>>> When it comes to the application of KR to new prototypes, we need to
>>>> understand what these prototypes are doing, why, and how. Unfortunately,
>>>> NNs fall short of general intelligence and of intelligibility for humans.
>>>>
>>>> Adeel, thank you for sharing the paper 40 Years of Cognitive
>>>> Architectures.
>>>> I am not sure you were on the list back then, but I distributed the
>>>> resource in February 2021 as a working reference for this list and anyone
>>>> interested, and have used it as the basis for my research on the
>>>> intersection of AI KR and CogAI since:
>>>> https://lists.w3.org/Archives/Public/public-aikr/2021Feb/0017.html
>>>>
>>>> Dave: the topics of KR, AI, CogAI and consciousness, replicability,
>>>> reliability, and all the issues brought up in the many posts in this
>>>> thread and other threads are too vast
>>>> to be discussed meaningfully in a single thread.
>>>>
>>>> May I encourage breaking down the topics/issues, making sure the
>>>> perspective and focus of KR (including its limitations) are not lost in
>>>> the long threads.
>>>>
>>>> Thank you
>>>> (Chair hat on)
>>>>
>>>> On Fri, Oct 28, 2022 at 6:23 PM Adeel <aahmad1811@gmail.com> wrote:
>>>>
>>>> Hello,
>>>>
>>>> To start with, it might be useful to explore 'society of mind
>>>> <http://aurellem.org/society-of-mind/index.html>' and 'Soar' as points
>>>> of extension.
>>>>
>>>> 40 years of cognitive architecture
>>>> <https://link.springer.com/content/pdf/10.1007/s10462-018-9646-y.pdf>
>>>>
>>>> Recently, Project Debater
>>>> <https://research.ibm.com/interactive/project-debater/> also came onto
>>>> the scene, although it is not quite as rigorous in Cog or KR.
>>>>
>>>> Thanks,
>>>>
>>>> Adeel
>>>>
>>>> On Fri, 28 Oct 2022 at 02:05, Paola Di Maio <paoladimaio10@gmail.com>
>>>> wrote:
>>>>
>>>> Thank you all for contributing to the discussion
>>>>
>>>> The topic is too vast - Dave, I am not worried about whether we agree or
>>>> not; the universe is big enough.
>>>>
>>>> To start with, I am concerned whether we are talking about the same
>>>> thing at all. The expression 'human-level intelligence' is often used to
>>>> describe neural networks, but that is quite a ridiculous comparison. If a
>>>> neural network is supposed to mimic human-level intelligence, then we
>>>> should be able to ask it: how many fingers do humans have?
>>>> But this machine is not designed to answer questions, nor to have that
>>>> level of knowledge about human anatomy. A neural network is not AI in
>>>> that sense:
>>>> it fetches some images and mixes them without any understanding of what
>>>> they are,
>>>> and the process of which images it has used, why, and what rationale was
>>>> followed for the mixing is not even described; it is probabilistic. Go
>>>> figure.
>>>>
>>>> Hey, I am not trying to diminish the greatness of the creative neural
>>>> network; it is great work and it is great fun. But a) it is not an artist
>>>> - it does not create something from scratch - and b) it is not really
>>>> intelligent, honestly; try to have a conversation with an NN.
>>>>
>>>> This is what KR does: it helps us to understand what things are and how
>>>> they work.
>>>> It also helps us to understand whether something is being passed off as
>>>> what it is not (evaluation).
>>>> This is why even neural networks require KR: because without it, we do
>>>> not know what a network is supposed
>>>> to do, why and how, and whether it does what it is supposed to do.
>>>>
>>>> They still have a role to play in some computations.
>>>>
>>>> DR: Knowledge representation in neural networks is not transparent.
>>>> PDM: I'd say that it is either lacking or completely random.
>>>>
>>>> DR: Neural networks definitely capture knowledge, as is evidenced by
>>>> their capabilities, so I would disagree with you there.
>>>>
>>>> PDM: Capturing knowledge is not knowledge representation. In AI,
>>>> capturing knowledge is only one step; the categorization of knowledge
>>>> is necessary for reasoning.
>>>>
>>>> DR: *We are used to assessing human knowledge via examinations, and I
>>>> don't see why we can't adapt this to assessing artificial minds.*
>>>> PDM: Because assessment is very expensive, has varying degrees of
>>>> effectiveness, and requires skills and a process - it may not be feasible
>>>> to test/evaluate AI once it is embedded.
>>>>
>>>>
>>>> We will develop the assessment framework as we evolve and depend upon
>>>> AI systems. For instance, we would want to test a vision system to see if
>>>> it can robustly perceive its target environment in a wide variety of
>>>> conditions. We aren’t there yet for the vision systems in self-driving cars!
>>>>
>>>> Where I think we agree is that a level of transparency of reasoning is
>>>> needed for systems that make decisions that we want to rely on. Cognitive
>>>> agents should be able to explain themselves in ways that make sense to
>>>> their users; for instance, a self-driving car braked suddenly when it
>>>> perceived a child running out from behind a parked car. We are less
>>>> interested in the pixel processing involved, and more interested in
>>>> whether the perception is robust, i.e. whether the car can reliably
>>>> distinguish a real child from a piece of newspaper blowing across the
>>>> road where the newspaper is showing a picture of a child.
>>>>
>>>> It would be a huge mistake to deploy AI when the assessment framework
>>>> isn’t sufficiently mature.
>>>>
>>>> Best regards,
>>>>
>>>> Dave Raggett <dsr@w3.org>
>>>>
>>>>
>>>>
>>>>
>>>
>>> --
>>> Gabriel Lopes
>>> *Interoperability as Jam's sessions!*
>>> *Each system emanating the music that crosses itself, instrumentalizing
>>> scores and ranges...*
>>> *... of Resonance, vibrations, information, data, symbols, ..., Notes.*
>>>
>>> *How interoperable are we with the Music the World continuously offers
>>> to our senses?*
>>> *Maybe it depends on our foundations...?*
>>>
>>

Received on Thursday, 3 November 2022 22:19:11 UTC