Re: definitions, problem spaces, methods

Thanks, Dave.
And yes, Adeel: not just explainability, but also reliability, replicability,
etc.

The only solution I know of is metadata/classes (tagging the datasets with
symbolic KR categories).
BTW, I think I may need to correct my statement about the lack of KR in the
case of Stable Diffusion.

If the machine uses a prompt (say: feet, mushroom, girl, boy) and outputs
something that reflects the prompt, then even Stable Diffusion must use some
metadata, or some way of matching the prompt, which is a natural language
expression, to the images it produces.
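To make the matching idea concrete, here is a tiny sketch of how such systems relate a prompt to images: both are mapped by trained encoders into a shared embedding space (Stable Diffusion uses a CLIP text encoder for the prompt side), and closeness is measured geometrically, typically by cosine similarity. The vectors and filenames below are made up purely for illustration; no real model is involved.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: in a real system these would come from a trained
# text encoder / image encoder pair (e.g. CLIP); here they are invented.
prompt_embedding = [0.9, 0.1, 0.2]
image_embeddings = {
    "girl_with_mushroom.png": [0.8, 0.2, 0.1],
    "random_landscape.png": [0.1, 0.9, 0.3],
}

# Rank candidate images by similarity to the prompt embedding.
ranked = sorted(image_embeddings.items(),
                key=lambda kv: cosine_similarity(prompt_embedding, kv[1]),
                reverse=True)
print(ranked[0][0])  # the image whose embedding best matches the prompt
```

The point of the sketch is that no hand-authored symbolic metadata is needed for the match: the "tagging" is implicit in where the encoders place prompts and images in the shared vector space.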

I'd say we could study these examples and figure something out.

I cannot play with these systems now, but if someone does, please share the
results.

I have updated the table
<https://docs.google.com/document/d/1lYn8-YUvIS_k1rezgqdR0DX2BLTRYpvkJpmvYjAEpng/edit?usp=sharing>
that needs filling out. I have also added a column where people can link to
experiments (sets of prompts/results). We can use the wiki to create a page
for each experiment, for any of the systems mentioned, but we can think of
other systems as well; they can be added to the table, obviously.

On Mon, Nov 7, 2022 at 9:05 PM Adeel <aahmad1811@gmail.com> wrote:

> Hello,
>
> But none of those models have explainability.
> So they cannot explain precisely how they reach those conclusions and
> decisions, because they are essentially working in a black box?
>
> Thanks,
>
> Adeel
>
> On Mon, 7 Nov 2022 at 12:58, Dave Raggett <dsr@w3.org> wrote:
>
>> GPT-3, BLOOM as examples of large language models
>>
>> DALL-E, Stable Diffusion as examples of text to image
>>
>> AlphaFold for predicting 3D protein structures
>>
>> These all embed knowledge obtained from deep learning  against large
>> corpora. The models combine the networks and their trained connection
>> parameters, e.g. BLOOM has 176 billion parameters and DALL-E 2 has around
>> 3.5 billion. This approach discovers its own (distributed) knowledge
>> representation and scales much better than hand-authored KR. However, like
>> hand-authored KR, it is still brittle when it comes to generalising beyond
>> its training data, something that humans are inherently better at.  Deep
>> learning suffers from a lack of transparency, and there has been quite a
>> bit of work trying to improve on that, e.g. showing which parts of an image
>> were most important when it came to recognising an object. One big
>> potential advantage is in handling imprecise context dependent knowledge,
>> where hand authored KR (e.g. RDF) has a hard time. There is a lot of
>> current effort on graph embeddings as a synthesis of neural networks and
>> symbolic graphs. However, these are still far from being able to model
>> human reasoning with chains of plausible inferences and metacognition
>> (reasoning about reasoning).
>>
>> On 7 Nov 2022, at 10:59, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>
>> Dave, perhaps you could post a few examples of non-symbolic KR so that we
>> can get our heads around such a thing.
>> Please note that my postulate shared on this list
>> https://lists.w3.org/Archives/Public/public-aikr/2019Aug/0045.html
>> states that
>>
>> To support AI explainability, learnability, verifiability and
>> reproducibility, it is postulated that
>> for each MLA (machine learning algorithm)
>> there should correspond a natural language expression or other type of
>> symbolic knowledge representation
>>
>>
>> https://figshare.com/articles/poster/A_New_Postulate_for_Knowledge_Representation_in_AI/9730268/2
>>
>> was also slightly reworded in different presentations
>>
>> On Mon, Nov 7, 2022 at 5:45 PM Dave Raggett <dsr@w3.org> wrote:
>>
>>> The statement *“We can only pursue artificial intelligence via symbolic
>>> means” *is false, since artificial neural networks eschew symbols, and
>>> have been at the forefront of recent advances in AI.  I therefore prefer
>>> the Wikipedia definition of KR which is less restrictive:
>>>
>>> “Knowledge representation and reasoning (KRR, KR&R, KR) is the field of
>>> artificial intelligence (AI) dedicated to representing information about
>>> the world in a form that a computer system can use to solve complex tasks”
>>>
>>>
>>> See:
>>> https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning
>>>
>>> On 7 Nov 2022, at 03:03, Mike Bergman <mike@mkbergman.com> wrote:
>>>
>>> Hi All,
>>>
>>> It is always useful to have a shared understanding within a community
>>> for what defines its interests and why they have shared interests as a
>>> community. I applaud putting these questions out there. Like all W3C
>>> community groups, we have both committed students and occasional grazers.
>>> One can generally gauge usefulness of a given topic in a given group by the
>>> range of respondents to a given topic. Persistence seems to be more a
>>> function of specific interlocuters not letting go rather than usefulness.
>>>
>>> After researching what became a book to consider the matter, I came to
>>> the opinion that AI is a subset of KR [1]. The conclusion of that
>>> investigation was:
>>>
>>> "However, when considered, mainly using prescission, it becomes clear
>>> that KR can exist without artificial intelligence, but AI requires
>>> knowledge representation. *We can only pursue artificial intelligence via
>>> symbolic means*, and KR is the translation of information into a symbolic
>>> form to instruct a computer. Even if the computer learns on its own, we
>>> represent that information in symbolic KR form. This changed premise for
>>> the role of KR now enables us to think, perhaps, in broader terms, such
>>> as including the ideas of instinct and kinesthetics in the concept. This
>>> kind of re-consideration alters the speculative grammar we have for both
>>> KR and AI, helpful as we move the fields forward." (p. 357)
>>>
>>> That also caused me to pen a general commentary on one aspect of the KR
>>> challenge, how to consider classes (types) versus individuals (tokens) [2].
>>> I would also argue these are now practically informed topics, among many,
>>> that augment or question older bibles like Brachman and Levesque.
>>>
>>> Best, Mike
>>> [1] https://www.mkbergman.com/pubs/akrp/chapter-17.pdf
>>> [2]
>>> https://www.mkbergman.com/2286/knowledge-representation-is-a-tricky-business/
>>>
>>> --
>>> __________________________________________
>>>
>>> Michael K. Bergman
>>> 319.621.5225
>>> http://mkbergman.com
>>> http://www.linkedin.com/in/mkbergman
>>> __________________________________________
>>>
>>>
>>> Dave Raggett <dsr@w3.org>
>> Dave Raggett <dsr@w3.org>

Received on Monday, 7 November 2022 15:15:27 UTC