Re: Tuple Store, Artificial Science, Cognitive Science and RDF (Re: What is a Knowledge Graph? CORRECTION)

On Wed, Jun 26, 2019 at 5:21 PM Patrick J Hayes <phayes@ihmc.us> wrote:

>
>
> On Jun 26, 2019, at 8:45 AM, Marco Neumann <marco.neumann@gmail.com>
> wrote:
>
> Pat,
>
> for completeness' sake, where do you place description logic (DL) in
> this context as a foundation for semantics on the web?
>
>
> DL is a subset (actually various subsets) of FOL which are carefully
> designed to be decidable, by restricting their expressive power in various
> ways. In my own view (a minority view, I will concede) decidability is of
> little practical importance, and the elaborate care with detailed
> restrictions that is necessary to stay decidable is a barrier to effective
> use. I call it logic with trainer wheels. But for sure, OWL is better than
> nothing, and OWL reasoners exist, so... :-)
>
> BTW, the last time I spoke to anyone on the topic, the Manchester group,
> who are leaders both in DL development and in fast, powerful FOL reasoning
> engines, had run benchmarks and found that their best FOL reasoners were
> comparable in performance to the DL engines. But that was some years ago,
> and I do not know what the current situation is.
>

Thanks Pat. So is it now just a question of choosing Description Logics
(tableaux) vs. Disjunctive Datalog for reasoning that saves the day for you
and the Semantic Web here?

Since you mention that you would be ready to disengage from the Semantic
Web effort under certain conditions, do you not see any value for the AI
Logic community in lots of messy data a la the Google Knowledge Graph, some
Linked Data and a little rule-based inferencing?

> Pat
>
> On Wed, Jun 26, 2019 at 4:29 PM Patrick J Hayes <phayes@ihmc.us> wrote:
>
>> A quick remark:
>>
>> On Jun 26, 2019, at 8:03 AM, Dave Raggett <dsr@w3.org> wrote:
>>
>> I very much agree and have been arguing for a blend of symbolic and
>> statistical techniques using insights from decades of work in Cognitive
>> Psychology.  Rational belief is about what can be justified given prior
>> knowledge and past experience.
>>
>>
>> So far in this thread we have been talking about knowledge representation
>> notations. You are here talking about mechanisms, not quite the same topic.
>> I entirely agree about the need to put together symbolic and statistical,
>> but I don’t see any reason why the use of the statistical would change the
>> nature or the semantics of the symbolic. (Do you?)
>>
>> This is not infallible, but nonetheless very useful in practice. It can
>> support higher-order reasoning, something that is essential for modelling
>> human reasoning.
>>
>>
>> What kind of higher-order reasoning are you referring to here? The term
>> ‘higher-order’ has various meanings. If you simply mean that the logic can
>> mention, describe and quantify over properties and relationships as
>> first-class entities, then I would agree; but versions of FOL, even RDF,
>> can do that.
>>
>> Here is a test case for what I called ‘classical higher-order’ in an
>> earlier message. Do these facts:
>>
>> (P a)
>> (Q b)
>>
>> entail this higher-order statement:
>>
>> exists (X) (X a) & (X b)
>>
>> ? If not, then your logic is not what I would call higher-order.
>> The higher-order derivation mentions the property
>>
>>     lambda (x)( (P x) or (Q x) )
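>>
>> To make the test concrete, here is a minimal Python sketch (purely
>> illustrative; the constants and properties are invented for the example),
>> treating properties as first-class functions:
>>
>>     # Facts: (P a) and (Q b), with properties as first-class values.
>>     P = lambda x: x == "a"
>>     Q = lambda x: x == "b"
>>
>>     # A witness for "exists (X) (X a) & (X b)" is the derived property
>>     # lambda (x)( (P x) or (Q x) ):
>>     X = lambda x: P(x) or Q(x)
>>
>>     assert X("a") and X("b")  # the higher-order existential is satisfied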
>>
>> Pat
>>
>>
>> On 26 Jun 2019, at 14:54, Chris Harding <chris@lacibus.net> wrote:
>>
>> Formal logic is just one aspect of human reasoning (applied more or less
>> correctly, depending on the human in question). Human reasoning has other
>> aspects, giving it capabilities that formal logic does not have. For
>> example, it can handle inconsistencies. If the goal of AI is to approximate
>> human reasoning using computers, then its representational structures must
>> go beyond those of formal logic.
>>
>> Patrick J Hayes wrote:
>>
>>
>>
>> On Jun 25, 2019, at 6:06 PM, Amirouche Boubekki <
>> amirouche.boubekki@gmail.com> wrote:
>>
>>
>>
>> On Tue, 25 Jun 2019 at 19:23, Patrick J Hayes <phayes@ihmc.us> wrote:
>>
>>>
>>>
>>> On Jun 23, 2019, at 5:35 PM, ProjectParadigm-ICT-Program <
>>> metadataportals@yahoo.com> wrote:
>>>
>>> Again, let us look at the issue at hand. Artificial intelligence
>>> requires that we represent knowledge in some format. All forms brought to
>>> the fore so far stick to a pretty simple way of representing knowledge.
>>>
>>>
>>> Most (all?) of the KR proposals put forward in AI or cognitive science
>>> work have been some subset of first-order predicate logic, using a variety
>>> of surface notations. There are some fairly deep results which suggest that
>>> any computably effective KR notation will not be /more/ expressive than FO
>>> logic. So FOL seems like a good ‘reference’ benchmark for KR expressivity.
>>>
>>
>>
>> > "Computably effective KR"
>>
>> That is one of the issues I am trying to address.
>>
>> > KR notation will not be /more/ expressive than FO logic
>>
>> Citation?
>>
>>
>> OK, this will take a little exposition. Notice up front that I said the
>> results /suggest/ something, not that they establish it beyond all doubt.
>>
>> The main result in question is called Lindström's theorem. What it says,
>> technically, is that any logic (= a descriptive KR notation with a clear
>> semantics) which satisfies two conditions must be no stronger than FOL. The
>> two conditions are (1) compactness and (2) downward Löwenheim-Skolem (L-S).
>> OK, I won’t try to prove this here, but it is a theorem, OK? So bear with
>> me while I try to give an intuitive account of what these two conditions
>> mean, and why they are plausibly required for computational effectiveness.
>> They can be intuitively summarized as the conditions that proofs can be
>> finitely wide and finitely deep.
>>
>> Compactness means that if something follows from a set of sentences, then
>> it must follow from a finite subset of them. Put simply, proofs have to be
>> finitely “wide”. This might seem kind of obvious, but there are quite
>> natural logics which don’t satisfy it. For example, suppose we had some
>> axioms for arithmetic which enabled one to prove that 0<1 and 1<2 and 2<3
>> and… so on for every numeral N. Can you infer that x<x+1 for every number
>> x? Seems obvious, but an actual proof of this would have infinitely many
>> inputs. Compactness rules out things like this. Computationally this seems
>> extremely plausible, since we cannot get an infinite proof into any
>> physical memory.
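>>
>> In symbols (a sketch of the same point, writing \underline{n} for the
>> numeral of the number n):
>>
>>     \{\; \underline{n} < \underline{n} + 1 \;:\; n \in \mathbb{N} \;\} \not\vdash \forall x\, (x < x + 1)
>>
>> since any proof may draw on only finitely many of the premises, and no
>> finite subset of the schema yields the universal claim.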
>>
>> The (downward) L-S theorem is a bit harder to grok. It says that if a set
>> of sentences in the logic has any satisfying interpretation, then it has a
>> countable one. So if you can show that there isn't a countable one, then
>> you know there isn’t one at all. So what? Well, the key point here has to
>> do with how inference machinery operates. All inference systems can be seen
>> as ways of surveying all possible interpretations, looking for
>> counterexamples. You know that B follows from A when you can show that
>> there are no counterexamples, i.e. no interpretations which make A and (not B)
>> true. If your survey of interpretations is systematic and thorough, then
>> your logical inference machinery is correct. But any computational search
>> process can only generate finite structures. Now, /countably/ infinite
>> structures are fine, because counterexamples will be finite and hence will
>> be found eventually (this is based on a classical result called Koenig’s
>> lemma). So, in brief, the L-S theorem condition means that a finite search
>> through possible countable interpretations (which is the best that can be
>> done with finite machines) can be an effective complete search. In other
>> words, proofs that are finitely deep are enough, if the logic satisfies
>> this condition. So logics that don’t (such as classical /higher-order/
>> predicate logic) are kind of ruled out as computationally plausible logics
>> anyway.
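>>
>> As a toy illustration of this "survey the interpretations" idea, here is
>> a Python sketch for the propositional case only, where the space of
>> interpretations is already finite (the names are invented for the
>> example):
>>
>>     from itertools import product
>>
>>     def entails(A, B, atoms):
>>         # B follows from A iff no interpretation makes A true and B
>>         # false, i.e. iff the search for a counterexample fails.
>>         for values in product([False, True], repeat=len(atoms)):
>>             interp = dict(zip(atoms, values))
>>             if A(interp) and not B(interp):
>>                 return False  # found a counterexample
>>         return True           # survey complete, no counterexample
>>
>>     # (p and q) entails p; p does not entail (p and q).
>>     assert entails(lambda i: i["p"] and i["q"], lambda i: i["p"], ["p", "q"])
>>     assert not entails(lambda i: i["p"], lambda i: i["p"] and i["q"], ["p", "q"])
>>
>> In the first-order case the interpretations cannot simply be listed like
>> this, which is exactly where the L-S condition earns its keep.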
>>
>> OK, this is a very abbreviated summary of the reasoning, but the main
>> takeaway point is that these conditions, although maybe a bit
>> abstruse-seeming, really are very plausible conditions for any reasonable
>> KR notation which comes with reasoning machinery. And Lindström's theorem
>> is, well, a theorem.
>>
>> Hope this helps.
>>
>>
>> > So FOL seems like a good ‘reference’ benchmark for KR
>>
>> What about things like Probabilistic Logic Networks (or Bayesian networks)?
>>
>>
>> I do not know for sure, but I would guess that a result similar to
>> Lindström's would apply to logics with any kind of truth values, including
>> probabilities. My own, much more subjective, view is that probabilities are
>> simply the wrong model for KR. For just one observation, people are
>> absurdly poor at making probability estimates. But I won't try to justify
>> this view here :-)
>>
>>
>> By the way, the OpenCog project was very suspicious of my work when I
>> cited RDF. If you are interested I can create a document describing how
>> their database, called the atom space, works; it is a so-called
>> hypergraph database.
>>
>> And those people are not alone. Other people told me RDF is a dead end
>> in terms of (modern) KR for AI.
>>
>>
>> I might agree with that conclusion. For AI purposes, RDF is absurdly weak
>> and inexpressive. But AI is not what it is trying to do.
>>
>> Pat
>>
>>
>> But still, I am here :)
>>
>>
>>
>>>
>>>
>>> What we should be looking for is a generalized form in which objects can
>>> be linked. The graph is an obvious form.
>>> But we are focusing too much on the nuts-and-bolts level.
>>>
>>> Since it is the generally accepted intention to use AI in all walks of
>>> professional, commercial, personal and academic life, we should be looking
>>> at the various ways of representing knowledge.
>>>
>>>
>>> Otherwise we end up creating knowledge representation silos.
>>>
>>>
>>> Avoiding KR silos was one of the primary goals of the entire
>>> semantic-web linked-data initiative. But this has many aspects. First, we
>>> need to agree to all use a common basic notation. Triples (=RDF =Knowledge
>>> Graph =JSON-LD) have emerged as the popular choice. Getting just this much
>>> agreement has taken 15 years and thousands of man-hours of strenuous effort
>>> and bitterly contested compromises, so let us not try to undo any of that,
>>> no matter what the imperfections are of the final choice.
>>>
>>
>> For the record, I am not trying to undo that. As a new actor, I am
>> working toward it. Like any newbie, I may phrase some questions badly,
>> which could lead you to think that I want a revolution.
>>
>>
>>> The next stage, which we are just getting started on, involves agreeing
>>> on a common vocabulary for referring to things, or perhaps a universal
>>> mechanism for clearly indicating that your name for something means the
>>> same as my name for that same thing. This seems to be much harder than the
>>> semantic KR pioneers anticipated.
>>>
>>
>> Good question.
>>
>>
>>> The third stage involves having a global agreement on the ontological
>>> foundations of our descriptions, what used to be called the ‘upper level
>>> ontology’. This is where we get into actual metaphysical disagreements
>>> about the nature of reality (are physical objects extended in time? How do
>>> we handle vague boundaries? What are the relationships between written
>>> tokens, images, symbols, conventions and the things they represent? What is
>>> a ‘background’? What is a ‘shape’? Is a bronze statue the same kind of
>>> thing as a piece of bronze? What changes when someone signs a contract?
>>> Etc. etc., etc.) This is where AI-KR and more recently, applied ontology
>>> engineering (not to mention philosophy) has been working for the past 40 or
>>> 50 years, and I see very little hope of any clear agreements acceptable to
>>> a large percentage of the world’s users.
>>>
>>
>> Pragmatic self: forget about that part of the specification?
>>
>>
>>> Category theory diagrams, graphs and Feynman diagrams are three well-known
>>> forms of representing knowledge graphs, but only in semantic web
>>> technologies do we specify tuples, a restrictive form of representation.
>>>
>>> Category diagrams and Feynman diagrams are meaningful only within highly
>>> restricted and formal fields (category theory and quantum physics,
>>> respectively), so they have little to do with general KR. If your point is that
>>> diagrams are useful, one can of course point to many examples of them being
>>> useful to human users, but this does not make them obviously useful in
>>> computer applications.
>>>
>>> Tuples are not more restrictive than graphs, since a collection of
>>> tuples is simply one way to implement a graph. Tuple stores ARE graphs.
>>>
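>>> To make the point concrete, a minimal sketch (plain Python; the names
>>> are invented for illustration): a set of triples is already a directed
>>> labelled graph, with pattern matching as edge traversal.
>>>
>>>     # A triple store as a plain set of (subject, predicate, object)
>>>     # tuples, i.e. the edge set of a directed labelled graph.
>>>     triples = {
>>>         ("alice", "knows", "bob"),
>>>         ("bob", "knows", "carol"),
>>>     }
>>>
>>>     def neighbours(node, predicate):
>>>         # Follow the edges labelled `predicate` out of `node`.
>>>         return {o for (s, p, o) in triples if s == node and p == predicate}
>>>
>>>     assert neighbours("alice", "knows") == {"bob"}
>>>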
>>
>> I would not say "tuple stores are just [property] graphs", because my
>> implementation is quite different. But I agree that tuple stores are some
>> kind of graph.
>>
>> For the record, the idea of the n-tuple store (or chunks store) came from
>> the need to version a quad store, in order to factor some code. Later I
>> discovered it could be useful in other contexts: provenance, quality,
>> space, some kinds of time. Again, the nstore is a performance trick:
>> whatever you can do with a triple store you can do with an nstore; only
>> the performance will differ, and the nstore should be faster. I am by no
>> means trying to force the WG to adopt the proposal I made on github
>> <https://github.com/w3c/sparql-12/issues/98>;
>> I hope to learn something from the conversation, and I already have.
>>
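>> For readers who want the flavour of the idea, a much-simplified sketch
>> (this is NOT the actual nstore implementation linked above; a real one
>> would also index the tuples under several key orderings):
>>
>>     class NStore:
>>         """An n-tuple store: a triple store widened with extra columns."""
>>
>>         def __init__(self, columns):
>>             self.columns = columns  # e.g. ("s", "p", "o", "source")
>>             self.tuples = set()
>>
>>         def add(self, *values):
>>             assert len(values) == len(self.columns)
>>             self.tuples.add(values)
>>
>>         def match(self, **pattern):
>>             # Yield every tuple whose named columns equal the pattern.
>>             for t in self.tuples:
>>                 row = dict(zip(self.columns, t))
>>                 if all(row[k] == v for k, v in pattern.items()):
>>                     yield row
>>
>>     store = NStore(("s", "p", "o", "source"))
>>     store.add("alice", "knows", "bob", "crawl-1")
>>     assert any(r["o"] == "bob" for r in store.match(s="alice", p="knows"))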
>>
>>
>>> Best wishes
>>>
>>> Pat Hayes
>>>
>>>
>>> Milton Ponson
>>> GSM: +297 747 8280
>>> PO Box 1154, Oranjestad
>>> Aruba, Dutch Caribbean
>>> Project Paradigm: Bringing the ICT tools for sustainable development to
>>> all stakeholders worldwide through collaborative research on applied
>>> mathematics, advanced modeling, software and standards development
>>>
>>>
>>> On Sunday, June 23, 2019, 3:57:01 AM ADT, Paola Di Maio <
>>> paoladimaio10@gmail.com> wrote:
>>>
>>>
>>>
>>>
>>> Chunks are also used in NLP (which is part of/related to CS either way),
>>> a.k.a. tokens.
>>> Various useful references come up when searching for chunks as tokens:
>>>
>>> https://docs.oasis-open.org/dita/v1.2/os/spec/archSpec/chunking.html
>>>
>>> https://www.oxygenxml.com/doc/versions/21.1/ug-editor/topics/eppo-chunking.html
>>>
>>> On Sun, Jun 23, 2019 at 1:12 AM Dave Raggett <dsr@w3.org> wrote:
>>>
>>>
>>>
>>> On 22 Jun 2019, at 14:54, Amirouche Boubekki <
>>> amirouche.boubekki@gmail.com> wrote:
>>>
>>> On Fri, 21 Jun 2019 at 16:27, Dave Raggett <dsr@w3.org> wrote:
>>>
>>> Researchers in Cognitive Science have used graphs of chunks to represent
>>> declarative knowledge for decades, and chunk is their name for an n-tuple.
>>>
>>>
>>> I tried to look up "graph of chunks" in relation to cognitive science. I
>>> could not find anything interesting, apart from this white paper about
>>> "accelerating science" [0], that intersects with my goals.
>>>
>>> [0]
>>> https://cra.org/ccc/wp-content/uploads/sites/2/2016/02/Accelerating-Science-Whitepaper-CCC-Final2.pdf
>>>
>>>
>>> Chunks are used in cognitive architectures such as ACT-R, SOAR and
>>> CHREST, and are inspired by studies of human memory recall, starting with
>>> George Miller in 1956 and taken further by a succession of researchers.
>>> Gobet et al. define a chunk as “a collection of elements having strong
>>> associations with one another, but weak associations with elements within
>>> other chunks.” Cognitive Science uses computational models as the basis for
>>> making quantitative descriptions of different aspects of cognition, including
>>> memory and reasoning. There are similarities to Frames and Property Graphs.
>>>
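>>> A rough sketch of the data model being described (an illustration only,
>>> not code from ACT-R, SOAR or CHREST): a chunk is a typed bundle of
>>> slot/value pairs, and a graph of chunks arises when slot values name
>>> other chunks.
>>>
>>>     from dataclasses import dataclass, field
>>>
>>>     @dataclass
>>>     class Chunk:
>>>         isa: str                  # chunk type, e.g. "addition-fact"
>>>         slots: dict = field(default_factory=dict)
>>>
>>>     fact = Chunk(isa="addition-fact",
>>>                  slots={"addend1": 3, "addend2": 4, "sum": 7})
>>>     goal = Chunk(isa="goal", slots={"retrieve": fact})  # chunk -> chunk
>>>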
>>> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
>>> W3C Data Activity Lead & W3C champion for the Web of things
>>>
>>
>> --
>> Regards
>>
>> Chris
>> ++++
>>
>> Chief Executive, Lacibus <https://lacibus.net/> Ltd
>> chris@lacibus.net
>>
>>
>> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
>> W3C Data Activity Lead & W3C champion for the Web of things
>>
>
>
> ---
> Marco Neumann
> KONA
>


---
Marco Neumann
KONA

Received on Wednesday, 26 June 2019 18:27:03 UTC