- From: Henry Story <henry.story@bblfish.net>
- Date: Sun, 21 Oct 2018 11:54:10 +0200
- To: 我 <1047571207@qq.com>
- Cc: Semantic Web <semantic-web@w3.org>
- Message-Id: <1F76412A-B860-41AB-B00C-3A96EBFC04B1@bblfish.net>
> On 21 Oct 2018, at 05:39, 我 <1047571207@qq.com> wrote:
>
> hello, everyone.
> After some survey, I have reached some conclusions:
> 1. Knowledge graph is a concept made by Google.

Not sure if it comes from Google. If you look at Category Theory you will see that categories are graphs plus paths in graphs. It is therefore a mathematical concept at the base of all contemporary mathematics, starting in the 1940s, taken seriously in the 1960s, and becoming more and more encompassing since.

> 2. Knowledge graph is a kind of knowledge engineering.
> 3. Knowledge engineering has two components: knowledge base and inference engine.

One could look at it that way, I suppose.

> 4. Knowledge engineering has two schools of thought: rationalism and empiricism.

There are thinkers such as Robert Brandom who would agree with you about the split. One could think of it as starting with Hume's famous two-part definition of causality: once in terms of counterfactuals, which he then drops immediately for epistemological reasons, in order to consider only regularities. All one can see, he argues, are patterns of repetition. This seems reasonable, but it turns out that it quickly erodes all thought. See his course "Analytic Pragmatism, Expressivism, and Modality: The 2014 Nordic Pragmatism Lectures":
http://www.pitt.edu/~brandom/currentwork.html

Brandom argues that Kant's turn was due to this, and so was Hegel's. Bertrand Russell put empiricism back on the table with his phenomenological empiricism. Brandom argues that there is a way out, which is analytic pragmatism, a synthesis of those views.

> 5. Rationalism derives the rule-based approach and empiricism derives the statistics-based approach.

If you believe that you can only go on what you see, and you can only see patterns, then you will be forced into a statistics-based approach.
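To make the "knowledge base plus inference engine" split in point 3, and the rule-based approach of point 5, a bit more concrete: here is a minimal sketch in Python (a toy of my own, not any real system) where the knowledge base is a set of triples and the inference engine is a forward-chaining loop applying a single transitivity rule:

```python
# Toy illustration of the knowledge base / inference engine split:
# the KB is a set of (subject, predicate, object) triples, and the
# engine applies rules until no new facts appear (a fixed point).

def forward_chain(facts, rules):
    """Repeatedly apply each rule to the fact set until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

def transitivity(pred):
    """Rule schema: (a, pred, b) and (b, pred, c) imply (a, pred, c)."""
    def rule(facts):
        return {(a, pred, c)
                for (a, p1, b) in facts if p1 == pred
                for (b2, p2, c) in facts if p2 == pred and b2 == b}
    return rule

kb = {("cat", "subClassOf", "mammal"),
      ("mammal", "subClassOf", "animal")}

closed = forward_chain(kb, [transitivity("subClassOf")])
# ("cat", "subClassOf", "animal") is now derivable from the two asserted facts
```

Soundness and completeness, which the rule-based school cares about, are then properties of the engine: does it derive only facts entailed by the KB, and all of them?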
But you will then have trouble understanding what the statistical mathematics you are using is about, since mathematics is not something that makes sense to analyze statistically, at least not as far as the justification of mathematics goes. Of course there are other reasons you may use statistics and probabilities that do not depend on the empiricist/rationalist opposition. You can give probabilistic reasoning a basis in modal logic. See Lewis, D. (1980). A Subjectivist's Guide to Objective Chance. In Ifs (pp. 267-297). Springer, Dordrecht.

> 6. The rule-based approaches value the soundness and completeness of the inference engine, and the statistics-based approaches value the richness of the knowledge base.

That can help explain major trends. But the two are getting mixed up, especially with the development of impossible-worlds semantics, or hyperintensional reasoning, where you need to take into account that people can have completely contradictory beliefs. (Some think that this is Theresa May's situation with regard to Brexit.)

> 7. OWL is a kind of rule-based approach and the vector-based approach is a kind of statistics-based approach.

Sometimes people use those tools that way. But logic can also be mapped into topological spaces, so I guess vectors can be used there too.

> 8. On the internet, the knowledge base is huge. It is always impossible to do a complex inference on a huge knowledge base.

Yes, that is the situation Google is in, but not the situation that you, I, or others who may be using SoLiD Hyper-Apps are in. If we try to organize a meeting with our hyper-calendar, then this may result in a meeting actually happening. If I can no longer make it to a meeting, then I am obliged to inform you of that. Rationality is the obligation, which can be spelled out game-theoretically, that we keep our beliefs consistent, that when inconsistencies arise we revise our beliefs, and that we inform others of those revisions.
That is as true of pre-internet societies as it is of telephone-based ones, the current hypertext web, and the future hyper-app web. It sometimes makes sense to think probabilistically about things, to leave options open as long as possible, but at some point one has to be determinate about a decision: either you are at the meeting place or you are not.

> My question is: In Google, is the statistics-based approach the main approach? Is OWL used in Google's knowledge graph? Besides the Google search engine, is there any other successful application of the knowledge graph? Besides the knowledge graph, is there any other successful knowledge engineering in the industrial world?

Google search has to use statistics-based approaches, since it is dealing with vast amounts of information where it can neither control the veracity of the information nor make decisions based on it, and its service is to find patterns in huge data structures that people can use to make connections. It is therefore in the purely empirical position of having to work only with patterns when dealing with data for which it cannot ascertain the truth. Still, the value of the PageRank algorithm rests on the idea that people make links that they find valuable, and that there are ways of distinguishing true from fake people. It gives higher rank to pages coming from universities or other respected institutions, which regulate it.

But Google is not just based on statistical reasoning. Consider a service such as the excellent http://flights.google.com/. It uses data from providers with which it has a legal relation, and there it uses databases which are knowledge graphs, in a way that is similar to other relational databases.
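The PageRank intuition just mentioned, that rank flows along the links people found valuable enough to make, can be sketched as a power iteration over a small link graph. The page names below are invented for illustration only:

```python
# Minimal PageRank power iteration over a tiny, made-up link graph.
# Each page's rank is split evenly among its outgoing links; the
# damping factor d models a surfer who occasionally jumps to a
# random page instead of following a link.

def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - d) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += d * share
        rank = new_rank
    return rank

links = {
    "university.example": ["blog.example"],
    "blog.example": ["university.example", "shop.example"],
    "shop.example": ["university.example"],
}
ranks = pagerank(links)
# university.example, linked from both other pages, ends up ranked highest
```

This is the purely empirical stance in miniature: the algorithm never asks whether any page is true, only how the pattern of links distributes attention.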
It does a better job because it uses topological and geographical information to find airports close to a particular location, without forcing the end user to work out which those might be. It also brings in a lot of other data to fill in the picture, for example on flight regularity.

So really you should ask yourself: what is a statistical approach, when is it correct to use one, and when is it incorrect? A probabilistic approach is correct when you are looking at a spread of possibilities and you want to ascertain the relative size of each. You can also use statistics to speed up reasoning, to find heuristics that get you to the right answer. But at some point statistics requires you to work with data that has a relation to the real world, or that is at least under an obligation to be so related (so that you can go to court in case a product is defective, for example).
Received on Sunday, 21 October 2018 09:55:30 UTC