Is scalability the key property of knowledge graphs?

Dear all:


When I first encountered knowledge graphs, I was very confused. Unlike other AI techniques, a knowledge graph is not a pattern recognition algorithm that produces some "output" given some "input" (such as a classification algorithm); instead, it is a set of modeling languages (such as OWL and RDF) and databases (such as Neo4j). So in my opinion, knowledge graphs are more an engineering problem than a mathematical theory.
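
As a concrete illustration of that difference, here is a minimal sketch using the Python library rdflib (the facts and the example.org namespace are invented for illustration):

    from rdflib import Graph, Namespace, Literal

    # Hypothetical namespace and facts, invented for illustration.
    EX = Namespace("http://example.org/")
    g = Graph()

    # A knowledge graph stores explicit facts as subject-predicate-object
    # triples rather than learning a mapping from "input" to "output".
    g.add((EX.Tokyo, EX.isCapitalOf, EX.Japan))
    g.add((EX.Japan, EX.population, Literal(126000000)))

    # There is no "prediction" step: we simply look the facts up.
    for s, p, o in g.triples((EX.Tokyo, None, None)):
        print(s, p, o)

The graph is data plus a schema rather than a trained model, which is why it feels more like engineering than mathematics.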


Then I realized that, unlike pattern recognition algorithms, knowledge graphs were created so that computers all over the world could communicate with each other in a common language, which leads me to my question: is scalability the key property of knowledge graphs?


There are many knowledge vaults written in different languages (such as OWL and RDF), but it is always hard to merge them, and there is no standard knowledge vault on which we can do advanced development. So is it necessary to open a scalable, standard knowledge vault that everyone can keep extending and improving, just like the Linux kernel or Wikipedia? What kind of knowledge should such a standard knowledge vault contain so that it can be universal? I imagine the standard knowledge vault as an origin that all other applications copy; then all applications can communicate under the same common sense. For example, when one application declares "night", all the other applications will know it is dark. A sketch of the merging problem follows below.
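
To make the merging problem concrete, here is a sketch in Python with rdflib; the two applications, their namespaces (app1.example, app2.example), and the facts are all invented. Combining the raw triples is mechanical; the hard part is declaring that two vocabularies mean the same thing, which is what owl:sameAs is for:

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    # Two hypothetical applications describe "night" with different URIs.
    app1 = Graph()
    app1.parse(data="""
        @prefix app1: <http://app1.example/> .
        app1:Night app1:implies app1:Dark .
    """, format="turtle")

    app2 = Graph()
    app2.parse(data="""
        @prefix app2: <http://app2.example/> .
        app2:CurrentState app2:hasValue app2:Nighttime .
    """, format="turtle")

    # Merging the triples themselves is easy...
    merged = app1 + app2

    # ...but someone still has to assert that the two names coincide.
    merged.add((URIRef("http://app2.example/Nighttime"),
                OWL.sameAs,
                URIRef("http://app1.example/Night")))

Note that rdflib stores the owl:sameAs link but does not reason over it by itself; an OWL reasoner would be needed before the second application can actually conclude that it is dark. A shared, standard vault would remove the need to hand-write such links for every pair of applications.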


As far as I know, knowledge graphs are implemented as query services, but is it possible to implement one as a programming language, just like C++ or Java? In that way, a computer could understand natural language directly, humans could communicate with computers in natural language, and one computer could communicate with another in natural language.
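
As a rough sketch of the existing "query service" style, here is how one would ask a question with SPARQL through rdflib (data invented as before):

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:Tokyo ex:isCapitalOf ex:Japan .
        ex:Paris ex:isCapitalOf ex:France .
    """, format="turtle")

    # SPARQL plays the role that SQL plays for relational databases:
    # a query language, not a general-purpose programming language.
    result = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?city ?country
        WHERE { ?city ex:isCapitalOf ?country . }
    """)
    for row in result:
        print(row.city, row.country)

SPARQL answers questions over the graph, but it remains a query language rather than a full programming language like C++ or Java, which is what the question above is asking about.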
