Re: neural networks being purported as KR?

We need a combination of symbolic and sub-symbolic approaches, as has long been argued by John R. Anderson at CMU. His work on ACT-R embeds statistical information as part of graphs and rules. Natural language shows that each of us is able to process rich graph representations, but somehow this is done on top of spiking neural networks that are much more complex than today’s artificial neural networks and which operate in different ways. Some challenges to explain include how the brain can learn from far less data than artificial neural networks require, how spiking neural networks can support the symbolic representations needed for language, and likewise, how spiking neural networks can implement rule sets that operate on those representations.

Cognitive neuroscience provides some clues, e.g. neuronal spiking patterns appear regular in motor areas, random in visual areas, and bursty in the prefrontal area. Some have suggested that working memory holds temporary graph representations, e.g. parse trees for natural language utterances, as tensor expressions over noisy n-dimensional spaces corresponding to neural firing patterns. As far as I am aware, we still have a long way to go in explaining how this can work at scale in practice.
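To make the tensor-expression idea concrete, here is a toy sketch in the spirit of Smolensky-style tensor product binding (my own illustration, not taken from ACT-R or any specific model): fillers (symbols) are bound to structural roles via outer products, and the bindings are superposed into a single trace, from which a filler can be recovered by projecting onto its role vector. One-hot role vectors are an assumption made for clarity; noisy high-dimensional vectors would give approximate rather than exact recall.

```python
# Toy tensor product binding: store a tiny parse fragment ("cat sleeps")
# as one superposed matrix, then recover each filler by role.

def outer(u, v):
    # Outer product of two vectors -> matrix
    return [[ui * vj for vj in v] for ui in u]

def mat_add(a, b):
    # Elementwise sum of two matrices (superposition of bindings)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_vec(m, v):
    # Matrix-vector product: projects the trace onto a role vector
    return [sum(x * y for x, y in zip(row, v)) for row in m]

# Orthonormal role vectors (one-hot here, so unbinding is exact)
ROLE_SUBJ = [1, 0]
ROLE_VERB = [0, 1]

# Filler vectors standing in for the symbols CAT and SLEEPS
CAT    = [1, 0, 0]
SLEEPS = [0, 1, 0]

# Bind each filler to its role and superpose into one memory trace
trace = mat_add(outer(CAT, ROLE_SUBJ), outer(SLEEPS, ROLE_VERB))

# Unbind: project the trace back onto each role to recover the filler
print(mat_vec(trace, ROLE_SUBJ))  # [1, 0, 0] -> CAT
print(mat_vec(trace, ROLE_VERB))  # [0, 1, 0] -> SLEEPS
```

The point of the sketch is that discrete graph structure can live inside purely vector/tensor operations, which is one candidate bridge between symbolic parse trees and distributed neural firing patterns.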


> On 26 Jul 2019, at 16:00, Agnieszka Ławrynowicz <agnieszka.lawrynowicz@cs.put.poznan.pl> wrote:
> 
> Hi All, 
> 
> Of course deep neural networks may be seen as a form of knowledge representation in my opinion; more precisely, they are sub-symbolic or connectionist representations, versus the symbolic representations which are the standard in the Semantic Web. 
> Though it is not the „latest KR” — it has been there for a long time under exactly the above name (sub-symbolic representations). 
> 
> Best Regards and cheers,
> Agnieszka
> 
> 
> 
>> Message written by Diogo FC Patrao <djogopatrao@gmail.com> on 26.07.2019 at 16:40:
>> 
>> Hi Paola
>> 
>> I'd say a NN is not as "knowledgy" as a decision tree. I would argue that a NN is a mathematical model that compiles previous data representing causes/consequences, so it's the same type of knowledge as, say, a logarithm table, versus the type of knowledge that the infinite series formula for evaluating logarithms would represent.
>> 
>> They certainly don't look the same thing to me.
>> 
>> Cheers,
>> 
>> dfcp
>> 
>> --
>> diogo patrão
>> 
>> 
>> 
>> 
>> On Thu, Jul 25, 2019 at 11:58 PM Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> Sorry to bang on this topic, but it's the task at hand at the moment
>> 
>> I just found an article which is a good scientific survey but then purports NN to be a type of KR
>> (casually sneaking in NN as the latest KR)
>> 
>> This is published in a Springer peer-reviewed publication and it makes the hair on my head stand up
>> 
>> This is the kind of rubbish that, without further qualification, is being passed down
>> as the latest research, and on which future generations of AI scientists are being fed -
>> 
>> I wonder if anyone else has a problem with this proposition
>> (sign of the times?)
>> I am doing my best within my means to identify and contain this peril
>> 
>> Article https://link-springer-com.nls.idm.oclc.org/article/10.1007/s00170-018-2433-8
>> 
>> A survey of knowledge representation methods and applications in machining process planning
>> 
>> The machining process is the act of preparing the detailed operating instructions for changing an engineering design into an end product, which involves the removal of material from the part. Today, machining ...
>> 
>> Xiuling Li, Shusheng Zhang, Rui Huang… in The International Journal of Advanced Manu… (2018)
>> 
>> 
>> 
> 

Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
W3C Data Activity Lead & W3C champion for the Web of things 

Received on Friday, 26 July 2019 15:49:39 UTC