Re: Intelligence without representation

Lacibus' description of Soda is now available in StratML format at 
http://stratml.us/drybridge/index.htm#LCBS

From my perspective, given current IT capabilities, artificial 
ignorance 
<https://www.linkedin.com/pulse/artificial-ignorance-owen-ambur/> is 
required in order to fail to:

    a) enable the accumulation of intelligence from the bottom up, and

    b) pay more efficient and effective attention not merely to
    intentions but also to results.

The vision of the StratML standard is: *A worldwide web of intentions, 
stakeholders, and results*.

Owen


On 11/23/2019 7:44 AM, Lacibus - Chris wrote:
> Thanks, Dave - it looks pretty relevant to me!
>
> Regards
>
> Chris
> ++++
>
> Chief Executive, Lacibus <https://lacibus.com> Ltd
> chris@lacibus.net
>
> On 23 November 2019 at 12:16:19, Dave Raggett (dsr@w3.org) wrote:
>
>> The idea that intelligence can emerge bottom up is consistent with 
>> theories of evolution, in which neural architectures are selected 
>> that offer better chances of survival and reproduction. This 
>> includes ways to speed up learning by embedding prior knowledge and 
>> by effectively exploiting past experience. It further points to the 
>> benefits of a layered approach to representing and processing data, 
>> and to how we learn to recognise animate vs inanimate objects, their 
>> identity, constituent parts and behaviours, starting from pixel-level 
>> representations.
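>>
>> As a toy illustration of that layering (everything below is a 
>> hypothetical placeholder, not a model of real vision), each function 
>> maps the representation beneath it to a more abstract one, from 
>> pixels up to an object label:
>>
>>     # Purely illustrative sketch of layered processing: each layer
>>     # maps the representation below it to a more abstract one,
>>     # starting from raw pixel values.
>>
>>     def edges(pixels):
>>         # Crude "edge" layer: mark where neighbouring pixels differ.
>>         return [abs(a - b) for a, b in zip(pixels, pixels[1:])]
>>
>>     def parts(edge_map):
>>         # "Parts" layer: count contiguous runs of strong edges.
>>         strong = [e > 0.5 for e in edge_map]
>>         return sum(1 for prev, cur in zip([False] + strong, strong)
>>                    if cur and not prev)
>>
>>     def label(num_parts):
>>         # "Object" layer: a toy classification rule on part count.
>>         return "animate" if num_parts >= 2 else "inanimate"
>>
>>     row = [0.0, 0.9, 0.0, 0.0, 0.8, 0.0]  # one row of pixel values
>>     print(label(parts(edges(row))))       # -> animate
>>
>> Each layer consumes only the layer below it, which is what lets 
>> richer representations accumulate bottom up.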
>>
>> Brooks reacted against the assumption that central control is 
>> needed, and against the idea of an agent sitting in the machine 
>> doing the work on the representations; central control is indeed not 
>> needed for simple organisms. However, it is definitely a feature of 
>> the human mind, where consciousness is associated with sequential 
>> rule execution by the cortico-basal ganglia circuit.
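>>
>> To illustrate what sequential rule execution means here, a toy 
>> production system might run like this (the facts and rules are 
>> invented for the example; this is a sketch, not any particular 
>> cognitive architecture):
>>
>>     # Working memory: the current set of facts.
>>     memory = {"goal": "greet", "person": "visible"}
>>
>>     # Each rule pairs a condition on working memory with an action
>>     # that updates it.
>>     rules = [
>>         (lambda m: m.get("goal") == "greet" and m.get("person") == "visible",
>>          lambda m: m.update(action="say_hello", goal="done")),
>>         (lambda m: m.get("goal") == "done",
>>          lambda m: m.update(halt=True)),
>>     ]
>>
>>     # The serial "central controller": on each cycle, fire the first
>>     # rule whose condition matches, one rule at a time.
>>     while not memory.get("halt"):
>>         for condition, action in rules:
>>             if condition(memory):
>>                 action(memory)
>>                 break
>>         else:
>>             break  # no rule matched; stop
>>
>>     print(memory["action"])  # -> say_hello
>>
>> The serial bottleneck is the point: one rule fires per cycle, which 
>> is exactly the kind of central, sequential control that simple 
>> organisms can do without.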
>>
>> Brooks further asserted that intelligent behaviour can be generated 
>> without the need for explicit manipulable internal representations. 
>> However, that just raises the question of what that means with 
>> respect to different kinds of internal representations.
>>
>> For the human brain, we can use multiple levels of description, e.g. 
>> biochemical interactions in the synapses, the associated chemical 
>> and electrical gradients, the transmission of pulses along nerve 
>> fibres, the statistical correlation of pulse rates across bundles of 
>> fibres, their relation to vectors in high-dimensional spaces, and, 
>> ultimately, concepts with sets of properties, the rules that operate 
>> upon them, goals, tasks, and emotions.
>>
>> The cortico-cerebellar circuit is perhaps closer to Brooks's 
>> subsumption architecture. This circuit provides a means for actions 
>> initiated at a conscious level to be devolved to a separate system 
>> of systems. The cerebellum acts a bit like a traffic controller, 
>> coordinating the activation of many muscles based upon real-time 
>> information from the senses relayed via the cortex. This involves 
>> many systems acting in parallel using a layered approach to control.
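>>
>> A deliberately simplified sketch of that layered control follows. 
>> The layers and sensor fields are hypothetical, and real subsumption 
>> wires concurrently running state machines together with suppression 
>> and inhibition links rather than polling layers in turn:
>>
>>     # Toy subsumption-style controller: layers are polled from
>>     # highest to lowest priority; the first layer with an opinion
>>     # drives the actuators, and lower layers act as defaults.
>>
>>     def avoid(sensors):
>>         # Reflex layer: turn away from nearby obstacles.
>>         if sensors["obstacle_distance"] < 0.5:
>>             return "turn_away"
>>         return None
>>
>>     def seek_goal(sensors):
>>         # Task layer: head toward a goal when one is visible.
>>         if sensors["goal_visible"]:
>>             return "turn_toward_goal"
>>         return None
>>
>>     def wander(sensors):
>>         # Default layer: keep moving when nothing else applies.
>>         return "move_forward"
>>
>>     layers = [avoid, seek_goal, wander]
>>
>>     def control_step(sensors):
>>         for layer in layers:
>>             action = layer(sensors)
>>             if action is not None:
>>                 return action
>>         return "idle"
>>
>>     print(control_step({"obstacle_distance": 2.0, "goal_visible": True}))
>>     # -> turn_toward_goal
>>     print(control_step({"obstacle_distance": 0.2, "goal_visible": True}))
>>     # -> turn_away
>>
>> There is no central model of the world here: each layer reads the 
>> sensors directly and competes for the actuators.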
>>
>> The above is a long way from the mindset of classical AI that Brooks 
>> was reacting against, and is grounded in progress in the scientific 
>> study of the human mind and behaviour as conducted in the cognitive 
>> sciences, rather than in a narrow conception of AI and KR.  Instead 
>> of focusing on the manual development of knowledge representations, 
>> it would be advantageous to look at how these can be learned through 
>> interactions in the real world or in simulated virtual worlds, 
>> drawing inspiration from the cognitive and linguistic stages of 
>> development of young human infants.
>>
>> This is almost certainly the wrong forum for discussing such ideas, 
>> but at least I have given you a sense of the approach I am exploring.
>>
>>
>>> On 23 Nov 2019, at 02:24, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>>
>>> I think I found the culprit, or at least one of the papers 
>>> responsible for this madness of doing AI without KR:
>>> https://web.stanford.edu/class/cs331b/2016/presentations/paper17.pdf
>>> I find the paper very interesting, although I disagree with it.
>>>
>>> Do people know of other papers that advance a similar hypothesis 
>>> (that KR is not indispensable in AI, for whatever reason)?
>>> Thanks a lot
>>> PDM
>>>
>>
>> Dave Raggett <dsr@w3.org>
>> http://www.w3.org/People/Raggett
>> W3C Data Activity Lead & W3C champion for the Web of Things
>>
>>
>>
