Re: Forbes on next generation of AI

Oh, I wanted to say something about "modalities"...

But I hope it's ok to follow up on that.

(I'm still digesting and reviewing all the background work... Exciting work!)

Timothy Holborn.

On Fri, 3 Sep 2021, 1:51 pm Timothy Holborn, <timothy.holborn@gmail.com>
wrote:

> The thing that inspired me to start work on a solution where people would
> store their own data and share links, back in 2000, was my grandfather's
> cousin's work on synapses (Eccles):
> https://plato.stanford.edu/entries/qt-consciousness/
>
> Note also "status of the observer"
> https://youtu.be/ZYPjXz1MVv0
>
> (Temporal considerations therein, a bit like the double-slit tests, which
> can be thought of as ripples in a pond, where the experiment is from a
> static point in time with only two interference projection / input points;
> obviously observational reality is far more sophisticated at any moment in
> time, let alone when accumulated temporally, etc. Therein, somewhat
> "multidimensional" imo.)
>
> A far longer, yet still fairly remarkable note is:
> https://youtu.be/Xx0SsffdMBw
>
> But I've collected a few; one playlist is:
> https://youtube.com/playlist?list=PLCbmz0VSZ_voTpRK9-o5RksERak4kOL40
>
>
> IMO, it's fundamentally about ensuring agency with respect to the
> continuum linked to the ontological design function, whilst enabling a
> plurality of "universes" subject to common (sense) rules.
>
> Depending, of course, on the type of agent, which is then associated with
> the ideology of the system administrators / business rules.
>
> To tethics or not to tethics, such an important question!
>
> IMO, the hard but correct design can support "reality check tech": the
> ability to limit noise (or improve the signal-to-noise ratio) so as to
> ensure an enhanced capacity to debate the nuances linked to reality, which
> will be of far higher bandwidth than mankind has the capacity to process
> for the foreseeable future; with or without a Neuralink (not a fan).
>
> But the development pathway is very different depending on how those
> sorts of philosophical design questions are considered and resolved with
> some degree of commitment.
>
> Common sense / causality, both important.
>
> Other than that, I still think AI is such a muddy term. It's almost like
> ICT, just so broad...
>
> Cognitive AI: does it encourage dissociative behaviours, or act like
> parity RAM, with a protective mechanism to guard against errors?
>
> One is far more energy-efficient (less "consumptive") than the other,
> which affects productivity, imo...
>
> Timothy Holborn.
>
> On Fri, 3 Sep 2021, 1:33 pm Paola Di Maio, <paola.dimaio@gmail.com> wrote:
>
>> Hey Dave and all
>> I think that what is being proposed as the future of AI is promoting
>> certain technical advances which are interesting but far from being
>> intelligence, for a number of reasons which I expound elsewhere.
>> It is not AI, in the sense of autonomous intelligence. This intelligence
>> is just the result of some clever algorithm and the execution of
>> sophisticated maths. It is not intelligent at all;
>> as you point out, it fails basic intelligence tests :-) It cannot produce
>> anything that has not been encoded. It has no such ability.
>> We should not confuse advanced computation with intelligence.
>> Can these methods deliver useful computational results and be applied
>> usefully?
>> Yes. Are they intelligent? They only encode some of the cognitive
>> functions of their developers,
>> as well as their limitations (i.e., if the programmer had designed a
>> system capable of answering out-of-the-box questions, the AI would be
>> able to answer them).
>>
>> Intelligence, by contrast, is innate reasoning. Nobody programs the
>> innate intelligence of sentient beings, other than perhaps the
>> brainwashing that comes with education/learning and its constraints.
>> The question then is: can such natural intelligence be engineered?
>> It's not needed, and it is not desirable, because innate intelligence in
>> humans is often suppressed and even punished. When individuals use their
>> intelligence, they start questioning the purpose of the machine/s
>> (including society and imposed norms).
>>
>> It's a long discussion.
>> I reject that what is being purported as AI is intelligence at all.
>> Sitting naked in the forest, ergo sum.
>>
>>
>>
>> On Thu, Sep 2, 2021 at 10:24 PM Dave Raggett <dsr@w3.org> wrote:
>>
>>> What do you think about the ideas in the Forbes article on the next
>>> generation of AI?
>>>
>>> See:
>>> https://www.forbes.com/sites/robtoews/2020/10/12/the-next-generation-of-artificial-intelligence/
>>>
>>> Forbes believes in unsupervised learning, federated learning, and
>>> transformers for neural networks.
>>>
>>> Unsupervised learning (aka self-supervised learning) is based on
>>> “predicting everything from everything else”, e.g. language models
>>> trained on billions of documents. This avoids the bottleneck of having
>>> to label data for supervised learning, and is more flexible in allowing
>>> the learning system to figure out its own labels, “being able to explore
>>> and absorb all the latent information, relationships and implications in
>>> a given dataset.”
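>>>
>>> (A minimal sketch of the idea in plain Python, purely illustrative and
>>> not anything from the article; the corpus and function names are made up.
>>> Mask a word and predict it from its neighbours, so the data supplies its
>>> own labels and no manual annotation is needed:)
>>>
>>>   # Toy self-supervised objective: hide a word, predict it from context.
>>>   from collections import Counter, defaultdict
>>>
>>>   corpus = [
>>>       "the cat sat on the mat",
>>>       "the dog sat on the rug",
>>>       "a cat lay on the mat",
>>>   ]
>>>
>>>   # Count which word appears between each (left, right) context pair.
>>>   context_counts = defaultdict(Counter)
>>>   for sentence in corpus:
>>>       words = sentence.split()
>>>       for i in range(1, len(words) - 1):
>>>           context_counts[(words[i - 1], words[i + 1])][words[i]] += 1
>>>
>>>   def predict_masked(left, right):
>>>       """Most likely word between two context words (ties: first seen)."""
>>>       counts = context_counts.get((left, right))
>>>       return counts.most_common(1)[0][0] if counts else None
>>>
>>>   print(predict_masked("the", "sat"))  # -> "cat" (tie broken by order)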
>>>
>>> Federated learning is about services that support privacy-friendly
>>> machine learning by a third party across training data without having to
>>> transfer the data to that party. Instead, the learning process is applied
>>> locally to the data, and the results are transmitted to the third party
>>> for aggregation into the overall model.
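>>>
>>> (Again purely as a sketch, not the actual protocol: federated averaging
>>> in miniature, with an invented linear model, where each client fits on
>>> its own data and only model parameters, never the data, are sent back
>>> for aggregation:)
>>>
>>>   import random
>>>
>>>   def local_fit(data, w, lr=0.1, epochs=20):
>>>       """One client's local training: linear model y ~ w*x via SGD."""
>>>       for _ in range(epochs):
>>>           for x, y in data:
>>>               w -= lr * (w * x - y) * x  # gradient of squared error
>>>       return w
>>>
>>>   # Private datasets that never leave their owners (here, y = 2x + noise).
>>>   clients = [
>>>       [(x, 2 * x + random.uniform(-0.1, 0.1)) for x in (0.1, 0.5, 1.0)]
>>>       for _ in range(3)
>>>   ]
>>>
>>>   w_global = 0.0
>>>   for _ in range(5):  # communication rounds
>>>       # Each client trains locally from the current global model...
>>>       local_ws = [local_fit(data, w_global) for data in clients]
>>>       # ...and the server aggregates only the returned parameters.
>>>       w_global = sum(local_ws) / len(local_ws)
>>>
>>>   print(round(w_global, 2))  # approaches 2.0 without pooling any data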
>>>
>>> Transformers are a technique for learning across sequences of things,
>>> e.g. words in text or frames of video, that is readily executed in
>>> parallel and is computationally more efficient than previous techniques.
>>> This was first applied to language models to predict the text following
>>> a previous text extract (e.g. BERT and GPT-3), but is now being applied
>>> more widely, e.g. to video.
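>>>
>>> (One more illustrative sketch, assuming NumPy is available: the core
>>> transformer operation, scaled dot-product self-attention, in which every
>>> position attends to every other position, so the whole sequence is
>>> processed in parallel rather than step by step as in a recurrent
>>> network:)
>>>
>>>   import numpy as np
>>>
>>>   def self_attention(X, Wq, Wk, Wv):
>>>       """X: (seq_len, d_model) -> contextualised vectors, same shape."""
>>>       Q, K, V = X @ Wq, X @ Wk, X @ Wv
>>>       scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise similarities
>>>       w = np.exp(scores - scores.max(axis=-1, keepdims=True))
>>>       w /= w.sum(axis=-1, keepdims=True)       # softmax over positions
>>>       return w @ V                             # mix values by attention
>>>
>>>   rng = np.random.default_rng(0)
>>>   d = 8
>>>   X = rng.normal(size=(5, d))                 # 5 tokens, 8-dim embeddings
>>>   Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
>>>   print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)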
>>>
>>> Whilst GPT-3 is pretty amazing in the quality of the text it can
>>> generate, it is limited in the kinds of reasoning it can apply. It knows
>>> simple generalisations, but is very limited in respect of reasoning about
>>> time, and is unaware of what it doesn't know. As an example, asking for
>>> the sum of two large numbers returns a large number, but not the actual
>>> sum; asking for the US president in 1610 returns a historical figure
>>> rather than stating that the question doesn't make sense, as the USA
>>> didn't exist then.
>>>
>>> This is unsurprising, as language models are not the same as the
>>> higher-level reasoning that children are taught at school and through
>>> interaction with their parents and peers.
>>>
>>> What do you think?
>>>
>>> Dave Raggett <dsr@w3.org> http://www.w3.org/People/Raggett
>>> W3C Data Activity Lead & W3C champion for the Web of things
>>>

Received on Friday, 3 September 2021 03:54:44 UTC