Re: AI2

On 21 February 2015 at 16:30, Dave Raggett <dsr@w3.org> wrote:

>
> On 21 Feb 2015, at 00:54, Danny Ayers <danny.ayers@gmail.com> wrote:
>
> what if human intelligence is a local minimum
>
>
> It is.
>
> I’ve been looking at how to blend cognitive science, AI and statistical
> learning, and a striking thing about human intelligence is the way our
> consciousness is highly constrained. We are only able to focus on a few
> things at any particular time. This is compensated for by the distributed
> parallel processing in our sensory and motor systems.  It is like using an
> 8-bit computer for the rule engine run by the central processor, and a
> large number of graphics accelerator chips for the I/O and memory systems.
>
> It is easy to imagine an AI that can focus on many things at the same time
> in a concerted way, analogous to a closely knit team of people.  Another
> thing is the variation in creativity, memory and other traits across
> people.  We could churn out AIs with optimal traits for their intended
> purpose.**
>
> This isn’t science fiction as I have already started to work on it,
> although I don’t expect it to bear fruit quickly. The first part is to
> implement a cognitive architecture based upon work by John R. Anderson and
> Marvin Minsky.  The second part is to create an evolving taxonomy of common
> sense and use it to create lesson plans for teaching AIs via natural
> language. For more details see:
>
>      http://www.w3.org/2014/10/29-dsr-wot.pdf
>
> Researchers on AI and the Semantic Web have been dazzled by logic and have
> largely ignored cognitive science. This will change as the power of the
> synthesis becomes self-evident. In some ways it is surprising how long it
> has taken us to get to this point!
>
> ** Unlike Hawking, I am optimistic about this. If we are to address the
> challenges of climate change, resource depletion and overpopulation, then
> AIs and the Web of Thought will help us to maintain a high standard of
> living.  We will create AIs with empathy, but will need to keep a tight
> check on those who want to pervert this and weaponize AIs!
>

AI is already here.  The nature of technology is that you don't see it
coming.  Surely this is evident to anyone familiar with the web or the
semantic web.

If there were ever a war with the machines, we would not see it.  It may
already have been fought and lost.  Their weapons are not guns and bombs;
they are convenience and utility.  But that doesn't matter.  What matters
is that intelligent machines are here, and they are here to stay.  Future
generations will judge their impact on us, not us today.

We are becoming a species that will achieve an equilibrium with
technology.  The great hope is that in that equilibrium we will survive
and, ideally, thrive.  Either we will be part of that long-term
equilibrium or we won't; that is a natural outcome, probably outside our
control.  However, in 1,000 years, maybe much less, possibly a bit more,
we'll be at that point -- if we get through the transition period without
doing something stupid like dropping nuclear bombs on ourselves (again!).

The way I see it, we are in the generation(s) that will determine whether
we make it into a long-lived symbiosis with intelligent machines, of which
the (semantic) web may well be the best expression.  The good news is that
so far the machines have apparently been benign.  My personal view is that
the place to look for answers is in nature.  If we build the machines with
a certain natural beauty, inspired by Mother Nature and our own better
natures, the chances are we're all going to get on just fine! :)


>
> —
>    Dave Raggett <dsr@w3.org>
>

Received on Saturday, 21 February 2015 15:52:49 UTC