Re: the intersection between AIKR and COGAI

Hi Milton,

It's late here, so this is only a prompt response; I need to study your
posts more, but I hope Paul will provide some feedback.

I've worked for a very long time on means to produce solutions supporting
"human-centric AI" infrastructure, inspired to start on it in 2000 by
stories of my grandfather's (he was a pathologist) cousin's work:
https://dana.org/article/neuroscience-and-the-soul/

Even so, my concern is that it's not politically feasible to make this sort
of thing safely.

As such, my thinking is that it might be better to look at it as enabling
people to buy their own robots, although I am mindful of a number of Black
Mirror episodes.

Re: experts,

https://groups.google.com/g/scientific-basis-of-consciousness

And I do have some consciousness-related stuff in here also:
https://drive.google.com/drive/folders/1tYFIggw8MIY5fD2u-nbwFRM6wqrhdmQZ

FWIW, it has appeared to me over the years that much of TimBL's Semantic
Web architecture was inspired by biological "mind" concepts, but that
remains beyond reach.

This one is so good: https://youtu.be/ZYPjXz1MVv0

NB also, some resources below:

“the distinction between reality and our knowledge of reality, between
reality and information, cannot be made” (Anton Zeilinger)

https://medium.com/webcivics/the-semantic-inforg-the-human-centric-web-reality-check-tech-50e2fa124ed4

https://medium.com/webcivics/theoretical-relationship-between-social-informatics-systems-and-quantum-physics-reality-check-6ce3781d1a29

https://youtube.com/playlist?list=PLCbmz0VSZ_voTpRK9-o5RksERak4kOL40

The requirements to protect human beings demand work that goes right
across the full tech stack, alongside an array of social (legal) aspects;
so much that's just not done, and I don't see that there's an appetite for
it.  If done badly, it'll only act to distort the observer's concept of
"reality", self-determination, etc.  In a way, that's like mankind building
tools to turn cockroaches into computer-controlled robots.

Fairly horrific problems, IMO. I am deeply upset about it, but that's not
useful.

My notes from earlier today might also be helpful, but basically, what I'm
suggesting is that we design cognitive AI agents / entities intentionally,
in a manner that suits what it is they are: what they're better equipped to
do and what they are not.
https://lists.w3.org/Archives/Public/public-cogai/2022Oct/0017.html

Humans can't pull records from around the world in milliseconds, like
Google does an unknown number of times every day.

Will these things be more like R2-D2 or WALL-E? Or more like Skynet?

Will kids be designing themselves in the metaverse, or some next
generation of Tamagotchi: something that they own, but that is not them?

I'll be working on this concept more, and it needs a lot of work; but I do
hope my considerations are found helpful somehow.

We need to figure out what to do with the reality of our circumstances:
not how we'd like to dissociatively "believe" in some other fiction, but
rather the hard facts of our real-world situation.  At the moment, threats,
and people living subjected to forms of digital slavery, are very real...
The digital identity solutions define us by our wallets, filled with not
much more than reissuable public keys, defining who we are as people...

That's part of what the world's been rolling out over the past few years,
with no known alternative available at scale.

So practically, I think we all want to be able to buy our own "robot",
noting of course that it's the software that matters most; but regardless
of whether it looks like Buzz Lightyear or some Midjourney render, it won't
be human, and it'll either be our friendly, reliable tool or our rulers'
toy.

At least, those are some of my fears...

W3C licensing doesn't preclude evil use cases, particularly when they're
profitable.

Tim.h.

On Sat, 29 Oct 2022, 2:01 am ProjectParadigm-ICT-Program, <
metadataportals@yahoo.com> wrote:

> Dear Timothy and all,
>
> My suggestion for knowledge representation is actually limited to a small
> subset of what is considered AI. It allows for any conceptual framework to
> be used, whether biologically inspired cognitive architectures (BICA),
> neural networks, architectures inspired by the human brain, and a lot more.
>
> I think there is more or less a consensus within this AIKR CG that we aim
> to come up with conceptual frameworks for KR for open, explainable and
> ethical AI.
>
> By doing so we automatically define the boundaries for such. Anyone who
> wishes to venture outside those boundaries will be well aware (hopefully)
> that additional CONTAINMENT ALGORITHMS will be needed. But articles have
> been published recently that prove Stephen Hawking, Nick Bostrom and others
> right: current state-of-the-art ML and neural-network-based AI cannot be
> contained.
>
> We use many models of neural architectures in the human brain to create
> algorithms, and some provide very impressive results, but we have
> absolutely no clue HOW this is done.
>
> We also have absolutely no clue how and where information is stored and in
> what type of coding.
>
> This is what makes current ML and AI algorithms unpredictable and in the
> end uncontrollable.
>
> Empathy and ethics can be argued to be essential to creating controllable
> AI, but we have no idea how to incorporate these into current
> state-of-the-art AI.
>
> My proposal eliminates this problem by making knowledge representation the
> central focus, so that we can define interaction systems, modeled as AI,
> that are bounded in their types of transformational mappings and
> interaction processes.
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
> On Friday, October 28, 2022 at 09:34:11 AM AST, Timothy Holborn <
> timothy.holborn@gmail.com> wrote:
>
>
> FWIW: I think the idea of artificial minds being rendered conscious is an
> "ungodly" concept. Artificial minds being rendered in relation to property
> rights laws / asset-related considerations is entirely plausible.
>
> I therefore think it's too dangerous to try to support people's extension
> of self (digital twins), as it's likely to be something companies want to
> "own".  Whereas the idea of democratised ownership of AI agents, or robots
> (whether they're in a phone or some other sort of physical object doesn't
> really matter), is a different proposition.
>
> https://twitter.com/WebCivics/status/1585976653867405312
>
> If humanity is under attack by dangerous robots, I'd like to have one that
> I own fighting for me, kinda like R2-D2 but different.
>
> Timh.
>
> On Fri, 28 Oct 2022, 11:25 pm ProjectParadigm-ICT-Program, <
> metadataportals@yahoo.com> wrote:
>
> There may be a relatively easy way out of this confusion. But it starts
> with disentangling knowledge representation completely from AI.
>
> Following Dave Raggett's line of reasoning, we posit knowledge
> representation to be a class of semiotic (input) structured descriptions
> that lend themselves to analysis through logical, computational,
> mathematical and computability processes, in order to create computable
> (output) algorithms, given a certain set of objects in an object system in
> physical reality (a spatiotemporally defined set of confined spaces and
> the objects therein), which together with a set of relevant interaction
> processes defines an interaction system.
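>
> As a minimal sketch only (Python, with hypothetical names; an
> illustration of the idea, not a definitive implementation), the basic
> components might look like:
>
>     from dataclasses import dataclass, field
>
>     @dataclass
>     class Instance:
>         """One semiotic (input) structured description of an object."""
>         identifier: str
>         attributes: dict          # structured, analysable data
>         deprecated: bool = False
>
>     @dataclass
>     class StructuredDescription:
>         """The knowledge representation: a set of analysable instances."""
>         instances: dict = field(default_factory=dict)  # id -> Instance
>
>     @dataclass
>     class ObjectSystem:
>         """A spatiotemporally confined set of spaces and objects therein."""
>         region: str               # placeholder for a spatiotemporal boundary
>         objects: list = field(default_factory=list)
>
>     @dataclass
>     class InteractionSystem:
>         """An object system coupled to a structured description through
>         a set of interaction processes."""
>         object_system: ObjectSystem
>         description: StructuredDescription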
>
> This way we eliminate the problem of distinguishing between structured
> data, information and knowledge.
>
> For this interaction system we now define classes of transformational
> mappings: (1) dealing with sensory input through observation, (2)
> converting the observation datasets to formats comparable with existing
> instances in the structured descriptions, (3) exchanging or passing
> observed datasets to another structured description, (4) adding, deleting,
> editing or deprecating instances in the structured description, and (5)
> triggering actions in the interaction system.
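>
> As a rough standalone sketch (Python again, hypothetical names, with
> plain dicts standing in for the structured descriptions), these five
> classes of mappings could form the only interface a bounded AI is
> permitted to implement:
>
>     from abc import ABC, abstractmethod
>     from typing import Optional
>
>     class TransformationalMappings(ABC):
>         """The five bounded classes of mappings over an interaction
>         system."""
>
>         @abstractmethod
>         def observe(self) -> dict:
>             """(1) Deal with sensory input through observation."""
>
>         @abstractmethod
>         def convert(self, observation: dict) -> dict:
>             """(2) Convert an observation dataset into a format
>             comparable with existing instances."""
>
>         @abstractmethod
>         def exchange(self, dataset: dict,
>                      target: "TransformationalMappings") -> None:
>             """(3) Pass an observed dataset to another structured
>             description."""
>
>         @abstractmethod
>         def update(self, identifier: str, attributes: Optional[dict],
>                    deprecate: bool = False) -> None:
>             """(4) Add, delete, edit or deprecate an instance."""
>
>         @abstractmethod
>         def act(self, trigger: str) -> None:
>             """(5) Trigger an action in the interaction system."""
>
> Anything outside these five operations would then, by construction, be
> out of bounds for the system.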
>
> We can now use tools from mathematics, computer science, computability
> theory, theoretical physics, representation theory, and category theory to
> produce generalizations of the basic components, namely structured
> descriptions and interaction systems, to build increasingly complex sets.
>
> Note that the concepts of mind, consciousness and self-awareness are
> avoided, but openness and explainability become embedded.
>
> Mind and consciousness come into play if we contemplate artificial general
> intelligence.
>
> And in doing so we avoid any ontological and epistemological discussions
> with philosophers, because those only arise at the AGI level.
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
> On Thursday, October 27, 2022 at 09:05:10 PM AST, Paola Di Maio <
> paoladimaio10@gmail.com> wrote:
>
>
> Thank you all for contributing to the discussion
>
> The topic is too vast. Dave, I am not worried whether we agree or not;
> the universe is big enough.
>
> To start with, I am concerned whether we are talking about the same thing
> at all. The expression "human-level intelligence" is often used to
> describe neural networks, but that is quite a ridiculous comparison. If a
> neural network is supposed to mimic human-level intelligence, then we
> should be able to ask it: how many fingers do humans have?
> But this machine is not designed to answer questions, nor to have that
> level of knowledge about human anatomy. A neural network is not AI in that
> sense: it fetches some images and mixes them without any understanding of
> what they are, and the process of which images it has used, why, and what
> rationale was followed for the mixing is not even described; it's
> probabilistic. Go figure.
>
> Hey, I am not trying to diminish the greatness of the creative neural
> network; it is great work and it is great fun. But (a) it is not an
> artist: it does not create something from scratch; and (b) it is not
> really intelligent, honestly. Try to have a conversation with a NN.
>
> This is what KR does: it helps us to understand what things are and how
> they work.
> It also helps us to understand if something is passed off as what it is
> not (evaluation).
> This is why even neural networks require KR: without it, we don't know
> what a network is supposed to do, why and how, and whether it does what it
> is supposed to do.
>
> They still have a role to play in some computation.
>
> DR: Knowledge representation in neural networks is not transparent.
> PDM: I'd say that it is either lacking or completely random.
>
>
> DR: Neural networks definitely capture knowledge, as is evidenced by
> their capabilities, so I would disagree with you there.
>
>
> PDM: Capturing knowledge is not knowledge representation. In AI,
> capturing knowledge is only one step; the categorization of knowledge is
> necessary for the reasoning.
>
>
> DR: We are used to assessing human knowledge via examinations, and I
> don't see why we can't adapt this to assessing artificial minds.
> PDM: Because assessment is very expensive, with varying degrees of
> effectiveness, and requires skills and a process; it may not be feasible
> to test/evaluate an AI when it is embedded.
>
>
> We will develop the assessment framework as we evolve and depend upon AI
> systems. For instance, we would want to test a vision system to see if it
> can robustly perceive its target environment in a wide variety of
> conditions. We aren’t there yet for the vision systems in self-driving cars!
>
> Where I think we agree is that a level of transparency of reasoning is
> needed for systems that make decisions that we want to rely on.  Cognitive
> agents should be able to explain themselves in ways that make sense to
> their users; for instance, a self-driving car braked suddenly when it
> perceived a child running out from behind a parked car.  We are less
> interested in the pixel processing involved, and more interested in
> whether the perception is robust, i.e. the car can reliably distinguish a
> real child from a piece of newspaper blowing across the road when the
> newspaper is showing a picture of a child.
>
> It would be a huge mistake to deploy AI when the assessment framework
> isn’t sufficiently mature.
>
> Best regards,
>
> Dave Raggett <dsr@w3.org>
>

Received on Friday, 28 October 2022 16:27:18 UTC