- From: Gabriel Lopes <gabriellopes9102@gmail.com>
- Date: Tue, 24 Oct 2023 02:10:12 -0300
- To: Paola Di Maio <paoladimaio10@gmail.com>
- Cc: Patrick Logan <patrickdlogan@gmail.com>, Dave Raggett <dsr@w3.org>, W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAHRA0=q=E-q5b2TXbq5zmy4+oNhkqoM2aYKTDCPQtUoTPpR+eg@mail.gmail.com>
Hello everyone,

Thanks Paola for opening the subject, as our research fields are increasingly entangled and converge on the subject of consciousness. It is indeed very interesting that we could arrive at a point where our views intersect and can be implemented: a linear perspective of time relative to our own conventions; mathematically based models for interacting with the modeled knowledge as it is presented; and even modeled contexts enabling operability over autonomous agents.

It is also known that parameter counts, for instance, are still a limitation of today's AIs: they are growing so significantly that researchers sometimes lose track of how the network evolves. That is without considering how many images an AI needs before it can learn and reproduce their patterns, compared with the equivalent human capacity.

IMHO, since consciousness has been an active topic for thousands of years, and has given humanity some real works of art, neuroscience will probably continue to export vocabularies that we in computer science should learn from, until we can develop a more complete and applicable theory of mind - one that covers interpreting, remembering, and acting - and thereby bring syntactic and semantic interoperability into a shared abstraction.

Personally, I am still working on a perspective-based view of human interaction, perhaps for use in GAMA simulations and the like, using ontologies as the instrument of a formal system whose goal is to analyse the correctability of knowledge distributed over the processes of learning, thinking, and acting.

But there is still a bottleneck between what we think about consciousness - *<quora> Is the Higgs field some kind of consciousness expression? </quora>* - and what we may technologically be able to represent... *which reminds me of Mr. von Neumann, who drew from his own mind an architecture for computers.*

On Mon, 23 Oct 2023 at 21:34, Paola Di Maio <paoladimaio10@gmail.com> wrote:

> @Dave, we are not trying to convince anyone about anything, afaik :p everyone should satisfy themselves as they choose (....) we try to make plausible assertions that can be useful to navigate the complexity of this discussion space
> @Patrick thank you, please pick out some terms that you think should be defined to support the discussion and lets start a thread. lets pin something down, if we could manage to define some terminology we can consider that a resource and take a step forward
>
> On Tue, Oct 24, 2023 at 12:33 AM Patrick Logan <patrickdlogan@gmail.com> wrote:
>
>> There are several terms here without even semi-formal definitions that are doing a lot of work, i.e. your claims are vague and difficult to discuss clearly let alone measure and assess.
>>
>> Given the wide berth of interpretation it's especially bold to claim a false dichotomy of either one agrees with your "facts" or one is relying on "faith".
>>
>> On Mon, Oct 23, 2023, 10:42 AM Dave Raggett <dsr@w3.org> wrote:
>>
>>> From the AI KR and computational view consciousness isn't a hard problem. Subjective experience distils to information processing with systems of neurons. Redness is just a vector of neural activation. Agents have situational awareness, i.e. a model of their current environment and goals, enabling them to decide on what actions to take. This also includes models of other agents' beliefs and goals, i.e. a theory of mind. Agents also benefit from a model of past, present and future, i.e. a functional episodic memory that complements encyclopaedic memory, such as birds fly and dogs bark. Episodic memory enables agents to reason about cause and effect, to understand intent, and to create and adapt plans.
>>>
>>> However, this won't convince everyone. Plenty of people have beliefs that are a matter of faith rather than of facts. That's fine. But engineering and science doesn't work that way! AI will continue to evolve and AGI is just a matter of time. I attach a picture that makes the point. A stochastic synthesis of ideas as evidence that artistic sensibility can be reduced to neural processing.
>>>
>>> > On 22 Oct 2023, at 05:38, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>>> >
>>> > Consciousness is too huge a topic. Undecidable, too much can be said about it without ever reaching any conclusion, possibly because no single theory or point of view can exhaust the subject. However
>>> > I d like to suggest simply that it is tackled only in relation to AI KR. Surely, consciousness is relevant to AI and to KR discussion and potential standards. We should keep that in mind where possible and parsimoniously limit our considerations accordingly
>>> >
>>> > I ll leave it to Carl to liaise with the WoT group, since he is a member there and brought up the subject.
>>> > I ll work on tidying up some of the resources shared on the list into some form of coherent narrative when I can, that is my next task
>>>
>>> Dave Raggett <dsr@w3.org>

--
Gabriel Lopes

*Interoperability as jam sessions!*
*Each system emanating the music that crosses itself, instrumentalizing scores and ranges...*
*... of Resonance, vibrations, information, data, symbols, ..., Notes.*
*How interoperable are we with the Music the World continuously offers to our senses?*
*Maybe it depends on our foundations...?*
Received on Tuesday, 24 October 2023 05:10:30 UTC