- From: Patrick Logan <patrickdlogan@gmail.com>
- Date: Tue, 24 Oct 2023 11:45:16 -0700
- To: Dave Raggett <dsr@w3.org>
- Cc: paoladimaio10@googlemail.com, W3C AIKR CG <public-aikr@w3.org>
- Message-ID: <CAD_aa-9JshgF7ZU-v92f4_fbm5xOWVAnUbszssMpMniH2-C3nA@mail.gmail.com>
Thank you Dave and Paola for your responses. I won't be able to write much
until later this evening, but I appreciate the conversation.

On Tue, Oct 24, 2023, 1:44 AM Dave Raggett <dsr@w3.org> wrote:

> Hi Patrick,
>
> My aim was to encourage analytic discussion on an AIKR perspective on
> consciousness rather than the many other potential perspectives. One could
> argue about how to account for qualia from a philosophical perspective, but
> that is very different from consideration of how colours are handled in
> artificial neural networks, e.g. training a robot to count the number of
> red objects in a camera view. If consciousness is seen as too overloaded a
> term, then what word would be better for describing the subjective
> experience of artificial agents? We could then discuss how that experience
> depends on different capabilities, e.g. episodic memory, theory of mind,
> behavioural norms, etc. Is that of interest to you?
>
> On 24 Oct 2023, at 00:33, Patrick Logan <patrickdlogan@gmail.com> wrote:
>
> There are several terms here without even semi-formal definitions that are
> doing a lot of work, i.e. your claims are vague and difficult to discuss
> clearly, let alone measure and assess.
>
> Given the wide berth of interpretation, it's especially bold to claim a
> false dichotomy: either one agrees with your "facts" or one is relying on
> "faith".
>
> On Mon, Oct 23, 2023, 10:42 AM Dave Raggett <dsr@w3.org> wrote:
>
>> From the AI KR and computational view, consciousness isn’t a hard
>> problem. Subjective experience distils to information processing with
>> systems of neurons. Redness is just a vector of neural activation. Agents
>> have situational awareness, i.e. a model of their current environment and
>> goals, enabling them to decide on what actions to take. This also includes
>> models of other agents’ beliefs and goals, i.e. a theory of mind. Agents
>> also benefit from a model of past, present and future, i.e. a functional
>> episodic memory that complements encyclopaedic memory, such as birds fly
>> and dogs bark. Episodic memory enables agents to reason about cause and
>> effect, to understand intent, and to create and adapt plans.
>>
>> However, this won’t convince everyone. Plenty of people have beliefs
>> that are a matter of faith rather than of facts. That’s fine. But
>> engineering and science don’t work that way! AI will continue to evolve,
>> and AGI is just a matter of time. I attach a picture that makes the point:
>> a stochastic synthesis of ideas as evidence that artistic sensibility can
>> be reduced to neural processing.
>>
>> > On 22 Oct 2023, at 05:38, Paola Di Maio <paola.dimaio@gmail.com> wrote:
>> >
>> > Consciousness is too huge a topic. Undecidable; too much can be said
>> > about it without ever reaching any conclusion, possibly because no single
>> > theory or point of view can exhaust the subject. However,
>> > I'd like to suggest simply that it is tackled only in relation to AI KR.
>> > Surely, consciousness is relevant to AI and to KR discussion and
>> > potential standards. We should keep that in mind where possible and
>> > parsimoniously limit our considerations accordingly.
>> >
>> > I'll leave it to Carl to liaise with the WoT group, since he is a
>> > member there and brought up the subject.
>> > I'll work on tidying up some of the resources shared on the list into
>> > some form of coherent narrative when I can; that is my next task.
>>
>> Dave Raggett <dsr@w3.org>
>
> Dave Raggett <dsr@w3.org>
Received on Tuesday, 24 October 2023 18:45:33 UTC