- From: Timothy Holborn <timothy.holborn@gmail.com>
- Date: Fri, 7 Apr 2023 21:28:19 +1000
- To: Paul Werbos <pwerbos@gmail.com>
- Cc: public-humancentricai@w3.org, peace-infrastructure-project@googlegroups.com, "Wunsch, Donald C." <dwunsch@mst.edu>
- Message-ID: <CAM1Sok2g79qb_Jr70F9UXjjaw5ROYqQnf4SJ9bDDxs_xwJBrnA@mail.gmail.com>
Hi Paul,

I encourage you to join https://www.w3.org/community/humancentricai/ so that we can work through the implications of some of these issues that are otherwise less likely to be canvassed, in a way that you are remarkably well equipped to support.

On Fri, 7 Apr 2023 at 19:40, Paul Werbos <pwerbos@gmail.com> wrote:
> Thanks, Tim!
>
> This week I was reminded of a terrible paradox. We desperately need the new type of Quantum AGI which I defined at WCCI2022 (attached), but if such power is deployed BEFORE other crucial requirements for sustainable intelligent internet, it could give too much power to people unable to use it well.

It would be good if you were able to provide a brief on your vision of a Sustainable Intelligent Internet and the related considerations.

Q: Is it reasonable to assume that the designs intend to employ IPv6 and related existing, emerging and future components? I have assumed that the emerging / emergent work on the 'sii' layers does employ IPv6.

> Since the Chinese do know about this, and Xi Jinping personally called for open, transparent joint development of new tools to prevent backdoors both in software and in hardware, I was hoping they could create a foundation soon for avoiding the worst. But the latest announcements cast great doubt on his interest in continuing a cooperative approach. New development of this specific technology in the west then becomes urgent and essential, to prevent a gross instability in the balance of power. Your emphasis on key pieces of technology for "networks of truth" is more important than I appreciated even last week.

Part of the express purpose of the W3C Human Centric AI (HCAI) CG is to foster and develop a comprehension of the commonalities and shared requirements needed to support many different forms / embodiments of 'Human Centric AI' systems, and to make it possible to clearly distinguish HCAI ecologies / ecosystems from other forms of AI and other information / knowledge-management topologies and ideologies. Within that frame, I do have some views.

Principally, part of the security fabric believed to be required is an 'endification' fabric that employs an array of cryptographic instruments, together with 'semantics' and various other tooling, so that the causal implications and consequences of what human beings do - both as individuals and as members of groups, whether the activity relates to tools, legal instruments, software, physical acts or otherwise - can be made associatively governable. Such a fabric would employ spatio-temporal relations, amongst many other things, to support root-cause analysis and related characteristics, leading to attribution to natural persons, etc. A minimal sketch of what one such record might look like follows below.

My work and considerations in these areas have been ongoing over many years, and tooling has been produced to support applying these sorts of solutions, although it is not entirely finished; I am also not sure which parts are specific to a particular type of implementation, versus which constituencies are necessarily shared as components of future W3C (perhaps also ISOC-related) derivatives.
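To make that a little more concrete, here is a minimal, hypothetical sketch (in Python) of a single signed, spatio-temporally anchored event record in such a fabric. The field names, the choice of Ed25519 signatures, and the example WebID are my own assumptions for illustration only, not anything specified by the CG.

```python
# Hypothetical sketch of an "event record" for an attribution fabric: each act is
# described semantically, anchored in space and time, linked to its causes, and
# signed so it can later be attributed to a natural person or agent.
# All field names are illustrative assumptions, not part of any W3C specification.
import json, hashlib
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_event(actor_id, action, caused_by, lat, lon, signing_key):
    record = {
        "actor": actor_id,                   # e.g. a WebID / DID for a person or agent
        "action": action,                    # semantic description of what was done
        "causedBy": caused_by,               # hashes of prior events, for root-cause analysis
        "when": datetime.now(timezone.utc).isoformat(),
        "where": {"lat": lat, "lon": lon},   # spatio-temporal anchoring
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record_id = hashlib.sha256(payload).hexdigest()   # content address for later linking
    signature = signing_key.sign(payload).hex()       # attribution back to the key holder
    return {"id": record_id, "record": record, "signature": signature}

key = Ed25519PrivateKey.generate()
event = make_event("https://example.org/people/alice#me", "published-document",
                   [], -37.81, 144.96, key)
print(event["id"], event["signature"][:16], "...")
```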
There is a process of canvassing, and of supporting the bringing-together of a 'meeting of minds', to establish the foundational 'common-sense' requirements needed for a more capable discussion about how solutions might actualise some of these opportunity, risk-mitigation and safety-protocol related future capabilities. That process itself requires a safe environment, so that unintended yet more 'commercial' opportunism does not undermine the ability to at least deliver a diversity of solutions for people.

Forming the W3C HCAI CG is therefore now considered a milestone for an important yet often not well understood field of endeavour, in a world filled with many convincing proclamations that are nonetheless knowing forms of 'confusion apparatus', not really part of any formally mediated 'reality check tech' fabric; all of this whilst embattled by the legacy tech-debt that has manifested as a consequence of our choices and priorities: the things that have been done, and the circumstances whereby other forms of tooling remain absent.

So, in summary: I think we need to support quorum and discussion, encouraging people to share their views, concerns, ideals and objectives, so that we can build a shared-values-based ecology for the production of derivatives via a community of practice, with interoperability with others of different ideals, on a best-efforts basis, as we seek to address some of these very important areas of concern for all people.

Yet also, being associative (rather than more simply dissociative, as is linked with the medical conditions referred to as DID, etc.), I am personally more concerned that the tooling required to support systems of democracy, and other human-rights-related considerations, still seemingly requires a great deal of work. Whilst I respect and support the cultures of people in all places, it is nonetheless a matter of fact that different systems of government exist; and the tooling required to support systems of democracy, to be ruled by law, and to honour the instruments produced to provide universally accepted articulations of values, is seemingly not adequately addressed at this stage. That is not to suggest that fair dealings, fair terms and various other protections would not materially benefit people engaging with one another fairly, on a best-efforts basis; issues will always arise, but those are instrumentally distinct from acts of a malicious nature, intent to harm, or unlawful action (whether in relation to the laws of peace or the law of war). AI systems also give rise to emerging problems not previously considered, or not translated from texts that were often produced before broadband. Part of what I think we need to process, and then define a tactical method and specific tooling to address, needs in my mind to pay particular attention to the special needs of liberalised democracies. I found this interview with Henry Kissinger [1] insightful.
I am aware that a book has recently been published [2], which is referred to in this article in Time [3], but I haven't had time to get through it. Indeed, there are various 'temporal challenges', and it would be impossible for me to get through all of the very good materials, including but not limited to books; as such, we need some way of fostering a method to benefit from one another's wisdom, usefully shared, to figure out how to address some very significant problems, in a circumstance that, I feel, now puts a lot of important considerations in an unfavourable position.

I hope the outcome will be that there is at least the option to employ 'human centric AI' systems that have safety protocols, such as ensuring people are not required to decide the provider of their 'mindware' for their lives, perhaps from birth. Personally, I don't know how information is extracted from an LLM if someone wants to migrate their lives elsewhere; as such, I am not sure whether or not today's systems have been built without a means to migrate away from them - far more 'sticky' than a free email address. These, and various other potentialities, still need to be canvassed at this stage, afaik; and part of the problem is likely to be that many people may not be aware of other options that are less developed than the well-known examples of today.

Therein also, re: human rights instruments [4]: note that I have processed some of them into a table (Google Sheet) [5], changing some of the text so that the intended useful employment of those instruments refers to terms between one another, rather than declarations and/or conventions associated with representatives of states in the quorum of the United Nations, etc. It might be useful to review some of the sentiments and statements expressed by these documents as a means to consider how complex ontological derivatives might be produced (see the minimal sketch further below), so as to make useful employment of these remarkable historical works in ways that are, seemingly, considered novel at this stage and therefore difficult to understand [6] - one might assume, not because there is serious disagreement about the useful value and importance of the texts of these articles / documents, and the innate, 'inalienable rights', as have been written down.

Finally also, this sheet on different types of 'artificial minds' [7] is intended to support the exercise, and the communicability, of seeking to better define what sorts of 'artificial minds' / robots people want to make, what sorts of qualities they want to create, and what sorts of characteristics and/or designs they do not want - not for themselves, not for others, and not as may otherwise be involuntarily applied upon the lives of their children. These sorts of considerations are not merely of importance for the developed world, where people have energy and may, for the vast majority of their lives, have the capacity to own and operate their own computer; there are also the designs for the developing world, empowered by technology for the public good [8] - a worthy and important pursuit I have been doing my very best to support for many years. I made a note about 'digital slavery' [9], but it does not yet go into sufficient detail, and that method is considered to be sub-optimal.
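To make the 'ontological derivatives' idea a little more tangible, here is a purely illustrative sketch (Python, using rdflib) of how one article of a human rights instrument might be re-expressed as machine-readable terms between natural persons. The namespace URI, class and property names are hypothetical assumptions of mine, not an agreed vocabulary from the sheets or the CG.

```python
# Illustrative only: re-expressing an article of a human rights instrument as
# RDF "terms between one another", rather than as a convention between states.
from rdflib import Graph, Literal, Namespace, RDF

HR = Namespace("https://example.org/hcai/terms#")   # hypothetical vocabulary
g = Graph()
g.bind("hr", HR)

article1 = HR["udhr-article-1"]
g.add((article1, RDF.type, HR.Term))
g.add((article1, HR.derivedFrom, Literal("UDHR, Article 1")))
g.add((article1, HR.statement, Literal(
    "All human beings are born free and equal in dignity and rights.")))
g.add((article1, HR.appliesBetween, HR.NaturalPersons))  # terms between one another

print(g.serialize(format="turtle"))
```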
In any case, as learning / community 'digital hubs' [10] (somewhat an extension of what I earlier called 'knowledge banks') are created, what are the requirements to keep people safe, both during times of peace and in circumstances of serious jeopardy, including but not limited to the needs of refugees?

There are meaningful, and as yet unanswered, issues relating to the experiences of people who are not accused of any crime and should otherwise be considered 'free', yet who still require support for human rights, rule of law, etc. Given that this apparatus is seemingly not as well developed as we might hope it to become in future, perhaps the area of greater urgency is to define what the requirements might be, as a possible topic of discussion at the GDC (per below): for example, ensuring support for prisoners [11], as even prisoners have human rights [12]. (I have since learned that people subject to criminal sentences often lose all of their online information, which is similar to, yet more significant than, the issue of 'Facebook prison' and the like, alongside other issues I had not previously considered.) That really requires an interoperability solution, which I suspect is likely to be built around the W3C work known as Solid, whilst only speculating at this stage; a speculative sketch appears below.

It seems work is moving very quickly in some areas, whilst some of the foundational and material areas of concern are perhaps only just starting. I don't have a lot of resources and am making best efforts; but as noted, I encourage you to get involved and to encourage others to do so. At maturity, the solutions for which W3C is the appropriate venue will only be one constituency of far broader requirements.

IMO the first step is to make best efforts to bring people together, to resource the foundations with the best possible form of quorum we are able to muster; then we will need to figure out the charter for the group, which will involve work to define what we want Human Centric AI to mean. This initial process is how Credentials was first established [13], leading to additional efforts via the UN [14][15][16], which I did try to make notes to better support [17]; and, consequentially, it is amongst the instrumental tooling leading to what is being progressed now [18][19][8] - which, sadly, does not appear to address some of the very important concerns you have raised, as the means to ensure support for both 'mainframe' and inter-personal computing [20] systems now reaches a new, fairly epic stage.

> But who in the west has a base for the actual core QAGI hardware tasks?

I'm not sure.
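On the Solid speculation above: Solid pods expose a person's data as ordinary web resources over HTTP, so, in principle, portability between providers reduces to reading a resource from one pod and writing it to another. The sketch below (Python, using requests) illustrates that shape only; the pod URLs are placeholders, and the authenticated requests (e.g. Solid-OIDC) a real deployment would require are elided.

```python
# Speculative sketch: moving a personal profile document from one pod provider
# to another, as plain HTTP reads and writes of a portable RDF resource.
import requests

old_pod = "https://old-provider.example/alice/profile/card"   # placeholder URLs
new_pod = "https://new-provider.example/alice/profile/card"

resp = requests.get(old_pod, headers={"Accept": "text/turtle"})
resp.raise_for_status()
profile_turtle = resp.text        # the person's data, in a standard, portable format

# Re-publish the same resource at the new provider (authentication elided).
requests.put(new_pod, data=profile_turtle,
             headers={"Content-Type": "text/turtle"})
```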
But I signed the open letter [21]; and rather than simply focusing on my own implementation, having started coding last year [22][23], I have prioritised the work needed to support human rights - not simply by implementing it within my own specific implementation, but rather by seeking to support good entity-relations [6] and, in turn, safety protocols. This started with the 'Web Civics' Human Centric AI work [24], which in turn led to forming https://www.w3.org/community/humancentricai/

The business systems that support people focused on acting honourably - on supporting values and good moral hygiene - are very poorly supported, by comparison with the more practical and somewhat reasonable examples provided by others who more simply focus on ensuring that they are paid, so that they can support their basic needs and those of their children. It is not like that when you do the work to support moral hygiene, in effect; and some very big questions are raised when seeking to address these issues, such as how to ensure people are paid fairly for good and useful work, and how digital slavery can be addressed [9], particularly if the mere idea of a discussion about it is considered by others to be 'out of scope'. These issues are not simple.

The hardware implementations you refer to empower these business systems as they are, based on what is functioning - not on semantics that might be within the hands, hearts and minds of people [25] - without evidence of those semantics being considered important, as demonstrated by the information consumed by these agents to define what we materially, in actuality, value and what we do not. That was in effect the topic curated at WWW2017 [26], leading to the video [25]. Yet, despite being told otherwise by so many, it wasn't all being done already; they must not have 'known it all', because if they did, then the interpretation of my experiences - like the one I found so overwhelming at WSIS2023 [6], in supporting the peace infrastructure project [27], incorporating considerations far beyond those I presented in 2016 about my work on the 'consciousness algorithm' [28] - would provide work to deliver derivative apparatus that supports consciousness [29] and addresses the complexities associated with issues such as various forms of digital (AI) slavery. Regardless of my conscious experience as an observer, our objective (rather than subjective) circumstances - our 'common-sense' situational awareness position, as a manifest consequence of what has been done and what has not been done - lead, in my view, to a circumstance where there is now a lot of very important work that could and should be done, honourably, on a best-efforts basis, to deliver fit-for-purpose tooling, asap.

“the distinction between reality and our knowledge of reality, between reality and information, cannot be made” - Anton Zeilinger <https://www.nature.com/articles/438743a>

I wrote about how to define an AI weapon in 2015 [30], and maybe that is what has been made; whilst other things that could be made - like touch-screen voting machines that print votes for counting whilst also logging the votes electronically for election-night announcements [31] - well, afaik, they haven't been made, alongside the list of other things that haven't been made. A toy sketch of that voting-machine idea follows below.
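For what it's worth, that voting-machine idea can be sketched very roughly: each vote is printed as a human-countable paper ballot (the authoritative record) whilst an electronic log entry is appended for preliminary election-night tallies. The sketch below is a toy illustration under those assumptions only, not a description of any built, certified or secure system.

```python
# Toy sketch: print a paper ballot for counting, and append an electronic log
# entry for preliminary results. Illustrative only; ignores ballot secrecy,
# certification, and every other real-world electoral requirement.
import json, hashlib
from datetime import datetime, timezone

def cast_vote(choice, log_path="tally.log"):
    ballot = {
        "choice": choice,
        "cast_at": datetime.now(timezone.utc).isoformat(),
    }
    ballot["digest"] = hashlib.sha256(
        json.dumps(ballot, sort_keys=True).encode()).hexdigest()

    print_paper_ballot(ballot)                 # the paper copy is what gets counted
    with open(log_path, "a") as log:           # electronic copy for election-night tallies
        log.write(json.dumps(ballot) + "\n")
    return ballot["digest"]

def print_paper_ballot(ballot):
    # Stand-in for a printer: the paper shows the choice plus a digest so the
    # paper trail and the electronic log can be reconciled later.
    print(f"BALLOT  choice={ballot['choice']}  digest={ballot['digest'][:12]}")
```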
Preserving freedom of thought [32], as noted some years ago, may well be in jeopardy; but as Henry Stapp said, it is about what we focus our attention on [33]. Not all people may want it - much as is the case when people decide whether they want a computing system that runs Linux, Windows, OSX, etc. - but one might hope, now more than 75 years on [34], that the idea of building such a knowledge system - even if only to protect the selfish from the possibility that they would otherwise lack the sense-making / situational awareness required to identify or be aware of AI / AGI engendered attacks - will at some stage be better understood by others; hopefully, peacefully. Either way, the creation of the W3C HCAI / Human Centric AI CG (noting there is also 'human centered AI', which I consider to be different) is a step in the right direction.

> Best regards, Paul

Ditto, with much thanks. Long email, but I hope it helps...

best wishes,

Faithfully,

Timothy Holborn.

[1] https://www.chathamhouse.org/events/all/members-event/future-liberal-democracies-conversation-henry-kissinger
[2] https://www.amazon.com.au/Age-AI-Our-Human-Future/dp/0316273805
[3] https://time.com/6113393/eric-schmidt-henry-kissinger-ai-book/
[4] https://www.ohchr.org/en/instruments-listings
[5] https://docs.google.com/spreadsheets/d/17WfvOyoVQDv8wwPYroX6xrKLM7n3stD9vFv4rejqmo8/edit#
[6] https://soundcloud.com/ubiquitous-au/my-question-human-rights-instruments-for-digital-wallets-valuescredentials
[7] https://docs.google.com/spreadsheets/d/11P5X3al6DlFULOPU3w9-eeoDEyLqYmgqmsDm0U2wgsU/edit#
[8] https://digitalpublicgoods.net/
[9] https://github.com/DPGAlliance/DPG-Standard/issues/167
[10] https://docs.google.com/document/d/1D63FlICIOXcnLx_PYs0ByLYvc1B-BXqhhXGdaW_rIMI/edit#
[11] https://docs.google.com/document/d/10exQ8MIJnSWo2YSPJp8gUTpAz1ClcL16RgmrsOiI7uQ/edit#
[12] https://www.youtube.com/watch?v=pRGhrYmUjU4
[13] https://www.w3.org/community/credentials/2014/08/06/call-for-participation-in-credentials-community-group/
[14] https://web.archive.org/web/20150908003750/https://id2020.org/
[15] https://web.archive.org/web/20160313103740/http://id2020summit.org/#speakers
[16] https://docs.google.com/document/d/1GZIqlnkTzms3rIFKR18OR6VI-ksQqQGnkC5NKBgwyEw/edit
[17] https://docs.google.com/document/d/1GCFSwPPXjyZQukFhMzjGlfP-rSIFRO-nX4CTs6oWZD8/edit?usp=sharing
[18] https://play.itu.int/events/category/wsis-forum-2023/2023-03/
[19] https://play.itu.int/event/wsis-forum-2023-govstack-cio-digital-leaders-forum/
[20] https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1WXgSplqAB62oMSdwqli_1G3k37c0y6fZkZJLzc5Www8&font=OldStandard&lang=en&hash_bookmark=true&initial_zoom=4&height=650#event-interpersonal-computing
[21] https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[22] https://docs.google.com/document/d/1M_6gq_lV43xQdqa1jY0GdEvnZoFwIE_cqlwX5xOSBco/edit
[23] https://docs.google.com/presentation/d/1nFZUVL3Uh4zB82F3ViU189Rz05ZPKmtZU3tDq4w8OZQ/edit
[24] http://www.humancentricai.org/
[25] https://www.youtube.com/watch?v=e9vROTibKiE
[26] https://2017.trustfactory.org/
[27] https://docs.google.com/presentation/d/1cq8rI71tR31vpklPiHcvoAqVC5EeGtD0M05DBedp1nE/edit#
[28] https://docs.google.com/presentation/d/1RzczQPfygLuowu-WPvaYyKQB0PsSF2COKldj1mjktTs/edit#
[29] https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1r-bo83ImIEjSCmOFFMcT7F79OnCHDOGdkC_g9bOVFZg&font=Default&lang=en&hash_bookmark=true&initial_zoom=4&height=750#event-consciousness-qm-ai-studies-video-edition
[30] https://www.linkedin.com/pulse/definition-ai-weapon-timothy-holborn/
[31] https://www.linkedin.com/pulse/secure-digital-print-machines-trust-factory/
[32] https://www.webizen.net.au/about/executive-summary/preserving-the-freedom-to-think/
[33] https://www.youtube.com/watch?v=ZYPjXz1MVv0&list=PLCbmz0VSZ_voTpRK9-o5RksERak4kOL40&index=4
[34] https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

On Fri, Apr 7, 2023, 8:41 AM Timothy Holborn <timothy.holborn@gmail.com> wrote:

>> ---------- Forwarded message ---------
>> From: Digital-Compact <digitalcompact@un.org>
>> Date: Fri, 7 Apr 2023 at 07:02
>> Subject: Global Digital Compact – Thematic Deep-Dive Internet Governance 13 April
>> To: Digital-Compact <digitalcompact@un.org>
>>
>> Dear all,
>>
>> Please find attached a letter from the Co-facilitators with further information on the thematic deep dive *Internet Governance*, scheduled for *Thursday, 13th April, starting at 10am ET* (New York time).
>>
>> To participate in this or other deep-dives, please register through this website <https://www.un.org/techenvoy/global-digital-compact/intergovernmental-process>.
>>
>> Guidance on how to inscribe on the speakers' list will again be provided during the meeting.
>>
>> Please be reminded also that the deadline for submitting written inputs via the website <https://www.un.org/techenvoy/global-digital-compact> is coming up on 30 April 2023. Everyone is welcome to contribute!
>>
>> With best regards,
>>
>> The Office of the Envoy on Technology
>>
>> www.un.org/techenvoy/global-digital-compact
>> #globaldigitalcompact
>> E: digitalcompact@un.org
Received on Friday, 7 April 2023 11:29:07 UTC