Re: Artificial Minds & Webizen Design

The Webizen work is progressing well, I think.

The doc has developed:
https://docs.google.com/document/d/11PowzV6lLZG1MbV5cVJiSRLNLiBKvNLB42pIw3OuURo/edit?usp=sharing


as has the supporting spreadsheet, listing different types of 'artificial
minds' / robots throughout history; noting that webizen is intended to be a
particular type, unlike many other types. In any case, the point of it is
that it helps provide a 'frame' to discuss qualities and what's wanted /
unwanted, etc.; characterisations, in effect.

https://docs.google.com/spreadsheets/d/1rqYC2E2BDIHBADAT7-9CabawkmYBJpBBf1KJO24D7ig/edit#gid=1503872436

As I go about seeking to update
https://github.com/WebCivics/webizen.org-temp, I'm looking at various forms
of bot tools, like AIML and Prolog; my thinking is that it would be good to
make one in cog-ai...
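For anyone unfamiliar with the AIML style of bot mentioned above, here is a minimal sketch of the pattern/template idea in Python. The patterns, responses, and matching rules are purely illustrative assumptions for discussion; they are not taken from any existing Webizen or cog-ai codebase.

```python
# Illustrative sketch of the AIML-style pattern/template approach: each
# "category" pairs a pattern (with '*' as a wildcard) with a response.
# All patterns and responses here are hypothetical examples.
import re

CATEGORIES = [
    ("HELLO *", "Hi there! How can your webizen help today?"),
    ("WHAT IS A WEBIZEN", "A 'webizen' is a software agent a person owns as property."),
    ("*", "I'm not sure how to respond to that yet."),
]

def respond(message: str) -> str:
    """Return the response of the first category whose pattern matches."""
    text = message.strip().upper()  # AIML normalises input to upper case
    for pattern, template in CATEGORIES:
        # Translate the AIML-like '*' wildcard into a regular expression.
        regex = "^" + re.escape(pattern).replace(r"\*", ".*") + "$"
        if re.match(regex, text):
            return template
    return ""

print(respond("hello webizen"))
```

A Prolog or cog-ai version would express the same categories as rules/facts rather than a list; the appeal of the approach is that non-programmers can author new categories as data.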

FWIW: whilst I'm still working through 'privacy' (security) modelling,
I've been thinking about 'safety protocols' (as noted briefly in the
webizen doc). Therein, it's desirable to have a very secure environment;
yet there's a bunch of competing interests / values of importance, at
least to protect people, socially.

Maybe this in turn leads to other cog-ai use-cases / models? I don't know
yet, but FYI.

Let me know about the chat-bot.

Timothy Charles Holborn (timo)

On Tue, 1 Nov 2022 at 21:28, Timothy Holborn <timothy.holborn@gmail.com>
wrote:

> Hi all,
>
> (i've cc'd 'science of consciousness' group - as an FYI of sorts, they're
> assistance - as a group, significantly contributed towards my modern-day
> analysis of what became an options analysis).
>
> Whilst the method may be considered 'foolish', I've always been dedicated
> to 'freedom of mind', natural justice, etc. As such, there has always been
> a false dichotomy[1] between 'freedom of mind' & peace infrastructure, vs.
> the commercial desires of entities who seek to rule us without
> accountability[2][3]. Whilst much has happened over the last ~decade, the
> RWW, CrossCloud[4] and 'SoLiD' concepts now appear to be 'outmoded', much
> like the era of mainframes vs. desktop computing[5][6][7] that eventually
> led to NeXT[8] & the advent of the WWW[9]...
>
> The vast majority of humanity may not comprehend the difference between
> 'the web' and 'the internet'[10]; but at least we're still at a time where
> it's part of our 'living history', regardless of engendered means for
> people, humanity, to become confused, & the (energy) cost linked with good
> & proper resolution... We have timeline tools that have been made[11], but
> too often they're not easily used; which, in terms of cognitive AI,
> appears to me to be a significant barrier of importance for humanity to
> resolve, in the interests of humanity and, indeed, therein also human
> rights[2].
>
> These seemingly existential problems have worried me greatly. The idea
> that people can have agency in relation to the records produced about
> their lives, to at least support the human rights of their children or
> indeed themselves, to support #RealityCheckTech, is in effect 'out of
> scope'.
>
> The problems related to these decisions, made by many, are enormously
> significant. So, I've been puzzling over the problem with an interest to
> produce 'peace infrastructure' foundations, with respect to cognitive AI
> and the considerations illustrated to me by Dave Raggett in particular
> about 'artificial minds'. Through the lens of seeking to produce an
> artifice extension to self, like a prosthetic (I have had a prosthetic eye
> since I was a baby, due to the science of the day, ~43 years ago), it
> doesn't appear as though that's a safe thing to support; it appears that
> there is overwhelming support for digital slavery...
>
> This has troubled me greatly.
>
> So, what I've come up with as a solution, a way to support the growth of
> CogAI's most important work to produce open standards re: AI, is to think
> about how we can produce 'robots' that people own as property. Property
> law is fairly well understood; whilst we may not be able to 'own our own
> minds' via 'web3', we are able to own property, and if we own our own
> robots, they'll need to have access to (and therefore storage of) the data
> about us in order to be useful for us; which is kinda like democratising
> AI, in a way that's not unlike how the work of Jobs[5][6][7][8] worked to
> democratise computing...
>
> In consideration, my thinking is to call these future 'AI robots' owned
> by humans 'webizen'[12], which has some history to it as a name, intended
> to support a dignity-enhancing outcome for those involved.
>
> Thereafter, I've started to create a list of different well-known
> 'robots' (cognitive AI in film/TV, etc.)[13], which is intended to be a
> means to communicate the difference between the sorts of things we're
> looking to create tools like CogAI to produce: solutions that humanity
> wants (as 'webizen'), vs. the alternatives that humanity doesn't want; and
> thereby assist in our design efforts to figure out how to address the
> complex problems.
>
> However, this does in turn illustrate an 'intended' outcome; there's a
> difference between 'putting yourself in the metaverse'[14] vs. this
> concept of looking at it differently: that natural persons can own their
> own robots, who can in turn fight the good fight on their behalf, to keep
> them safe in a world operated via cyber-infrastructure; a world where the
> characteristics of 'robots' (software in a 'thing') are different to those
> of humans, and many humans do not have sufficient 'representation' to
> support human dignity or other values[15], as a consequence of the design
> priorities of those who've built it.
>
> Anyone suggested to have 'mental illness' issues, or whatever may be used
> to dismiss their complaints, should be able to 'own their own robots'
> (like owning their own compute, rather than 'thin clients'); robots that
> can in turn engage people who seek to harm them, via law, to protect
> human dignity.
>
> Whilst I am quite sure people will want to make different kinds of
> robots[13], it would be good to work on producing a better chart of all
> the different ones that exist, which can be referred to by lay-people; so
> that those who are expected to make decisions (i.e. politicians, etc.) are
> able to know what sort of 'active artificial mind' their support is
> expected to help bring about, and the characteristics of that 'thing', in
> connection to the implications that will in turn be brought to bear upon
> its 'data subjects'[16]. Noting, of course, the concept isn't about making
> corrupt systems / processors / agents; but there's a lot to draw from
> historical creative works, to 'think' about how they might work.
>
> The consequence of this approach, amongst other 'things', is that we
> don't need to triage how the human mind works; rather, we need to focus on
> how to build 'artificial minds' that aid humanity...
>
> IMO: 'art', as the product of work by human minds, has an inextricable
> link to character, to values; and I've started to edit my earlier 'values'
> slidepack (which was still a draft) to redirect the concept to this idea
> about producing 'robots' that people can own as property, and the values
> factors involved[17]. But as much as I am concerned about the
> ramifications of seeking to do good via open-standards work, only to see /
> experience horrible outcomes / experiences; these 'things' will need to be
> interoperable; so...
>
> By all means, let me know if the concept is rejected; certainly, learning
> why would also be useful.
>
> I do hope, as is my intent, that the work on 'artificial minds' is
> enhanced as a consequence of the creative work that I've done, to figure
> out a potential solution that, as far as I'm aware, did not exist before.
>
> AFAIK, 'solid' (or whatever it's called) + tooling may be able to operate
> on a mobile phone or similar: something that a person owns as property,
> so as to, indirectly, have some sense of 'human agency' in our
> cyber-realm. The list of 'different robots' (AI) that have been
> illustrated range from types that are not intended to be defined as
> 'webizen', through to others that are intended to be supported, via work,
> to deserve the intended 're-defined' meaning of that name...
>
> The spreadsheet[13] is open / able to be edited (similar to the tools
> sheet[18]); the format is designed to support
> https://timeline.knightlab.com/ - although this is early-stage work; and,
> as always, done without any kind of funding whatsoever, other than my
> commitment to do work considered important for the future of humanity. At
> a minimum, people should be provided an opportunity to have choices; and,
> as noted earlier[19]:
>
> “the distinction between reality and our knowledge of reality, between
> reality and information, cannot be made” Anton Zeilinger[20]
>
> Cheers,
>
> Timothy Holborn.
>
> Links:
> [1] https://miro.medium.com/max/4800/1*5KzkYHRy0B_OKP3aagsqhg.png
> [2] https://www.youtube.com/watch?v=pRGhrYmUjU4
> [3] http://dig.csail.mit.edu/2010/Papers/IAB-privacy/httpa.pdf
> [4] https://web.archive.org/web/20220000000000*/http://crosscloud.org/
> [5] https://www.youtube.com/watch?v=VtvjbmoDx-I
> [6] https://www.youtube.com/watch?v=2B-XwPjn9YY
> [7] https://www.youtube.com/watch?v=5sMBhDv4sik
> [8] https://www.youtube.com/watch?v=92NNyd3m79I
> [9]
> https://worldwideweb.cern.ch/browser/#http://info.cern.ch/hypertext/WWW/People.html
>
> [10] https://twitter.com/w3c/status/1105453516154433536
> [11] https://timeline.knightlab.com/
> [12]
> https://docs.google.com/document/d/11PowzV6lLZG1MbV5cVJiSRLNLiBKvNLB42pIw3OuURo/edit?usp=sharing
>
> [13]
> https://docs.google.com/spreadsheets/d/1rqYC2E2BDIHBADAT7-9CabawkmYBJpBBf1KJO24D7ig/edit?usp=sharing
>
> [14]
> https://drive.google.com/drive/folders/153mjj1yhDzS5_idCZOQtmg9BThu37zfW
> [15]
> https://docs.google.com/spreadsheets/d/1QWh8r2rkjrDHjjimKAGGHA-es6d__MQfoQuFV29XEiw/edit?usp=sharing
>
> [16] https://www.w3.org/TR/vc-data-model/#credential-subject-0
> [17]
> https://docs.google.com/presentation/d/1CpGf5S8JBQzCug7QzKDS4wzJdsSEFaRI1j3RunH5v10/edit?usp=sharing
>
> [18]
> https://docs.google.com/spreadsheets/d/19IEgvdvwl_EOGhmIFinVQu4OerRojeje8PaZWGvoO4Q/edit#gid=0
>
> [19]
> https://www.webizen.net.au/about/executive-summary/preserving-the-freedom-to-think/
>
> [20] https://www.nature.com/articles/438743a
>

Received on Tuesday, 15 November 2022 11:51:44 UTC