Re: the intersection between AIKR and COGAI

Just a quick note...

I haven't had enough time to fully review the thread history (apologies)
but hope this helps.

Historically, over a very long period of time (since 2000), my perhaps
dogmatic focus has been on the idea of 'information banks' or 'knowledge
banks' for human beings; which has proven to be kinda impossible, at
least so far...

Some of the historical works considered 'hybrid TV' as an approach, which
involved various VOD/IPTV related works in the early 00s, leading to
'Project Kangaroo' and consequently thereafter 'HbbTV', which has since
evolved. Part of those works considered the idea of 'hypermedia content
packages' for distribution between 'channel providers', incorporating both
linear & non-linear content, etc., in a standardised way (rather than
DigiBetas). I see
https://www.bbc.co.uk/rd/blog/2019-06-bbc-box-personal-data-privacy has
progressed:
https://inrupt.com/blog/the-bbc-uses-inrupts-solid-server-to-deliver-viewers-a-personalized-but-private-watch-party-experience


Noting: https://lists.w3.org/Archives/Public/public-rww/2014Oct/0003.html
Some other related artifacts:
https://drive.google.com/drive/folders/1lV-Ruj9Gehwvs7B3wDLd6fmKIIvCmOqt

Yet, much to my frustration, too many barriers have so far proven too
difficult to overcome to provide a meaningful way to create better support
for natural persons to have a secure digital vault; and in turn, a means to
define their own 'AI' 'digital twin' (or so the term is now available to be
employed).

Part of my 'research' (although not entirely 'willingly', or in a manner
considered 'desirable') has led to a much greater understanding of
'dissociative' considerations, such as DID:
https://en.wikipedia.org/wiki/Dissociative_identity_disorder. Modules 3 & 4
of
https://www.unodc.org/unodc/en/human-trafficking/2009/anti-human-trafficking-manual.html
I found particularly useful when seeking to describe issues that, we all
hope, the vast majority do not understand, as a consequence of having never
seen them, nor having been in a situation of seeking to address similar
issues; issues which are so hard to describe without resources such as the
ones noted above.

Also, I am mindful of the "rigorous debate" / dispute process that played
out between various groups at the early stages of the 'credentials' work,
where WebID-TLS / WebID advocates (at the time I didn't really understand
the concept of 'foaf' as a protocol; I only found that recently via
archive.org works about something else) suggested that there's only a
concept of 'one self', as may be considered distinct from what I thought of
at the time as personas (although my thinking has evolved since); versus
what I guess would now be described as an objective to ensure an
'integrated self' (rather than disjoint 'multiple personalities'). Yet,

this has been via the lens, at least for me, of seeking to create
stewardship / ownership over 'thy digital self'; as such, when I've thought
about 'AI avatars' and such things, I've always thought of them as an
extension of self. Rather than that, and this is the point:

there's another way of looking at it, which may indeed have more efficacy
overall; and that is the concept of people owning their own 'robots' as
property.

I haven't put enough thought into this 'new tangent' or direction, which
may, I hope, help to provide a better rationalisation of a viable approach
that can support the notions I have broadly termed 'human centric';
although the intended meanings are not universally similar to others
promoted well since 2015/16.

Back in 2000-02, working with Sun Microsystems & others, their systems were
always about 'thin client' models; and the stories about the advent of
'desktop computing' (i.e. Apple, etc.) are well known, as are the
ideological differences in these designs (thin client vs. democratisation
of computing). So, I have a task: to review all the different types of
'robots' that have been described in historical works (mostly film / TV),
so as to forge a means to communicate the concept of 'people owning their
own robots'; that is, in effect,

rather than trying to preserve human dignity by seeking to extend our
organic self via a prosthesis (something I understand well, as I've had a
prosthetic eye since I was ~18 months old), perhaps a way to improve the
lived experiences of people is to build solutions that enable people to
'own their own robot'. In other words (re: 'metaverse'), the 'avatar' isn't
intended to be them; it's intended to be something they own, that works for
them, and that person might have a bunch of them...

Which then feeds into Dave's theorems (as far as I can tell?) around
'artificial minds' as a concept; and thereby, via W3C works, how to define
solutions that ensure 'compatibility' between different vendors providing
different software that supports the construction & use of people's own
'robots' as personally owned property.

A few examples include:

Clippy: https://the-microsoft-agent.fandom.com/wiki/Clippy
R2D2: https://www.youtube.com/watch?v=JLmOteqmDYc
Johnny Five: https://www.youtube.com/watch?v=l0zmCUVB0Yw
Wall-E: https://www.youtube.com/watch?v=QHH3iSeDBLo

Conversely:
Hal: https://www.youtube.com/watch?v=Mme2Aya_6Bc
Terminator: https://www.youtube.com/watch?v=tYc2jQaM8gM

Of course there are many more; and the work will probably require the
construction of some sort of taxonomy / ontology to illustrate the
differences (both in film & in the real world). I'm not sure what may
already exist to assist in getting this task done as yet.
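As a rough sketch only (the dimensions & names below are assumptions of
mine, not an existing ontology), the kind of taxonomy I mean might start as
simply as:

```python
from dataclasses import dataclass

# Hypothetical dimensions for classifying fictional robots; the class and
# field names are illustrative assumptions, not drawn from any published
# vocabulary.
@dataclass
class RobotArchetype:
    name: str
    owner: str          # who the robot works for: "person" or "institution"
    embodiment: str     # "physical" or "software"
    serves_owner: bool  # does it act in its owner's interests?

EXAMPLES = [
    RobotArchetype("Clippy", owner="person", embodiment="software", serves_owner=True),
    RobotArchetype("R2-D2", owner="person", embodiment="physical", serves_owner=True),
    RobotArchetype("Johnny Five", owner="person", embodiment="physical", serves_owner=True),
    RobotArchetype("WALL-E", owner="institution", embodiment="physical", serves_owner=True),
    RobotArchetype("HAL 9000", owner="institution", embodiment="software", serves_owner=False),
    RobotArchetype("Terminator (T-800)", owner="institution", embodiment="physical", serves_owner=False),
]

# Partition the examples by whether they act in their owner's interests.
aligned = [r.name for r in EXAMPLES if r.serves_owner]
misaligned = [r.name for r in EXAMPLES if not r.serves_owner]
print("aligned:", aligned)
print("misaligned:", misaligned)
```

A fuller version would presumably need to be expressed in RDF/OWL to
interoperate with W3C works; this is only to illustrate the kinds of
distinctions (ownership, embodiment, alignment with the owner's interests)
such a taxonomy might capture.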

The audience, I think, isn't simply 'ontologists' and luminaries such as
Dave Raggett / W3C peeps (etc.); rather, there's a diverse array of
'stakeholders', many of whom may have areas of expertise in fields of great
importance, yet are not really able to use technology very well (i.e. they
have trouble using their phones, or store their passwords in a Word
document on the desktop of their Windows environment, at home or at
work...).

Also, somewhat related but also different: I think there's a significant
difference between the 'web of data' (and its relationship to 'the web'
and the growth of it) vs. IRC, DNS, blockchain(s), etc.

Whilst 'the web' may grow to exist on many protocols other than HTTP(S),
and whilst some have suggested a shift from 'platforms' to 'protocols', as
I've been thinking about it a lot, there's a clear difference between the
'web of data' stuff (a progression of the 'semantic web', in effect) &
other ways networking can be performed, including but not limited to other
protocols built on top of IPv4/IPv6 / the Internet Protocol...

I understand I've noted a few different constituencies to 'ecosystems'
considerations broadly, some more complex, or complex in different ways,
than others... The note of most importance, I thought, was that I've
shifted my position to be less myopically focused on seeking to achieve
'human agency' via ownership of one's own 'digital self', towards a
different type of 'modality': how works might be considered through the
lens of 'owning your own robot'. This isn't only about natural persons,
but certainly they shouldn't be excluded as beneficiaries of the
advancement of 'human rights' 'values'...

[image: article27.png]
And whilst I'm still working on my UDHR 'test case' for 'values
credentials', to be usefully made available for use in connection with the
'digital identity' 'wallet' (that defines us) functionality, so that people
can have 'values credentials' they can employ when engaging in electronic
contracts (contract law) online, re: 'digital identity',

a not very 'well crafted' example being
https://webcivics.github.io/ontologies/un/UDHR/test/personhood/ which has a
bunch of data behind it:
https://validator.schema.org/#url=https%3A%2F%2Fwebcivics.github.io%2Fontologies%2Fun%2FUDHR%2Ftest%2Fpersonhood%2F
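For illustration only, a minimal sketch of how such a 'values credential'
might be expressed as schema.org-flavoured JSON-LD; the property choices,
names & URL below are assumptions of mine for illustration, not the
published test-case data:

```python
import json

# Illustrative stub of a 'values credential' referencing UDHR Article 27,
# using schema.org types; all values here are placeholder assumptions.
values_credential = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "UDHR Article 27 values credential (example)",
    "about": {
        "@type": "Legislation",
        "name": "Universal Declaration of Human Rights, Article 27",
        "url": "https://www.un.org/en/about-us/universal-declaration-of-human-rights",
    },
    "creator": {"@type": "Person", "name": "Example Person"},
}

# Serialise for embedding in a page, where a tool like the schema.org
# validator could then inspect it.
print(json.dumps(values_credential, indent=2))
```

The real test-case data is at the URLs above; this stub only shows the
general shape of the thing.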

Perhaps these sorts of instruments are important for 'our robots' to have
as part of their instruction set; something that, like R2D2, they use to
ensure machines are working, that doors can be opened, etc... Perhaps that
method will, in turn, be found to be more achievable than seeking
'platforms' & commercial operators to invest in supporting the means for
people to 'own their data' / 'digital self', etc.; as that objective hasn't
seemingly been given much support over the decades I've sought an outcome,
and the availability of a solution to support the human rights of people,
particularly vulnerable people, just hasn't materialised, regardless of the
advancement of tooling to do so. As such, whether simply as an interim
measure or otherwise, perhaps what's needed is a moment like the
availability of the first 'desktop computers':
- Apple Advertisement: https://www.youtube.com/watch?v=mMYsGdDssvk
- Jobs Launching Macintosh: https://www.youtube.com/watch?v=2B-XwPjn9YY
- Apple - Think different: https://www.youtube.com/watch?v=5sMBhDv4sik

Perhaps what we need to do is actively 'design' this new synthetic
'species' in a way that can reasonably provide us confidence that they'll
be of service to us & our human family.

Timothy Holborn




On Fri, 28 Oct 2022 at 11:05, Paola Di Maio <paoladimaio10@gmail.com> wrote:

> Thank you all for contributing to the discussion
>
> the topic is too vast - Dave, I am not worried whether we agree or not;
> the universe is big enough
>
> To start with, I am concerned whether we are talking about the same thing
> at all. The expression 'human level intelligence' is often used to
> describe neural networks, but that is quite a ridiculous comparison. If the
> neural network is supposed to mimic human level intelligence, then we
> should be able to ask: how many fingers do humans have?
> But this machine is not designed to answer questions, nor to have this
> level of knowledge about human anatomy. A neural network is not AI in
> that sense:
> it fetches some images and mixes them without any understanding of what
> they are,
> and the process of which images it has used, why, and what rationale was
> followed for the mixing is not even described; it's probabilistic. Go figure.
>
> Hey, I am not trying to diminish the greatness of the creative neural
> network; it is great work and it is great fun. But a) it is not an artist:
> it does not create something from scratch; b) it is not really intelligent,
> honestly. Try to have a conversation with a NN.
>
> This is what KR does: it helps us to understand what things are and how
> they work.
> It also helps us to understand if something is passed off as what it is not
> (evaluation).
> This is why even neural networks require KR; because without it, we don't
> know what a system is supposed
> to do, why and how, and whether it does what it is supposed to do.
>
> they still have a role to play in some computation
>
> * DR Knowledge representation in neural networks is not transparent, *
>> *PDM I'd say that it is either lacking or completely random*
>>
>>
>> DR Neural networks definitely capture knowledge as is evidenced by their
>> capabilities, so I would disagree with you there.
>>
>
> PDM  capturing knowledge is not knowledge representation; in AI,
> capturing knowledge is only one step, and the categorization of knowledge
> is necessary for the reasoning
>
>> *We are used to assessing human knowledge via examinations, and I don’t
>> see why we can’t adapt this to assessing artificial minds *
>> because assessment is very expensive, with varying degrees of
>> effectiveness, requires skills and a process, and may not be feasible,
>> when AI is embedded, to test it / evaluate it
>>
>>
>> We will develop the assessment framework as we evolve and depend upon AI
>> systems. For instance, we would want to test a vision system to see if it
>> can robustly perceive its target environment in a wide variety of
>> conditions. We aren’t there yet for the vision systems in self-driving cars!
>>
>> Where I think we agree is that a level of transparency of reasoning is
>> needed for systems that make decisions that we want to rely on.  Cognitive
>> agents should be able to explain themselves in ways that make sense to
>> their users, for instance, a self-driving car braked suddenly when it
>> perceived a child to run out from behind a parked car.  We are less
>> interested in the pixel processing involved, and more interested in whether
>> the perception is robust, i.e. the car can reliably distinguish a real
>> child from a piece of newspaper blowing across the road where the newspaper
>> is showing a picture of a child.
>>
>> It would be a huge mistake to deploy AI when the assessment framework
>> isn’t sufficiently mature.
>>
>> Best regards,
>>
>> Dave Raggett <dsr@w3.org>
>>
>>
>>
>>

Received on Friday, 28 October 2022 02:37:25 UTC