Generative AI Use-Case: A Sensitive Use-Case Processor?

Hi all,

I have been thinking about how we might tackle use-cases that relate to
traumatic, private & personal lived experiences. I think I've got an
idea...

At the core of my work on Human Centric AI solutions is a desire to attend
to various foundational 'safety protocols', so that implementers can be
equipped for broader 'ecosystem' needs, empowering their ability to
implement solutions without compromising what might be considered moral
duties or responsibilities. For example, in the system I want to deliver,
I want to have 'values credentials', which are basically thought of as
human-rights-like instruments for people to make agreements with one
another. Another example is that I'd like to ensure that if someone needs
to leave the ecosystem version I'm creating, they're able to continue to
operate their information systems on an alternative provider... but these
are just two ideas, relating to many different use-cases.

I think the use-cases relating to the lower levels of Maslow's hierarchy
of needs ( https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs ) are
the most important. However, one of the problems is that the most important
use-cases may not be easily discussed publicly.

In 2017 ( https://2017.trustfactory.org/ ) I was thankful Andrew was
involved; his works (see videos:
https://www.youtube.com/@Andrewmmacleod/videos ) illustrate only some of
the difficult use-cases that exist, requiring 'safety protocols' in my
opinion...

I have been troubled by considerations of how these might be attended to,
given the nature of some of these sorts of issues, and I was struggling to
come up with a solution that could ensure they were able to be discussed on
the W3C Human Centric AI Group lists, etc. Then I had an idea: what if we
process the user-stories via a generative AI agent, and thereby reduce
and/or alter each user-story in a way that provides us the use-cases we
need, so that those use-cases can then be logged publicly via W3C systems
and processed as part of the requirements via the prescribed processes? I'm
thinking that the main thing we need is the use-cases, although I'm not
entirely sure what the best approach is yet... So, here are some thoughts
about it now...

A Sensitive Use-Case Processor

I've also used the opportunity to test out https://bard.google.com/ - the
text below is edited from a draft provided by Bard, and I've copy/pasted
the conversation into the Google Doc linked here:

https://docs.google.com/document/d/1XUaK6UenHmz3UhmGYnp2dsUWeEfVRyqBkz6_a2S6po0/edit?usp=sharing

Note: the doc illustrates various issues with the way Bard processed the
concept, which differed from the intended purpose.

Concept Summary:

The project concept is to create a generative AI tool that can process
private and personal traumatic experiences in a safe way, for the purpose
of producing use-cases that consider the key issues illustrated by the
private user-story.

In order to consider use-cases relating to matters of a private and/or
sensitive nature, those matters somehow need to be described publicly, to
provide the inputs needed to work on solutions that can address those
issues.

By creating a generative AI tool that can process the private and personal
information contained within a user-story relating to a person's traumatic
experiences, anonymize it into a derivative summary of the key issues
associated with that user-story, and provide a group of use-cases that
could be added to the web-standards work to-do list, a means to address
these problems could be established.

The tool would need to remove any identifying information, and then
generate a summary of the key issues associated with the story.
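
To make that processing step more concrete, here is a rough sketch of how
such a two-stage tool might look in Python. This is a minimal sketch under
stated assumptions, not a definitive implementation: generate() is a
hypothetical stand-in for whatever generative AI backend is ultimately
chosen (no model or API is settled here), and the regex pass is only a
naive illustration of identifier removal - a real tool would need far more
robust PII detection and human review before anything is shared.

```python
import re


def redact_identifiers(story: str) -> str:
    """Naive first pass: strip obvious identifying tokens from the
    user-story before it is sent anywhere."""
    story = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", story)
    story = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", story)
    # Crude guess at personal names (capitalized word pairs/triples).
    story = re.sub(r"\b(?:[A-Z][a-z]+ ){1,2}[A-Z][a-z]+\b", "[NAME]", story)
    return story


def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI backend;
    wire this up to a real model in order to experiment."""
    raise NotImplementedError("connect a generative AI backend here")


def process_user_story(story: str) -> str:
    """Redact identifiers, then ask the model for an anonymized
    summary of key issues plus derivative use-cases suitable for
    public logging via W3C systems."""
    redacted = redact_identifiers(story)
    prompt = (
        "The following user-story has had identifiers removed. "
        "Produce (1) an anonymized summary of the key issues and "
        "(2) a list of generalized use-cases derived from it, "
        "containing no personal details:\n\n" + redacted
    )
    return generate(prompt)
```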

This summary would be in a format that could be shared publicly, alongside
the use-cases aimed at defining means to address those problems.
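
As for what that shareable format might look like, here is one possible
shape - an assumption on my part, not a settled design - reusing the
provider-portability example from earlier in this email as illustrative
content:

```python
from dataclasses import dataclass, field


@dataclass
class UseCase:
    title: str        # short public name for the use-case
    problem: str      # generalized statement of the issue
    requirement: str  # what a solution would need to provide


@dataclass
class PublicRecord:
    """The anonymized summary travels together with its derivative
    use-cases, so both can be logged publicly as one unit."""
    summary: str
    use_cases: list[UseCase] = field(default_factory=list)


example = PublicRecord(
    summary=("A person was unable to retain control of records about a "
             "difficult life event after leaving a service provider."),
    use_cases=[
        UseCase(
            title="Data portability after exit",
            problem=("Users cannot move their information systems to an "
                     "alternative provider."),
            requirement=("Exportable, provider-independent personal data "
                         "stores."),
        ),
    ],
)
```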

This approach is intended to support our ability to articulate and better
understand the problems we are seeking to address.

Through the use of anonymized user-stories, or moreover derivative
use-cases, the outcome would aim to improve our understanding of the
challenges faced by people who have experienced trauma or other difficult
events, and then to produce better tools to address these problems than
was possible without a means to describe them. This could lead to the
development of new and more effective solutions to human-rights-related
problems.

Improved "Human Centric AI" web standards: By anonymizing and distilling
private and personal traumatic experiences into a format that allows them
to help inform our process of designing fit-for-purpose solutions online
publically; the description of problems is an essential part of the process
for defining use-cases related to defining means to address those problems,
that technology can be made to create solutions to address - that would not
have otherwise been created to address problems, that have not been
described.

I believe the general concept of this tool has the potential to make a
positive impact on our ability to do useful work that can, in turn, deliver
better outcomes for the lives of many people. As such, the general idea
and/or theory is that by using generative AI to help people process
confidential contributions in a safe, sensitive and dignified way, we can
do work to address problems that are otherwise not able to be discussed.

If you like the general idea and/or have feedback, suggestions, concerns
and/or productive input, let me know your thoughts on how to advance this
idea into something useful and fit-for-purpose with respect to the Human
Centric AI works.

Please let me know if you are interested in discussing and/or helping to
advance this concept further, and/or any objections that may exist. More
broadly, I'm not sure how best to implement it yet. It'll require more
work, but I think this might be useful progress. It's certainly very
difficult to define solutions to problems that cannot be described.

Cheers,

Timothy Holborn.
