Re: Artificial Intelligence and Group Deliberation

Greetings HCAI,

It is nice to be a part of this group; thank you, Tim, for the invitation.  It is amazing to see this topic finally break through into a standards group, and to see the rapid growth in awareness of this space in the last couple of months.  It truly shows the potential of AI for harm, or for planet-saving good.

I believe digital equality should be inclusive and available to all, and that this is required so that people can control their own data, wherever it may be.  A critical digital freedom, in my opinion.

This is why I propose a focus on a Digital Transparency standard, one used to generate notice records and consent notice receipts for every digital processing interaction, through any interaction with notices, notifications, disclosures, and even physical signs.

In this implementation approach, a mirrored record of processing activities is generated from a notice (rather than from tick-box policy agreements), so that people get what is, essentially, a receipt.  The receipt is (FYI) likely the oldest form of human writing <https://numismatics.org/pocketchange/receipts/> and the first ever form of scripting.  The receipt solves for the issue of trust, and, implemented to capture metadata, includes everyone: equality in records.
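
To make this concrete, here is a minimal sketch in Python of the kind of receipt a notice interaction could generate.  The field names are illustrative, loosely echoing the Kantara Consent Receipt work rather than any normative schema:

import json
import uuid
from datetime import datetime, timezone

def make_consent_receipt(principal_id, controller, purpose, policy_url):
    """Build a minimal, illustrative consent receipt for one notice interaction."""
    return {
        "receiptId": str(uuid.uuid4()),          # unique identifier for this record
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "piiPrincipalId": principal_id,          # the person this record belongs to
        "piiController": controller,             # who is accountable for the processing
        "policyUrl": policy_url,                 # the notice this receipt mirrors
        "purpose": purpose,                      # why the personal data is processed
        "collectionMethod": "notice-interaction",
    }

receipt = make_consent_receipt(
    "alice@example.org",
    {"name": "Example Co", "contact": "privacy@example.com"},
    "newsletter",
    "https://example.com/privacy",
)
print(json.dumps(receipt, indent=2))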

My work has championed consent standards over the last decade, along with the Consent Receipt information structure and technology.  This grew out of my research as a social scientist and my work as a digital identity engineer; I champion consent and notice receipts at the Kantara Initiative.  Since the volunteer work began in 2013, it has been incorporated into ISO/IEC 29184, the online privacy notice and consent standard, and is the basis for ISO/IEC 27560, the consent record information structure (in progress).  This gives us a means to implement the solution architecture for human-centric AI.

To this point: as humans, consent is how we control data and information in context; it is not an out-of-context tool (for us).  Systems manage permissions; humans manage consent, which means we manage many permissions in context, contrary to what tech has us all believing.  Clarity in semantics, and in the ontologies involved, has been the battleground for these concepts.  It is therefore remarkable that we do not have our own records of digital relationships with which to govern our own data and manage AI.  We need these records to start the arms race.

Digital notice records have long been required, but are not produced by industry.  We came close in 2015, but now we have a better alternative at the Kantara Initiative ANCR WG<https://kantara.atlassian.net/wiki/spaces/WA/pages/164823104/ANCR+Digital+iDentity+Transparency+Framework+DT-Levels+of+Assurance>: a human trust framework for making records and receipts that supersedes T&Cs.

The ANCR Digital Transparency Framework<https://kantara.atlassian.net/wiki/spaces/WA/pages/164823104/ANCR+Digital+iDentity+Transparency+Framework+DT-Levels+of+Assurance>, which uses an international adequacy and transparency baseline, has transparency performance indicators<https://kantara.atlassian.net/wiki/spaces/WA/pages/82542593/Trust+Performance+Indicator+s+TPI+s> that generate PII Controller Credentials<https://kantara.atlassian.net/wiki/spaces/WA/pages/114098237/Open+Notice+Record+PII+Controller+Credential> (digital identity), defined for the public interest and inherently enabling privacy AI.
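
As a toy illustration only (the fields and scoring here are invented for this sketch, not taken from the ANCR specification), a transparency performance indicator can be as simple as checking which required disclosure fields a notice record actually carries:

def transparency_indicator(notice_record):
    """Toy TPI: score a notice record by which required disclosure fields are present."""
    required = ["piiController", "contact", "purpose", "policyUrl", "jurisdiction"]
    missing = [field for field in required if not notice_record.get(field)]
    return {
        "score": (len(required) - len(missing)) / len(required),
        "missing": missing,                      # what the controller failed to disclose
    }

print(transparency_indicator({"piiController": "Example Co",
                              "policyUrl": "https://example.com/privacy"}))
# -> {'score': 0.4, 'missing': ['contact', 'purpose', 'jurisdiction']}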

Imagine what we could do if we had our own records, and every data processing activity produced a receipt that was added to them.  We wouldn't need to ask for our data to be deleted anymore; we could do it across multiple providers, in context, with the click of one button.  We would have personalized digital privacy policies, not privacy policies for contracts (T&Cs).
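
A hedged sketch of that one-button flow, building on the receipt structure sketched above.  The per-controller erasure endpoint is hypothetical; no such standardized endpoint exists yet:

import requests

def request_erasure(receipt_store):
    """Ask every controller in a personal receipt store to erase our data."""
    for receipt in receipt_store:
        # Hypothetical per-controller endpoint; a real deployment would need a standardized one.
        endpoint = receipt["policyUrl"].rstrip("/") + "/erasure-request"
        response = requests.post(endpoint, json={
            "piiPrincipalId": receipt["piiPrincipalId"],
            "receiptId": receipt["receiptId"],
            "right": "erasure",                  # e.g., GDPR Article 17
        })
        print(receipt["piiController"]["name"], response.status_code)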

With a consent-based authorization protocol (like AuthC<https://kantara.atlassian.net/wiki/spaces/WA/pages/38174721/AuthC+Protocol>), we wouldn't need logins for every service either.  We could use verified credentials validated by regulated third-party governing authorities.
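
I won't reproduce protocol details here, but the shape of the idea fits in a few lines.  This is a toy sketch with invented field names, not the AuthC wire format:

def authorize(session_request, trusted_issuers):
    """Toy consent-based authorization: a verified credential replaces the login."""
    credential = session_request["verifiedCredential"]
    if credential["issuer"] not in trusted_issuers:
        # Credentials must be validated by a regulated governing authority.
        raise PermissionError("credential issuer is not a trusted governing authority")
    # The session is scoped to the consented purpose, not to an account and password.
    return {
        "subject": credential["subject"],
        "scope": session_request["consentReceipt"]["purpose"],
    }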

For high-assurance, active privacy-state monitoring, using our own records would help scale trust online (the main use of privacy AI).  Every interaction we have with any AI would then feed our own private AI with our personal micro-data, and our more intelligent data could then be permissioned according to a common set of rules (a digital privacy Magna Carta), and together we could become a smart species.  (That's the vision, anyway.)

This is why I urge a starting point in which a Digital Privacy AI is implemented as a co-regulatory framework, so that we can stop making everyone fill out their personal information for every service everywhere, and start a network protocol where processing of personal data requires a Controller credential, so there are no unknown third parties.  Then we control our own data sources.  This promising approach to regulating AI also addresses misinformation at scale and provides the foundational framework to govern future tech, like quantum and government AIs.
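
In code, the gating rule itself is simple.  A minimal sketch, again with invented field names:

def allow_processing(processing_request, known_controllers):
    """No unknown third parties: no valid Controller credential, no processing."""
    credential = processing_request.get("controllerCredential")
    if credential is None:
        return False
    return (credential.get("controllerId") in known_controllers
            and not credential.get("revoked", False))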

This year, in 2023, we finally have the laws and standards, and, with the January 4 pro-consent decision against Meta, the legal precedent to implement global-market-strength digital transparency, digital rights, and data control.

Nice to meet you all,

Mark Lizar

PS - I have a W3C Do Track - Digital Consent use case for private AI, if anyone is interested.  The idea is to cut out commercial intermediaries like Google and Facebook with our own Privacy AI: an AI that uses interactional privacy law to leverage the data we have in these services for our own empowering services (independent of big tech).

On Apr 8, 2023, at 9:23 AM, Adam Sobieski <adamsobieski@hotmail.com> wrote:

Mark,

Thank you for the useful hyperlink about differentiable technological development.


I see your points about personalization, preferences, and configurability. It seems that we can envision something like a configurable advanced Grammarly which would be equally available to all participants, councilmembers, representatives, legislators, and Congresspeople. Perhaps the composition-related functionalities in Web browsers' AI-enhanced sidebars may evolve into something similar.

When I consider lawyers in courtrooms and participants in group deliberations (e.g., legislators), what comes to my mind, in this particular discussion, is that I hope that they are all, or that their staffs are all, equally technologically proficient. Imagine a courtroom where one party’s lawyer had a laptop connected to a suite of advanced tools via Wi-Fi Internet while the other lawyer had a briefcase with a stack of papers.

Judicial and legislative systems seem to be more intuitively fair when all of the participants are equally proficient and equally equipped with preparation-related and performance-related tools. There could even be "arms race" scenarios in terms of speechwriting assistants, debate coaches, and other preparation-enhancing and performance-enhancing tools.

Arguments for advanced forum software (e.g., features delivered via plugins for social media software) include, but are not limited to, the following:


  1.  Some of these tools should obtain and process discussion contexts to provide better composition-related advice to each user.
     *   However, per-user tools like an advanced Grammarly or a Web browser sidebar composition tool could, in theory, scrape the entireties of discussion threads from webpages, up to the posts being authored (perhaps utilizing HTML5 document metadata, Web schema, or linked data), to deliver better advice requiring discussion contexts.
  2.  Some of the tools under discussion are useful for evaluating the posts of other users or evaluating combinations of posts from multiple users, even entire threads, as they unfold.



Best regards,

Adam

________________________________
From: Mark Hampton <mark.hampton@ieee.org>
Sent: Saturday, April 8, 2023 7:14 AM
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: public-humancentricai@w3.org
Subject: Re: Artificial Intelligence and Group Deliberation

Hi Adam,

I imagine those tools would need to be personalized (I'd prefer them owned by the user); one person's propaganda is another's preference. There would be broader dynamics that could be spotted through collaboration among those personalized systems, sharing information or using shared sources of information.

There is an almost certain risk of information overload, and that seems to make people more manipulable. If human centric means caring for humans, then I think we need to be careful. Human centric AI could become a way of accelerating AI rather than caring for humans - I would really like to see human centric AI leading to https://en.wikipedia.org/wiki/Differential_technological_development rather than to mitigations for inhuman technologies.

The current direction of technological progress does not seem very human centric at all. The work to build technical solutions to problems introduced by technology seems to be a symptom of this. I don't see any very important current short/medium term human material problems that need AI, but I'm open to being convinced otherwise. Technologists (and I speak as one) risk having a hard time accepting that they are part of the problem rather than the solution.

An off the cuff reaction but I hope it is of some use to you.

Kind regards,
  Mark


On Sat, Apr 8, 2023 at 12:57 AM Adam Sobieski <adamsobieski@hotmail.com> wrote:
Human-centric AI Community Group,

Something that Timothy Holborn said in a recent letter to this mailing list reminded me of some thoughts that I had about AI a few years ago. At that time, I was considering uses of AI technology for supporting city-scale e-democracies and e-townhalls. I collated a preliminary non-exhaustive list of tasks that AI could perform to enhance public discussion forums:

  1.  Performing fact-checking
  2.  Performing argument analysis
  3.  Detecting spin, persuasion, and manipulation
  4.  Performing sentiment analysis
  5.  Detecting frame building and frame setting
  6.  Detecting agenda building and agenda setting
  7.  Detecting various sociolinguistic, social semiotic, sociocultural and memetic events
  8.  Detecting the dynamics of the attention of individuals, groups and the public
  9.  Detecting occurrences of cognitive biases in individual and group decision-making processes

With respect to point 3, a worry is that some participants in a community might make use of AI tools to amplify the rhetoric used to convey their points of view. These were concerns about technologies like "virtual speechwriting assistants" and "virtual debate coaches".

Some participants of an e-townhall or social media forum might make use of AI tools to spin, to persuade, or to manipulate the other members for their own reasons or interests, or might do so on behalf of other parties who would pay them.

My thoughts were that technologies could mitigate these technological concerns. Technologies could monitor large-scale group discussions, on behalf of the participants, while serving as tools available to all of the participants. For example, AI could warn content posters before they posted contentious content (contentious per their agreed-upon rules) and subsequently place visible icons on contentious posts, e.g., content detected to contain spin, persuasion, or manipulation.

I was brainstorming about solutions where AI systems could enhance group deliberation, could serve all of the participants simultaneously and in an open and transparent manner, and could ensure that reason prevailed in group discussions and deliberations. Today, with tools like GPT-4, some of these thoughts about humans and AI systems interoperating in public forums, e-townhall forums, and social media seem once again relevant. Any thoughts on these topics?


Best regards,
Adam Sobieski
