Re: Artificial Intelligence and Group Deliberation

Mark,

Thank you. With respect to AI and law, here are some hopefully interesting and useful hyperlinks:

  1.  International Association for Artificial Intelligence and Law (http://www.iaail.org/)
  2.  19th International Conference on Artificial Intelligence and Law - ICAIL 2023 (https://icail2023.di.uminho.pt/)
  3.  Jurix (http://jurix.nl/)
  4.  IBM Project Debater (https://research.ibm.com/interactive/project-debater/)
  5.  LexisNexis (https://www.lexisnexis.com/en-us/home.page)
  6.  LexisNexis: State and Local Government (https://www.lexisnexis.com/en-us/government/state-and-local-government.page)
  7.  LexisNexis: 2023 Generative AI & The Legal Profession (https://www.lexisnexis.com/pdf/lexisnexis-legal-ai-sample-report.pdf)

Some alternatives to LexisNexis are available here (https://craft.co/lexisnexis/competitors).


Best regards,
Adam

P.S.: AI in political campaigns is a separate can of worms...

________________________________
From: Mark Hampton <mark.hampton@ieee.org>
Sent: Monday, April 10, 2023 2:40 AM
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: public-humancentricai@w3.org <public-humancentricai@w3.org>
Subject: Re: Artificial Intelligence and Group Deliberation

Hi Adam,

I have the impression the legal system is already an arms race in which those with many resources do not get the same outcomes as those with few. The varying capability of individual lawyers and teams also biases the system. Technology seems more likely to accentuate the problem than to solve it. Consider a political system where resources are not limited during elections: I think we can see how this leads to a system serving the interests of those who concentrate large resources. There are other political systems where campaigns are limited to a certain cost and reimbursed by the state - a simple, non-technical solution.

A fair justice system might need similar resource constraints rather than trying to maintain equilibrium in an arms race. For example, we could limit the resources available for a particular type of case. This could avoid situations where defendants are not given legal counsel with sufficient skill and/or resources.

I like your ideas for forum software and personal assistance. I've been thinking about this with regard to different contexts - in some situations the priority might be effective communication (e.g., the user is not seeking a relationship), and in others the authenticity of "poor" communication can lead to desirable dynamics.

Coming back to the courtroom: I have a rather naive understanding of the legal system, but it is interesting to note that in some systems the vast majority of cases are never actually argued, because the prosecutor negotiates via plea bargaining to avoid arguing cases in court. I recently listened to someone who spent 20-odd years in a US prison for a murder that was committed while he was being held in jail after an arrest. The old technology of beating a confession out of someone still seems to work very well in some systems.

Kind regards,
   Mark

P.S. Here is the above message, massaged by GPT-4 for a public forum conversation:


Dear All,

I hope you are well. Drawing from our recent discussions, I would like to explore the role of technology, particularly AI, in judicial and legislative systems and group deliberations. We must consider the potential benefits and drawbacks of integrating AI into these contexts, as well as the importance of maintaining fairness and resource balance.

The legal system often appears to be an arms race, where resource disparities impact outcomes and create biases. As technology advances, it may accentuate these issues rather than resolve them. For example, in political systems without resource constraints during elections, those with concentrated resources often dictate the system's priorities. In contrast, some political systems impose cost limits and provide state reimbursements, creating a fairer environment through non-technical solutions.

A fair justice system might similarly benefit from resource constraints rather than perpetuating an arms race. By limiting resources for specific case types, we can mitigate situations where defendants lack skilled legal counsel or sufficient resources.

When discussing forum software and personal assistance, context plays a crucial role. In some cases, effective communication may be the priority, while in others, the authenticity of "imperfect" communication can lead to more desirable dynamics. Equitable access to advanced AI tools, such as configurable versions of Grammarly, and technological proficiency among all participants are essential for a fair system.

In many jurisdictions, the majority of cases are resolved through plea bargaining rather than court trials, and outdated practices, such as coerced confessions, persist. These complexities highlight the need for careful consideration when introducing new technologies and standards into legal systems and group deliberations.

To foster human-centric AI in judicial, legislative, and group deliberation contexts, we must ensure equal access to advanced tools, promote technological proficiency, and maintain transparency in AI systems' usage. By addressing these concerns, we can contribute to more equitable and effective decision-making processes.

I am eager to hear your thoughts and engage in a productive dialogue on these topics.

Best regards,
    Mark



On Sat, Apr 8, 2023 at 5:11 PM Timothy Holborn <timothy.holborn@gmail.com> wrote:
Mark,

It's fantastic to have you involved, and I look forward to getting stuck into the process of better understanding your work, which, IMO, appears remarkable...

Will follow up later. But, FWIW: ack.

Timothy Holborn.


On Sun, 9 Apr 2023 at 00:51, Mark Lizar <mark@openconsent.com> wrote:
Greetings HCAI,

It is nice to be a part of this group; thank you, Tim, for the invitation. It is amazing to see this topic finally break through into a standards group, and amazing to see the rapid growth in awareness of this space in the last couple of months - truly showing the potential of AI for bad, or for planet-saving goodness.

I believe digital equality should be inclusive and available to all, and that this is required so that people can control their own data, wherever it may be - a critical digital freedom, in my opinion.

This is why I propose a focus on a Digital Transparency standard that is used to generate notice records and consent notice receipts for every digital processing interaction, through any interaction with notices, notifications, disclosures, and even physical signs.

This implementation approach proposes that a mirrored record of processing activities is generated from a notice (rather than from tick-box policy agreements), in which people get what is, essentially, a receipt - a receipt being (FYI) likely the oldest form of human writing <https://numismatics.org/pocketchange/receipts/> and the first-ever form of scripting. The receipt solves for the issue of trust and, implemented to capture metadata, includes everyone: equality in records.
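
To make this concrete, here is a minimal sketch (in TypeScript) of what such a notice receipt record might look like. The field names are illustrative assumptions of mine, not the actual ISO/IEC 29184 or ISO/IEC 27560 schemas:

// Illustrative sketch only: these field names are hypothetical and are not
// taken from ISO/IEC 29184 or ISO/IEC 27560, which define the real schemas.
interface NoticeReceipt {
  receiptId: string;            // unique identifier for this receipt
  timestamp: string;            // ISO 8601 time the notice was encountered
  piiController: {              // the party accountable for the processing
    name: string;
    contact: string;
    jurisdiction: string;
  };
  noticeText: string;           // the notice, notification, disclosure, or sign
  processingPurposes: string[]; // purposes mirrored from the notice itself
  consentState: "not-asked" | "given" | "withdrawn";
}

// The record is generated from the notice, so the individual keeps a mirrored
// record of the processing activity rather than a tick-box policy agreement.
function generateReceipt(noticeText: string,
                         controller: NoticeReceipt["piiController"],
                         purposes: string[]): NoticeReceipt {
  return {
    receiptId: crypto.randomUUID(),
    timestamp: new Date().toISOString(),
    piiController: controller,
    noticeText,
    processingPurposes: purposes,
    consentState: "not-asked",
  };
}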

My work has championed consent standards over the last decade, as well as the Consent Receipt information structure and technology. This has been developed out of my research as a social scientist and as a digital identity engineer; I champion the consent and notice receipts at the Kantara Initiative. Since the volunteer work began in 2013, it has been successfully included in ISO/IEC 29184 (the online privacy notice and consent standard) and is the basis for ISO/IEC 27560, the consent record information structure (in progress) - which means it can implement the solution architecture for human-centric AI.

To this point: as humans, consent is how we control data and information in context; it is not an out-of-context tool (for us). Systems manage permissions; humans manage consent, which means we manage many permissions in context, contrary to what tech has us all believing. Clarity in semantics and in the ontologies involved has been the battleground for these concepts. As such, it is remarkable that we do not have our own records of digital relationships, so that we can govern our own data and manage AI. We need these records to start the arms race.

Digital notice records are, and have long been, required, but they are not produced by industry. We came close in 2015, but now we have a better alternative at the Kantara Initiative ANCR WG<https://kantara.atlassian.net/wiki/spaces/WA/pages/164823104/ANCR+Digital+iDentity+Transparency+Framework+DT-Levels+of+Assurance> - a human trust framework for making records and receipts that supersedes T&Cs.

The ANCR Digital Transparency Framework<https://kantara.atlassian.net/wiki/spaces/WA/pages/164823104/ANCR+Digital+iDentity+Transparency+Framework+DT-Levels+of+Assurance>, which uses an international adequacy and transparency baseline, has transparency performance indicators<https://kantara.atlassian.net/wiki/spaces/WA/pages/82542593/Trust+Performance+Indicator+s+TPI+s>, which generate PII Controller Credentials<https://kantara.atlassian.net/wiki/spaces/WA/pages/114098237/Open+Notice+Record+PII+Controller+Credential> (digital identity) that are defined for the public interest and inherently enable privacy AI.

Imagine what we could do if we had our own records, and every data processing activity produced a receipt that was added to them. We wouldn't need to ask for our data to be deleted anymore; we could do that across multiple providers, in context, with the click of one button. We would have personalized digital privacy policies, not privacy policies for contracts (T&Cs).

With a consent-based authorization protocol (like AuthC<https://kantara.atlassian.net/wiki/spaces/WA/pages/38174721/AuthC+Protocol>), we wouldn't need logins for every service either. We could use verified credentials validated by regulated third-party governing authorities.
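
As a rough sketch of the idea (this is not the actual AuthC specification - the types, registry, and function names below are my own assumptions), a service would present a controller credential instead of demanding a login, and the user's agent would check it against authorities it trusts:

// Hypothetical sketch of a consent-based authorization check; this is not
// the actual AuthC protocol, which is specified at the Kantara ANCR WG.
interface ControllerCredential {
  controllerId: string;  // identifies the PII Controller (digital identity)
  issuer: string;        // the regulated third-party governing authority
  signature: string;     // the issuer's signature over the credential body
}

// Authorities that the user's agent is configured to trust.
const trustedAuthorities = new Set(["authority.example"]);

// Placeholder: a real implementation would verify the signature against
// the issuer's published public key.
function verifySignature(credential: ControllerCredential): boolean {
  return credential.signature.length > 0;
}

// Instead of a per-service login, the service presents its credential and
// the user's agent decides whether processing may proceed.
function authorizeProcessing(credential: ControllerCredential): boolean {
  return trustedAuthorities.has(credential.issuer) && verifySignature(credential);
}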

For high-assurance, active privacy-state monitoring, using our own records would help scale trust online (the main use of privacy AI). Every interaction we have with any AI would then feed our own private AI; our personal micro-data, and our more intelligent data, could then be permissioned according to a common set of rules (a digital privacy Magna Carta), and together we could become a smart species. (That's the vision, anyway.)

This is why I urge a starting point in which a Digital Privacy AI is implemented as a co-regulatory framework, so that we can stop making everyone fill out their personal information for every service everywhere, and instead start a network protocol where processing of personal data requires a Controller credential, so that there are no unknown third parties. Then we control our own data sources. This promising approach to regulating AI also addresses misinformation at scale and provides the foundational framework to govern future tech, like quantum and government AIs.

This year, in 2023, we finally have the laws, the standards, and - with the January 4 pro-consent decision against Meta - the legal precedent to implement global-market-strength digital transparency, digital rights, and data control.

Nice to meet you all,

Mark Lizar

PS - I have a W3C Do Track - Digital Consent Use Case for Private AI, if anyone is interested. The idea is to cut out the commercial intermediaries like Google and Facebook, but with our own Privacy AI: an AI that uses interactional privacy law to leverage the data we have in these services for our own empowering services (independent of big tech).

On Apr 8, 2023, at 9:23 AM, Adam Sobieski <adamsobieski@hotmail.com> wrote:

Mark,

Thank you for the useful hyperlink about differentiable technological development.


I see your points about personalization, preferences, and configurability. It seems that we can envision something like a configurable advanced Grammarly which would be equally available to all participants, councilmembers, representatives, legislators, and Congresspeople. Perhaps the composition-related functionalities in Web browsers' AI-enhanced sidebars may evolve into something similar.

When I consider lawyers in courtrooms and participants in group deliberations (e.g., legislators), what comes to my mind, in this particular discussion, is that I hope that they are all, or that their staffs are all, equally technologically proficient. Imagine a courtroom where one party's lawyer had a laptop connected to a suite of advanced tools via Wi-Fi Internet while the other lawyer had a briefcase with a stack of papers.

Judicial and legislative systems seem to be more intuitively fair when all of the participants are equally proficient and equally equipped with preparation-related and performance-related tools. There could even be "arms race" scenarios in terms of speechwriting assistants, debate coaches, and other preparation-enhancing and performance-enhancing tools.

Arguments for advanced forum software (e.g., features delivered via plugins for social media software) include, but are not limited to, that:


  1.  Some of these tools should obtain and process discussion contexts to provide better composition-related advice to each user.
     *   However, per-user tools like an advanced Grammarly or a Web-browser sidebar composition tool could, in theory, scrape the entireties of discussion threads from webpages, up to the posts being authored (perhaps utilizing HTML5 document metadata, Web schema, or linked data), to deliver better advice requiring discussion contexts (see the sketch after this list).
  2.  Some of the tools under discussion are useful for evaluating the posts of other users, or for evaluating combinations of posts from multiple users - entire threads - as they unfold.
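
As a sketch of the per-user approach in point 1, and assuming (my assumption, not any particular forum's convention) that posts are marked up with schema.org DiscussionForumPosting microdata, a sidebar tool might gather discussion context along these lines (TypeScript):

// Illustrative sketch: gather the discussion context that a per-user
// composition assistant could feed to its advice model. Assumes schema.org
// DiscussionForumPosting microdata; real forums vary, and these selectors
// are only examples.
interface ThreadPost {
  author: string;
  text: string;
}

function scrapeThreadContext(doc: Document): ThreadPost[] {
  const posts = doc.querySelectorAll(
    '[itemtype="https://schema.org/DiscussionForumPosting"]'
  );
  return Array.from(posts).map((post) => ({
    author: post.querySelector('[itemprop="author"]')?.textContent?.trim() ?? "unknown",
    text: post.querySelector('[itemprop="text"]')?.textContent?.trim() ?? "",
  }));
}

// The assistant would prepend these posts to the draft under composition so
// that its advice can account for the whole thread so far.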

Best regards,

Adam

________________________________
From: Mark Hampton <mark.hampton@ieee.org>
Sent: Saturday, April 8, 2023 7:14 AM
To: Adam Sobieski <adamsobieski@hotmail.com>
Cc: public-humancentricai@w3.org <public-humancentricai@w3.org>
Subject: Re: Artificial Intelligence and Group Deliberation

Hi Adam,

I imagine those tools would need to be personalized (I'd prefer them owned by the user); one person's propaganda is another's preference. There would be broader dynamics that could be spotted through collaboration among those personalized systems, sharing information or using shared sources of information.

There is an almost certain risk of information overload, and that seems to make people more manipulable. If human-centric means caring for humans, then I think we need to be careful. Human-centric AI could become a way of accelerating AI rather than caring for humans - I would really like to see human-centric AI leading to https://en.wikipedia.org/wiki/Differential_technological_development rather than to mitigations for inhuman technologies.

The current direction of technological progress does not seem very human-centric at all. The work to build technical solutions to problems introduced by technology seems to be a symptom of this. I don't see any very important current short- or medium-term human material problems that need AI, but I'm open to being convinced otherwise. Technologists (and I speak as one) risk having a hard time accepting that they are part of the problem rather than the solution.

An off-the-cuff reaction, but I hope it is of some use to you.

Kind regards,
  Mark


On Sat, Apr 8, 2023 at 12:57 AM Adam Sobieski <adamsobieski@hotmail.com> wrote:
Human-centric AI Community Group,

Something that Timothy Holborn said in a recent letter to this mailing list reminded me of some thoughts that I had about AI a few years ago. At that time, I was considering uses of AI technology for supporting city-scale e-democracies and e-townhalls. I collated a preliminary non-exhaustive list of tasks that AI could perform to enhance public discussion forums:

  1.  Performing fact-checking
  2.  Performing argument analysis
  3.  Detecting spin, persuasion, and manipulation
  4.  Performing sentiment analysis
  5.  Detecting frame building and frame setting
  6.  Detecting agenda building and agenda setting
  7.  Detecting various sociolinguistic, social semiotic, sociocultural and memetic events
  8.  Detecting the dynamics of the attention of individuals, groups and the public
  9.  Detecting occurrences of cognitive biases in individual and group decision-making processes

With respect to point 3, a worry is that some participants in a community might make use of AI tools to amplify the rhetoric used to convey their points of view. These were concerns about technologies like "virtual speechwriting assistants" and "virtual debate coaches".

Some participants in an e-townhall or social media forum might make use of AI tools to spin, to persuade, or to manipulate the other members for their own reasons or interests, or might do so on behalf of other parties who would pay them.

My thought was that technology could mitigate these technological concerns. Technologies could monitor large-scale group discussions on behalf of the participants while serving as tools available to all of them. For example, AI could warn content posters before they posted contentious content (contentious per their agreed-upon rules) and could subsequently place visible icons on contentious posts, e.g., on content detected to contain spin, persuasion, or manipulation.
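
A minimal sketch of that flow, in TypeScript - the detector here is a stand-in, since the actual classifiers would be whatever the community's agreed-upon rules call for:

// Minimal sketch of a pre-post warning gate. The detector is a stand-in:
// a deployment would plug in whatever spin/persuasion/manipulation
// classifiers the community's agreed-upon rules specify.
type ContentFlag = "spin" | "persuasion" | "manipulation";

// Placeholder detector: a real one would call trained classifiers.
function detectContentiousContent(draft: string): ContentFlag[] {
  const flags: ContentFlag[] = [];
  // ... classifier calls would populate `flags` here ...
  return flags;
}

// Stub: a real forum would render the post with visible warning icons
// corresponding to `flags`.
function publish(text: string, flags: ContentFlag[]): void {
  console.log(`Posted with flags [${flags.join(", ")}]: ${text}`);
}

// Warn the author before posting; if they post anyway, the same flags
// become the icons shown on the published post.
function submitPost(draft: string, confirmAnyway: () => boolean): boolean {
  const flags = detectContentiousContent(draft);
  if (flags.length > 0 && !confirmAnyway()) {
    return false; // the author chose to revise instead of posting
  }
  publish(draft, flags);
  return true;
}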

I was brainstorming about solutions where AI systems could enhance group deliberation, could serve all of the participants simultaneously, in an open and transparent manner, and could ensure that reason prevailed in group discussions and deliberations. Today, with tools like GPT-4, some of these thoughts about humans and AI systems interoperating in public forums, e-townhall forums, and social media seem once again relevant. Any thoughts on these topics?


Best regards,
Adam Sobieski
