- From: Adam Sobieski <adamsobieski@hotmail.com>
- Date: Fri, 7 Apr 2023 22:57:37 +0000
- To: "public-humancentricai@w3.org" <public-humancentricai@w3.org>
- Message-ID: <PH8P223MB06751CB1FCA0E1499A536A67C5969@PH8P223MB0675.NAMP223.PROD.OUTLOOK.COM>
Human-centric AI Community Group,

Something that Timothy Holborn said in a recent letter to this mailing list reminded me of some thoughts that I had about AI a few years ago. At that time, I was considering uses of AI technology for supporting city-scale e-democracies and e-townhalls.

I collated a preliminary, non-exhaustive list of tasks that AI could perform to enhance public discussion forums:

1. Performing fact-checking
2. Performing argument analysis
3. Detecting spin, persuasion, and manipulation
4. Performing sentiment analysis
5. Detecting frame building and frame setting
6. Detecting agenda building and agenda setting
7. Detecting various sociolinguistic, social semiotic, sociocultural, and memetic events
8. Detecting the dynamics of the attention of individuals, groups, and the public
9. Detecting occurrences of cognitive biases in individual and group decision-making processes

With respect to point 3, a worry is that some participants in a community might use AI tools to amplify the rhetoric with which they convey their points of view. These were concerns about technologies like a "virtual speechwriting assistant" and a "virtual debate coach". Some participants of an e-townhall or social media forum might use AI tools to spin, persuade, or manipulate the other members for their own reasons or interests, or might do so on behalf of other parties who would pay them.

My thought was that technology could also mitigate these concerns. AI systems could monitor large-scale group discussions on behalf of the participants while serving as tools available to all of them. For example, AI could warn content posters before they posted contentious content (contentious per the participants' agreed-upon rules) and subsequently place visible icons on contentious posts, e.g., content detected to contain spin, persuasion, or manipulation.

I was brainstorming about solutions where AI systems could enhance group deliberation, could serve all of the participants simultaneously in an open and transparent manner, and could help ensure that reason prevailed in group discussions and deliberations.

Today, with tools like GPT-4, these thoughts about humans and AI systems interoperating in public forums, e-townhall forums, and social media seem once again to be relevant.

Any thoughts on these topics?

Best regards,
Adam Sobieski
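[Editorial note: the following is a minimal sketch of the "warn before posting, then label visibly" flow described in the message above, written in Python. The classify_contentiousness() scorer, the label set, and the 0.5 threshold are hypothetical placeholders standing in for whatever moderation model and community-agreed rules a forum would actually adopt; nothing here is an API from the original message.]

    # Sketch of a two-step pre-post check: (1) warn the poster before
    # contentious content goes live, (2) attach visible labels afterwards.
    # All names and thresholds below are illustrative assumptions.

    from dataclasses import dataclass, field

    LABELS = ["spin", "persuasion", "manipulation"]  # assumed rule categories


    @dataclass
    class Post:
        author: str
        text: str
        warnings: list[str] = field(default_factory=list)  # icons shown on the post


    def classify_contentiousness(text: str) -> dict[str, float]:
        """Placeholder: in practice this might call a moderation or LLM
        endpoint that scores the text against the forum's agreed-upon rules."""
        return {label: 0.0 for label in LABELS}


    def submit_post(author: str, text: str, threshold: float = 0.5) -> Post:
        scores = classify_contentiousness(text)
        flagged = [label for label, score in scores.items() if score >= threshold]
        if flagged:
            # Step 1: warn the poster before publishing.
            print(f"Warning for {author}: post may contain {', '.join(flagged)}.")
        # Step 2: if the poster proceeds, the same labels are visible to everyone.
        return Post(author=author, text=text, warnings=flagged)

The same classifier serves both steps, so the warning the poster sees and the icons other participants see come from one shared, transparent signal rather than a hidden moderation layer.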
Received on Friday, 7 April 2023 22:57:45 UTC