Re: AI catfishing [was Re: ChatGPT and ontologies]

If something can be used for misinformation, it will be used for
misinformation. Despite the current hype around LLMs, they remain far more
applicable to misinformation than to establishing facts with certainty.
That will remain true for the foreseeable future.

On Thu, Feb 16, 2023, 9:26 AM Owen Ambur <owen.ambur@verizon.net> wrote:

> The White House/OSTP "blueprint" for an AI bill of rights is now available
> in StratML format at https://stratml.us/drybridge/index.htm#AIBR
>
> From my perspective, these are key elements:
>
> Objective 4.5: Reporting
> <https://stratml.us/docs/AIBR.xml#_4b4a63de-adb3-11ed-8515-bde60383ea00>
> ~ Regularly provide public reports in a clear and machine-readable manner
>
> Objective 6.5.1.5: Goals
> <https://stratml.us/docs/AIBR.xml#_4b4b3868-adb3-11ed-8515-bde60383ea00>
> ~ Document the goals and assess achievement of them
>
> Objective 6.5.1.8: Clarity & Machine-Readability
> <https://stratml.us/docs/AIBR.xml#_4b4b475e-adb3-11ed-8515-bde60383ea00>
> ~ Issue reports in a clear and machine-readable manner
>
>
> The developers of AI applications should not need to be directed by law to
> realize those objectives.  They should hold themselves accountable, and if
> they fail to do so, their peers in the AI community should call them out
> for engaging in anti-social behavior.
>
> In the meantime, the last thing we need is for them to be loading us down
> with more walls of text, much less false images and videos.  While ChatGPT
> says that's a subjective question, You.com enthusiastically acknowledges it:
> https://www.linkedin.com/pulse/what-world-needs-now-owen-ambur/
>
> Based upon their responses, which of them do you think I find more
> trustworthy?
>
> BTW, the same is true of laws and regulations.  The last thing we need is
> for politicians to be loading us down with more of them in obscure
> narrative formats that no one has time to read, much less comprehend and
> act upon.  They are poor substitutes for model performance plans rendered
> in open, standard, machine-readable format, with clearly specified goals,
> objectives, stakeholder roles, and performance indicators.
>
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
>
>
> On Wednesday, February 15, 2023 at 02:08:06 AM EST,
> ProjectParadigm-ICT-Program <metadataportals@yahoo.com> wrote:
>
>
> I have been following discussions in both W3C and other mailing lists and
> online about A(G)I and ChatGPT, from the perspective of mathematics,
> computer science, philosophy, academia and academic publishing and data
> rights and privacy watchdogs.
>
> It should be clear by now that the issues surrounding the development and
> use of A(G)I have so many angles that we need clarity about the relevant
> technologies and standards; the fields in which they are used and the
> standards, guidelines, or legislation that apply there; the types of
> human-AI agent interactions; and the user agreements and advisories for
> them.
>
> The arms race started by Big Internet companies eager to incorporate
> ChatGPT into online search has compounded the issues of trustworthiness,
> reliability, and accountability in the use of AI.
>
> It wouldn't be a bad idea to create an overview mindmap with accompanying
> StratML plan for tackling the issues.
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
> On Wednesday, February 15, 2023 at 12:29:15 AM AST, Paola Di Maio <
> paoladimaio10@gmail.com> wrote:
>
>
> Patrick
> thanks a lot for sharing; we can keep these principles in mind and
> massage them into standards (when we get there).
> I signed up for the NIST effort but did not have the time to take part
> beyond initial feedback on a first draft. I am also concerned that, given
> there is no scarcity of just laws, in every sphere, that have no
> consequence, no effect, and that cannot be enforced without expensive
> lawyers, I cannot see myself putting my energy into creating more
> legislation.
> As @simon points out with the example of the Mechanical Turk (thanks,
> Simon), humanity has engaged in deceit for a long time; there is too much
> deception, including in and around the judicial system.
>
> I witness not only the blurring of the boundaries between what is AI and
> what is not, but also between what is artificial and what is natural on
> all fronts (materials, foods, medicines, truths). I also routinely witness
> the just principles of the law, and honest people in societies and
> governing bodies, being misled, misrepresented and misused to contravene
> the principles underlying the laws.
> When it comes to enforcing the law, the boundary is precisely where
> miscarriage of justice occurs. By interpretation of a definition, omission
> of a fact, or acquisition of evidence through deceit, miscarriage of
> justice is routinely enforced. This is why the legal system is ultimately
> just another false pretense to do wrong for those who can afford it.
>
> I see systemic deviation on all fronts, which lessens my confidence in
> humanity and its governing processes and bodies, and pushes me personally
> toward transcendence (seeking the truth where I can find it).
> I have no choice but to concentrate on cultivating the natural
> intelligence that arises spontaneously, and the free will that follows,
> often going against convention.
> I do not have the resources to put effort into developing laws that are
> constantly manipulated and turned against legal principles by a society
> that is becoming increasingly confused about what is what, and why and
> how, and has evolved its intelligence only in part (probably constrained
> by common sense?).
>
> But I am glad to hear that the system may work for others, though.
> Keep pushing that material to us, so that when we come to make a model we
> can keep it in mind, and do not forget to include the references :-)
>
>
> On Mon, Feb 13, 2023 at 6:46 PM Patrick Besner <patrick@novosteer.com>
> wrote:
>
> Paola,
>
> I apologize for my lack of participation/contribution in the previous
> conversations, but I have been following the discussions closely.  As a
> knowledgeable researcher in the field of artificial intelligence, I share
> your concerns about the challenges of legislating AI. It is an honor to
> have participated and contributed to the Trustworthy and Responsible AI -
> AI Risk Management Framework workshops and Blueprint for an AI Bill of
> Rights workshops hosted by the National Institute of Standards and
> Technology. The Blueprint for an AI Bill of Rights, released by the White
> House OSTP in 2022, represents a significant step forward in promoting the
> responsible use and development of artificial intelligence. The principles
> outlined in the blueprint aim to ensure that AI systems are used in a way
> that respects human rights, values, and dignity, and that the benefits of
> AI are widely shared. Participating in these workshops further emphasizes
> our commitment to responsible AI practices and to shaping a future where
> AI is used for the betterment of society.
>
> The Blueprint for an AI Bill of Rights, which can be found at
> https://www.whitehouse.gov/ostp/ai-bill-of-rights/what-is-the-blueprint-for-an-ai-bill-of-rights/,
> is a commendable effort; however, the complex nature of AI and its rapidly
> evolving technology present significant challenges in creating effective
> and enforceable legislation.
>
> Labeling AI-generated content and making data sources mandatory align, to
> some extent, with accepted human conventions and could bring significant
> benefits. However, the reality of research ethics and scholarly publishing
> is far from perfect, and even human-generated content can be plagiarized or
> fabricated. This highlights the need for a comprehensive approach to
> regulating AI that takes into account the unique challenges posed by this
> technology.
>
> The blurring of boundaries between AI and human-generated content raises
> valid questions about what constitutes AI-generated content and what
> standards should apply. It is crucial to have clear definitions and
> guidelines to ensure that AI systems are held to appropriate standards and
> that the rights and values of individuals are protected.
>
> The development of effective legislation for AI is a complex and
> challenging task, but it is essential to continue exploring and addressing
> these issues to ensure the responsible use and development of AI. Further
> research and discussion are necessary to determine the best approach to
> regulating AI and balancing the benefits of technology with the need to
> protect human rights and values.
>
> ********************************************************************
> * Comprehensive recommendation for legislating AI.*
>
> Introduction:
>
> Artificial Intelligence (AI) is a rapidly advancing field with immense
> potential to transform many aspects of our lives. However, with this
> potential comes a need for clear and comprehensive legislation to ensure
> that AI is developed, used, and regulated in a manner that is ethical,
> responsible, and in the best interests of society.
>
> The aim of this paper is to provide a comprehensive recommendation for
> legislating AI. The paper will outline the key principles that should form
> the basis of such legislation, and provide a detailed description of what
> such legislation might look like in practice.
>
> Key Principles for AI Legislation:
>
> Transparency: AI systems should be transparent in their decision-making
> processes and operations. This includes providing clear explanations for
> how decisions are made and how data is used.
>
> Responsibility: AI systems should be designed and used in a manner that is
> accountable and responsible. This includes ensuring that AI systems do not
> cause harm or discriminate against certain groups, and that appropriate
> measures are in place to address such harm if it occurs.
>
> Privacy: AI systems should respect the privacy of individuals and be
> designed in a manner that protects personal data. This includes ensuring
> that data is collected and used in a manner that is compliant with privacy
> laws and regulations.
>
> Safety: AI systems should be designed and used in a manner that is safe
> and does not pose a risk to public safety. This includes ensuring that AI
> systems are tested and validated prior to deployment, and that appropriate
> measures are in place to mitigate any potential risks.
>
> Fairness: AI systems should be designed and used in a manner that is fair
> and unbiased. This includes ensuring that AI systems do not discriminate
> against certain groups, and that appropriate measures are in place to
> address any instances of discrimination.
>
> Legislation for AI:
>
> Definition of AI: The legislation should provide a clear definition of AI,
> including its various forms and applications. This will help to ensure that
> the legislation is comprehensive and applies to the full range of AI
> technologies and systems.
>
> Regulatory Authority: The legislation should establish a regulatory
> authority responsible for overseeing the development and deployment of AI
> systems. This authority should have the power to enforce the principles
> outlined in the legislation, and to impose penalties on organizations that
> violate these principles.
>
> Licensing and Certification: The legislation should require that
> organizations developing and deploying AI systems obtain a license or
> certification from the regulatory authority. This will ensure that AI
> systems are developed and used in a manner that is compliant with the
> principles outlined in the legislation.
>
> Data Management: The legislation should require organizations to manage
> data used in AI systems in a manner that is transparent, responsible, and
> in compliance with privacy laws and regulations. This will help to ensure
> that data is used ethically and in the best interests of society.
>
> Testing and Validation: The legislation should require organizations to
> test and validate AI systems prior to deployment, and to demonstrate that
> these systems are safe and do not pose a risk to public safety.
>
> Liability: The legislation should establish a clear framework for
> determining liability in the event that an AI system causes harm or
> violates the principles outlined in the legislation. This will help to
> ensure that organizations are held accountable for the systems they develop
> and use.
>
> By providing clear definitions, establishing a regulatory authority, and
> requiring licensing and certification, organizations can be held
> accountable for their use of AI and ensure that the technology is used in a
> responsible and ethical manner. The principles of transparency,
> responsibility, privacy, safety, and fairness provide a strong foundation
> for AI legislation, and should be central to any efforts to regulate this
> rapidly evolving field.
>
> I cannot emphasize enough the importance of international cooperation in
> the regulation of AI. With AI being a rapidly advancing field with global
> implications, it is important to have a coordinated approach to its
> regulation. This could involve the creation of international agreements or
> the establishment of international regulatory bodies to ensure that AI is
> developed, used, and regulated in a manner that is consistent across
> borders.
>
> In conclusion, it is important to continue exploring and addressing the
> challenges of legislating AI, and to engage in a constructive dialogue
> about the future of AI and its regulation. The development of effective
> legislation for AI is essential to ensure that the technology is used in a
> manner that is in the best interests of society and protects the rights and
> values of individuals.
>
> Best regards,
> Patrick Besner
>
>
>
>
>
> On Mon, Feb 13, 2023 at 12:19 AM Paola Di Maio <paola.dimaio@gmail.com>
> wrote:
>
> When it comes to legislating AI, I still have not made up my mind as to
> exactly what such legislation would look like.
>
> I agree that labelling AI-generated content would be feasible and highly
> beneficial.
> I would also make the data sources mandatory (to some extent), in the same
> way that sources need to be cited in scholarly papers (artificial
> intelligence should follow accepted human conventions).
>
> The problem is:
>
> a) Even humans plagiarize sources and fabricate data and get away with it.
> Even though research publication ethics is promoted on the face of it, the
> reality is that the very same journals that declare adherence to research
> ethics blatantly plagiarize, and nobody can do anything about it (because
> of flaws in scholarly publishing, because lawyers are expensive, and
> because publishing houses are part of powerful cartels, etc.).
>
> Can we demand or expect that AI adhere to better standards than humans?
>
> b) There is an increasing blurring of the boundaries around what counts
> as AI; for example, whether data is machine-generated or machine-processed
> in the first place.
>
> On Sat, Feb 11, 2023 at 11:49 PM David Booth <david@dbooth.org> wrote:
>
> On 2/11/23 05:10, Dave Reynolds wrote:
> > On 10/02/2023 18:01, David Booth wrote:
> >> I personally think we need legislation against AI catfishing, i.e., AI
> >> *pretending* to be human.
> >>
> >>   - AI-generated content should be clearly labeled as such.
> >>
> >>   - Bots should be clearly labeled as such.
> >
> > A worthy aim though I'm skeptical any such legislation could be usefully
> > enforced.
>
> Certainly not 100%, but I think they could still help reduce the
> problem, particularly if they're targeted at the civil level, which has
> a much lower burden-of-proof threshold than the criminal level.  Laws
> regarding fraud, false advertising and accurate product labeling all
> come to mind as examples of laws that help, even if they're not 100%
> usefully enforced.
>
> Best wishes,
> David Booth
>
>
>
> --
> Patrick Besner
> President, Chief Executive Officer (CEO)
> Novosteer Technologies Corp
>
> M 1-613-363-1498 | P 1-888-983-8333
> E patrick@novosteer.com
>
> 191 Lombard Ave, 4th Floor
> Winnipeg, MB, Canada, R3B 0X1
> www.novosteer.com
>
>

Received on Thursday, 16 February 2023 18:00:07 UTC