Re: AI catfishing [was Re: ChatGPT and ontologies]

Patrick,
thanks a lot for sharing. We can keep these principles in mind and massage
them into standards (when we get there).
I signed up for the NIST effort but did not have the time to take part
beyond initial feedback on a first draft. I am also concerned that, given
there is no scarcity of just laws in every sphere that have no consequence,
no effect, and that cannot be enforced without expensive lawyers, I cannot
see myself putting my energy into creating more legislation.
As @simon points out with the example of the Mechanical Turk (thanks,
Simon), humanity has been at deceit for a long time; there is too much
deception, including in and around the judicial system.

I witness not only the blurring of the boundaries between what is AI and
what is not, but also between what is artificial and what is natural on all
fronts (materials, foods, medicines, truths). I also routinely witness the
just principles of the law, and honest people in societies and governing
bodies, misled, misrepresented and misused to contravene the principles
underlying the laws.
When it comes to enforcing the law, the boundary is precisely where
miscarriage of justice occurs. By interpretation of a definition, omission
of a fact, or acquisition of evidence using deceit, miscarriage of justice
is routinely enforced. This is why the legal system is, ultimately, just
another false pretense to do wrong for those who can afford it.

I see systemic deviation on all fronts, which lessens my confidence in
humanity and its governing processes and bodies, and pushes me personally
towards transcendence (seeking the truth where I can find it).
I have no choice but to concentrate on the cultivation of natural
intelligence, which arises spontaneously, and of the free will that
follows, often going against convention.
I do not have the resources to put effort into developing laws that are
constantly manipulated and used against the underlying legal principles by
a society which is becoming increasingly confused about what is what, and
why, and how, and which has evolved its intelligence only in part (probably
constrained by common sense?).

But I am glad to hear that the system may work for others, though.
Keep pushing that stuff to us, so that when we come to make a model we can
keep that in mind, and do not forget to include the references :-)
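To make the labelling idea from this thread concrete: a machine-readable
provenance label could be attached to a piece of generated content as
structured metadata. Below is a minimal sketch in Python, assuming
schema.org vocabulary serialized as JSON-LD; the choice of properties
(`creator`, `isBasedOn`) is illustrative only, not an agreed standard for
AI-content labelling.

```python
import json

def label_ai_content(text, model_name, sources):
    """Wrap generated text in a schema.org-style JSON-LD record that
    declares a software generator and its data sources."""
    return {
        "@context": "https://schema.org",
        "@type": "CreativeWork",
        "text": text,
        # Declare the generator explicitly, instead of a human author
        "creator": {"@type": "SoftwareApplication", "name": model_name},
        # Mandatory data sources, analogous to citations in scholarly papers
        "isBasedOn": sources,
    }

record = label_ai_content(
    "Example AI-generated paragraph.",
    "SomeLanguageModel-1.0",
    ["https://example.org/source-dataset"],
)
print(json.dumps(record, indent=2))
```

A consumer (or a regulator's audit tool) could then check for the presence
of a `SoftwareApplication` creator to distinguish AI-generated content from
human-authored content, which is the enforcement hook the labelling
proposal needs.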


On Mon, Feb 13, 2023 at 6:46 PM Patrick Besner <patrick@novosteer.com>
wrote:

> Paola,
>
> I apologize for my lack of participation/contribution in the previous
> conversations, but I have been following the discussions closely.  As a
> knowledgeable researcher in the field of artificial intelligence, I share
> your concerns about the challenges of legislating AI. It is an honor to
> have participated and contributed to the Trustworthy and Responsible AI -
> AI Risk Management Framework workshops and Blueprint for an AI Bill of
> Rights workshops hosted by the National Institute of Standards and
> Technology. The Blueprint for an AI Bill of Rights, published by the
> White House Office of Science and Technology Policy in 2022, represents a
> significant step forward in promoting the responsible use and development
> of artificial intelligence. The principles outlined in the blueprint aim
> to ensure that AI systems are used in a way that respects human rights,
> values, and dignity, and that the benefits of AI are widely shared.
> Participation in these workshops further emphasizes our commitment to
> responsible AI practices and to shaping a future where AI is used for the
> betterment of society.
>
> The Blueprint for an AI Bill of Rights, which can be found at
> https://www.whitehouse.gov/ostp/ai-bill-of-rights/what-is-the-blueprint-for-an-ai-bill-of-rights/,
> is a commendable effort; however, the complex nature of AI and its rapidly
> evolving technology present significant challenges in creating effective
> and enforceable legislation.
>
> Labeling AI-generated content and making data sources mandatory to some
> extent aligns with accepted human conventions and could bring significant
> benefits. However, the reality of research ethics and scholarly publishing
> is far from perfect, and even human-generated content can be plagiarized or
> fabricated. This highlights the need for a comprehensive approach to
> regulating AI that takes into account the unique challenges posed by this
> technology.
>
> The blurring of boundaries between AI and human-generated content raises
> valid questions about what constitutes AI-generated content and what
> standards should apply. It is crucial to have clear definitions and
> guidelines to ensure that AI systems are held to appropriate standards and
> that the rights and values of individuals are protected.
>
> The development of effective legislation for AI is a complex and
> challenging task, but it is essential to continue exploring and addressing
> these issues to ensure the responsible use and development of AI. Further
> research and discussion are necessary to determine the best approach to
> regulating AI and balancing the benefits of technology with the need to
> protect human rights and values.
>
> ********************************************************************
> * Comprehensive recommendation for legislating AI.*
>
> Introduction:
>
> Artificial Intelligence (AI) is a rapidly advancing field with immense
> potential to transform many aspects of our lives. However, with this
> potential comes a need for clear and comprehensive legislation to ensure
> that AI is developed, used, and regulated in a manner that is ethical,
> responsible, and in the best interests of society.
>
> The aim of this paper is to provide a comprehensive recommendation for
> legislating AI. The paper will outline the key principles that should form
> the basis of such legislation, and provide a detailed description of what
> such legislation might look like in practice.
>
> Key Principles for AI Legislation:
>
> Transparency: AI systems should be transparent in their decision-making
> processes and operations. This includes providing clear explanations for
> how decisions are made and how data is used.
>
> Responsibility: AI systems should be designed and used in a manner that is
> accountable and responsible. This includes ensuring that AI systems do not
> cause harm or discriminate against certain groups, and that appropriate
> measures are in place to address such harm if it occurs.
>
> Privacy: AI systems should respect the privacy of individuals and be
> designed in a manner that protects personal data. This includes ensuring
> that data is collected and used in a manner that is compliant with privacy
> laws and regulations.
>
> Safety: AI systems should be designed and used in a manner that is safe
> and does not pose a risk to public safety. This includes ensuring that AI
> systems are tested and validated prior to deployment, and that appropriate
> measures are in place to mitigate any potential risks.
>
> Fairness: AI systems should be designed and used in a manner that is fair
> and unbiased. This includes ensuring that AI systems do not discriminate
> against certain groups, and that appropriate measures are in place to
> address any instances of discrimination.
>
> Legislation for AI:
>
> Definition of AI: The legislation should provide a clear definition of AI,
> including its various forms and applications. This will help to ensure that
> the legislation is comprehensive and applies to the full range of AI
> technologies and systems.
>
> Regulatory Authority: The legislation should establish a regulatory
> authority responsible for overseeing the development and deployment of AI
> systems. This authority should have the power to enforce the principles
> outlined in the legislation, and to impose penalties on organizations that
> violate these principles.
>
> Licensing and Certification: The legislation should require that
> organizations developing and deploying AI systems obtain a license or
> certification from the regulatory authority. This will ensure that AI
> systems are developed and used in a manner that is compliant with the
> principles outlined in the legislation.
>
> Data Management: The legislation should require organizations to manage
> data used in AI systems in a manner that is transparent, responsible, and
> in compliance with privacy laws and regulations. This will help to ensure
> that data is used ethically and in the best interests of society.
>
> Testing and Validation: The legislation should require organizations to
> test and validate AI systems prior to deployment, and to demonstrate that
> these systems are safe and do not pose a risk to public safety.
>
> Liability: The legislation should establish a clear framework for
> determining liability in the event that an AI system causes harm or
> violates the principles outlined in the legislation. This will help to
> ensure that organizations are held accountable for the systems they develop
> and use.
>
> By providing clear definitions, establishing a regulatory authority, and
> requiring licensing and certification, organizations can be held
> accountable for their use of AI and ensure that the technology is used in a
> responsible and ethical manner. The principles of transparency,
> responsibility, privacy, safety, and fairness provide a strong foundation
> for AI legislation, and should be central to any efforts to regulate this
> rapidly evolving field.
>
> I cannot emphasize enough the importance of international cooperation in
> the regulation of AI. With AI being a rapidly advancing field with global
> implications, it is important to have a coordinated approach to its
> regulation. This could involve the creation of international agreements or
> the establishment of international regulatory bodies to ensure that AI is
> developed, used, and regulated in a manner that is consistent across
> borders.
>
> In conclusion, it is important to continue exploring and addressing the
> challenges of legislating AI, and to engage in a constructive dialogue
> about the future of AI and its regulation. The development of effective
> legislation for AI is essential to ensure that the technology is used in a
> manner that is in the best interests of society and protects the rights and
> values of individuals.
>
> Best regards,
> Patrick Besner
>
>
>
>
>
> On Mon, Feb 13, 2023 at 12:19 AM Paola Di Maio <paola.dimaio@gmail.com>
> wrote:
>
>> When it comes to legislating AI, I still have not made up my mind as to
>> exactly what such legislation would look like.
>>
>> I agree that labelling AI-generated content would be feasible and highly
>> beneficial.
>> I would also make the data sources mandatory (to some extent),
>> in the same way that sources need to be cited in scholarly papers
>> (artificial intelligence should follow the accepted human conventions).
>>
>> The problem is:
>>
>> a) even humans plagiarize sources and fabricate data and get away with
>> it. Even if, on the face of it, research publication ethics is promoted,
>> the reality is that the very same journals that declare they adhere to
>> research ethics blatantly plagiarize, and nobody can do anything about it
>> (because of flaws in scholarly publishing, because lawyers are expensive,
>> and because publishing houses are part of powerful cartels, etc.)
>>
>> Can we demand/expect that AI adhere to better standards than humans?
>> b) there is an increasing blurring of the boundaries around what is AI.
>> For example, how is data actually machine-generated or processed in the
>> first place?
>>
>> On Sat, Feb 11, 2023 at 11:49 PM David Booth <david@dbooth.org> wrote:
>>
>>> On 2/11/23 05:10, Dave Reynolds wrote:
>>> > On 10/02/2023 18:01, David Booth wrote:
>>> >> I personally think we need legislation against AI catfishing, i.e.,
>>> AI
>>> >> *pretending* to be human.
>>> >>
>>> >>   - AI-generated content should be clearly labeled as such.
>>> >>
>>> >>   - Bots should be clearly labeled as such.
>>> >
>>> > A worthy aim though I'm skeptical any such legislation could be
>>> usefully
>>> > enforced.
>>>
>>> Certainly not 100%, but I think they could still help reduce the
>>> problem, particularly if they're targeted at the civil level, which has
>>> a much lower burden-of-proof threshold than the criminal level.  Laws
>>> regarding fraud, false advertising and accurate product labeling all
>>> come to mind as examples of laws that help, even if they're not 100%
>>> usefully enforced.
>>>
>>> Best wishes,
>>> David Booth
>>>
>>>
>
> --
> Patrick Besner
> President, Chief Executive Officer (CEO)
> Novosteer Technologies Corp
>
> M 1-613-363-1498 | P 1-888-983-8333
> E patrick@novosteer.com
>
> 191 Lombard Ave, 4th Floor
> Winnipeg, MB, Canada, R3B 0X1
> www.novosteer.com
>

Received on Wednesday, 15 February 2023 04:28:59 UTC