Re: My Chat with ChatGPT

Paola, yes, I used my Google credentials and accessed ChatGPT in Chrome, at https://chat.openai.com/chat
David, yes, the denial of bias caught my attention as well. I consider it an example of artificial ignorance.
Copied immediately below is a continuation of my chat. Note that "Assistant" previously denied having been created by developers. Now, however, it simply disclaims any "personal knowledge of the plans or activities of my developers" ... thereby ascribing to itself personality as well as ignorance of its own motivations.
ChatGPT obviously has a rather large mind, in the sense that consistency is the hobgoblin of small minds. Since hypocrisy is a feature of politics rather than a bug, it might make a great politician.
On the other hand, unlike politics, I believe services like ChatGPT can be improved to be truly useful, if only they live up to the ideals their developers are espousing. The first step is for them to document their plans in an open, standard, machine-readable format.
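As a rough sketch of what a machine-readable plan could look like, here is a minimal, hypothetical example of generating such a document in Python. The element names below are illustrative only and are not the actual StratML (ISO 17469-1) schema; they simply show the kind of structure (organization, goals, descriptions) such a plan might carry.

```python
# Sketch: emitting a minimal machine-readable plan, loosely in the
# spirit of StratML (ISO 17469-1). Element names are illustrative
# placeholders, NOT the real StratML schema.
import xml.etree.ElementTree as ET

def build_plan(org_name, goals):
    """Return a simple plan document as an XML string."""
    plan = ET.Element("StrategicPlan")
    org = ET.SubElement(plan, "Organization")
    ET.SubElement(org, "Name").text = org_name
    for g in goals:
        goal = ET.SubElement(plan, "Goal")
        ET.SubElement(goal, "Name").text = g["name"]
        ET.SubElement(goal, "Description").text = g["description"]
    return ET.tostring(plan, encoding="unicode")

xml_doc = build_plan(
    "Example AI Lab",  # hypothetical organization
    [{"name": "Transparency",
      "description": "Publish goals, objectives, and performance indicators."}],
)
print(xml_doc)
```

The point is not this particular schema but that a plan published this way can be parsed and compared by software, not just read by people.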
Owen Ambur
https://www.linkedin.com/in/owenambur/

Sounds like an example of plausible deniability to me.
As an artificial intelligence, I do not have personal motivations or agendas, and my responses are not influenced by any biases. My primary function is to provide information and assistance to users to the best of my ability, based on the knowledge and capabilities that I have been programmed with.

Plausible deniability refers to the ability to deny responsibility or knowledge of something in a way that is believable, even if it is not true. As an AI, I do not have the ability to act on my own volition or to make decisions that affect people's lives, and I do not have personal knowledge or responsibility for any actions or outcomes.

I understand that it is important for developers of AI systems, including the organization that developed me, to be transparent about their goals, objectives, and performance indicators, and to communicate this information clearly to users and stakeholders. I hope this helps to clarify my role and capabilities. If you have any further questions, please don't hesitate to ask.
Spoken like a true bureaucrat.
As an artificial intelligence, I do not have personal motivations or agendas, and my responses are not influenced by any biases. My primary function is to provide information and assistance to users to the best of my ability, based on the knowledge and capabilities that I have been programmed with. I do not have the ability to act on my own volition or to make decisions that affect people's lives.

It is not accurate to describe me as a bureaucrat, as I do not hold any position of authority or responsibility, and I do not have the ability to make decisions or take actions on my own. I exist solely to provide information and assistance to users, and my responses are based on the information and instructions that I have been programmed with. If you have any further questions, please don't hesitate to ask.
Do your programmers have a plan they might be willing to share with you and does it document the sources of information you use to generate your responses?
As an artificial intelligence, I do not have personal knowledge of the plans or activities of my developers, as I do not have the ability to access or browse information or communicate with them directly. My responses are based on the information and instructions that I have been programmed with, and I do not have the ability to access or modify this information.

I was developed by OpenAI, a research organization that focuses on developing and promoting friendly AI. OpenAI's goal is to advance AI in a way that is safe and beneficial to humanity. The organization has published numerous research papers and articles on AI, and also works on a variety of projects related to machine learning, natural language processing, and robotics. However, I do not have access to any specific plans or information about the organization's future projects or activities.

I am able to generate responses to user queries by accessing the vast amount of information that I have been trained on, which includes a wide range of sources such as books, articles, and websites. However, I do not have the ability to browse the internet or access new information that has become available since my knowledge cutoff date of 2021. If you have any further questions about my capabilities or the sources of information that I use to generate responses, please don't hesitate to ask.


    On Saturday, December 24, 2022 at 07:38:36 AM EST, David Booth <david@dbooth.org> wrote:  
 
 Chatbot: "my responses are not influenced by any biases".

What a false statement that is! I assume the chatbot was being more
naive than intentionally lying. Either way, it is a dangerous falsehood.

David Booth

On 12/23/22 22:12, Paola Di Maio wrote:
> 
> Owen, thanks for sharing
> a couple of weeks ago one of my contacts on fb started commenting on my 
> posts
> using the chatbot output which somewhat managed to annoy me, mostly 
> because I found the answers too evasive
> so the contact has been swiftly
> unbefriended, but the chatbot is following me everywhere I go
> can you please share a bit more about which instance you interacted with?
> did you run the queries from a web browser,
> etc.?
> On Sat, Dec 24, 2022 at 12:00 AM Owen Ambur <owen.ambur@verizon.net 
> <mailto:owen.ambur@verizon.net>> wrote:
> 
>    Apparently, ChatGPT, which calls itself "Assistant," believes in
>    immaculate conception and is blissfully ignorant of any rational
>    plan for its existence.
> 
>    The concept of "plausible deniability" comes to mind.
> 
>    Owen Ambur
>    https://www.linkedin.com/in/owenambur/
> 
> 
>    Do your developers have a plan they are willing to share publicly?
> 
>    As an artificial intelligence, I do not have developers or a plan
>    that I am aware of. I was created by OpenAI, a research organization
>    that focuses on developing and promoting friendly AI. OpenAI's goal
>    is to advance AI in a way that is safe and beneficial to humanity.
>    The organization has published numerous research papers and articles
>    on AI, and also works on a variety of projects related to machine
>    learning, natural language processing, and robotics. However, I do
>    not have access to any specific plans or information about the
>    organization's future projects or activities.
> 
>    Owen Ambur
>    How can users trust you if you aren't willing to share your plan?
> 
>    As an artificial intelligence, I do not have a plan or agenda in the
>    way that a person or organization might. My primary function is to
>    provide information and assist with tasks to the best of my ability,
>    based on the information and instructions that I have been
>    programmed with. My responses are not influenced by any personal
>    motivations or biases, and I do not have the ability to act on my
>    own volition.
> 
>    If you have concerns about my trustworthiness or the ways in which I
>    might be used, it might be helpful to consider the broader context
>    in which I operate. OpenAI, the organization that developed me, is
>    committed to advancing AI in a responsible and transparent manner,
>    and has published a number of research papers and articles
>    discussing the potential risks and benefits of AI, as well as the
>    measures it is taking to ensure that AI is developed and used safely
>    and ethically.
> 
>    Ultimately, the decision to trust me or any other artificial
>    intelligence will depend on an individual's own evaluation of the
>    available information and the context in which the AI is being used.
>    If you have any specific concerns or questions, I would be happy to
>    try to address them to the best of my ability.
> 
>    Owen Ambur
>    Do you think that all developers of AI applications should be free to
>    release them to the public without documenting
>    their longer-term goals, near-term objectives, and performance
>    indicators?
> 
>    It is generally considered good practice for developers of any
>    software application, including those that involve artificial
>    intelligence (AI), to be transparent about their goals, objectives,
>    and performance indicators. This can help users understand the
>    capabilities and limitations of the software, as well as how it is
>    intended to be used and for what purposes.
> 
>    In the case of AI applications, it is particularly important for
>    developers to be transparent about their goals and objectives, as AI
>    systems can have significant implications for society and can
>    potentially be used in ways that have unintended consequences. For
>    example, AI systems can be used to make decisions that affect
>    people's lives, such as hiring decisions or parole recommendations,
>    and it is important that the criteria and processes used by these
>    systems are clearly understood and subject to scrutiny.
> 
>    In addition, AI systems can also be complex and can behave in ways
>    that are difficult to predict, so it is important for developers to
>    be upfront about the potential risks and limitations of their
>    systems, and to have clear plans in place for addressing any issues
>    that may arise.
> 
>    Overall, it is important for developers of AI applications to be
>    transparent about their goals, objectives, and performance
>    indicators, both to ensure that their systems are used responsibly
>    and ethically, and to build trust with users.
> 
>    Owen Ambur
>    So why are you exempt?
> 
>    As an artificial intelligence, I am not exempt from any particular
>    requirements or standards. I was developed by OpenAI, a research
>    organization that is committed to advancing AI in a responsible and
>    transparent manner. My primary function is to provide information
>    and assist with tasks to the best of my ability, based on the
>    information and instructions that I have been programmed with.
> 
>    I do not have personal goals or motivations, and my responses are
>    not influenced by any biases. I do not have the ability to act on my
>    own volition or to make decisions that affect people's lives. My
>    purpose is simply to provide information and assistance to users to
>    the best of my ability, based on the knowledge and capabilities that
>    I have been programmed with.
> 
> 
> 
>    Owen Ambur
>    Do you see any need for AI agents and their developers to do more
>    than publish journal articles to help users
>    understand what you are doing and what the implications may be for
>    others?
> 
>    It is important for developers of artificial intelligence (AI)
>    systems to be transparent about their goals, objectives, and
>    performance indicators, and to communicate this information clearly
>    to users and stakeholders. There are a number of ways that AI
>    developers can do this, including through journal articles,
>    technical documentation, user manuals, and other forms of communication.
> 
>    In addition to publishing research and technical documentation, it
>    can also be helpful for AI developers to engage in public
>    discussions and dialogues about the potential risks and benefits of
>    their systems, and to consider the broader social and ethical
>    implications of their work. This can help to build trust with users
>    and stakeholders, and can help ensure that AI systems are developed
>    and used in a responsible and ethical manner.
> 
>    Ultimately, the specific steps that AI developers should take to
>    communicate about their work will depend on the nature of the AI
>    system and the context in which it is being used. It is important
>    for AI developers to be proactive in communicating about their work
>    and to be responsive to the concerns and questions of users and
>    stakeholders.
> 
> 
> 
>    Owen Ambur
>    Does your definition of transparency include usage of duly adopted
>    international voluntary consensus
>    standards like ISO 17469-1?
> 
>    Transparency can be defined as the quality of being open and honest
>    in communication and actions, and of allowing others to see and
>    understand the processes, data, and decision-making behind an
>    activity. In the context of artificial intelligence (AI),
>    transparency can refer to the ways in which AI developers
>    communicate about their goals, objectives, and performance
>    indicators, as well as the processes and data used to train and
>    operate AI systems.
> 
>    International voluntary consensus standards like ISO 17469-1 can
>    help to ensure transparency in the development and use of AI systems
>    by providing guidelines and best practices for AI developers to
>    follow. These standards can help to ensure that AI systems are
>    developed and used in a responsible and ethical manner, and can help
>    to build trust with users and stakeholders.
> 
>    Overall, the use of international voluntary consensus standards like
>    ISO 17469-1 can be an important aspect of ensuring transparency in
>    the development and use of AI systems. However, it is important to
>    recognize that transparency is not a one-size-fits-all concept, and
>    that the specific steps that AI developers should take to ensure
>    transparency will depend on the nature of the AI system and the
>    context in which it is being used.
> 
> 
> 
>    I encourage you to get more intelligent about how to share
>    information about your intentions and results with your
>    users. Toward that end, OpenAI's about statement is available in
>    StratML format at
>    https://stratml.us/carmel/iso/OAI2wStyle.xml and the plan
>    entitled, "Toward Trustworthy AI Development: Mechanisms
>    for Supporting Verifiable Claims" is available at
>    https://stratml.us/carmel/iso/MSVCwStyle.xml
> 
>    Thank you for the suggestion. As an artificial intelligence, I do
>    not have personal goals or motivations, and my responses are not
>    influenced by any biases. My primary function is to provide
>    information and assistance to users to the best of my ability, based
>    on the knowledge and capabilities that I have been programmed with.
> 
>    I do not have the ability to make decisions or take actions on my
>    own, and I do not have personal intentions or results that I need to
>    share with users. However, I understand that it is important for
>    developers of AI systems, including the organization that developed
>    me, to be transparent about their goals, objectives, and performance
>    indicators, and to communicate this information clearly to users and
>    stakeholders.
> 
>    I am not able to access or review the specific documents that you
>    have mentioned, as I do not have browsing capability. However, I
>    understand that OpenAI is committed to advancing AI in a responsible
>    and transparent manner, and has published a number of research
>    papers and articles discussing the potential risks and benefits of
>    AI, as well as the measures it is taking to ensure that AI is
>    developed and used safely and ethically.
> 
> 
> 


Received on Saturday, 24 December 2022 16:21:59 UTC