Re: My Chat with ChatGPT

How was this update validated to be the correct information?

On Mon, Jan 2, 2023, 11:33 AM Naval Sarda <nsarda@epicomm.net> wrote:

> I was able to correct some information ChatGPT was sharing.
>
> I asked ChatGPT for instructions to reboot a Roomba i3. It gave wrong
> instructions; they were for resetting the Roomba, not rebooting it.
>
> Then I found the correct information via Google on iRobot's website and
> pasted the URL into ChatGPT. It changed the rebooting instructions, but they
> were still incorrect. Once I copied and pasted the correct instructions from
> iRobot's website, it started giving correct instructions.
>
> Naval
> On 02/01/23 9:10 pm, ProjectParadigm-ICT-Program wrote:
>
>
> https://www.scientificamerican.com/article/chatgpt-explains-why-ais-like-chatgpt-should-be-regulated/
>
> Milton Ponson
> GSM: +297 747 8280
> PO Box 1154, Oranjestad
> Aruba, Dutch Caribbean
> Project Paradigm: Bringing the ICT tools for sustainable development to
> all stakeholders worldwide through collaborative research on applied
> mathematics, advanced modeling, software and standards development
>
>
> On Friday, December 23, 2022 at 12:00:36 PM AST, Owen Ambur
> <owen.ambur@verizon.net> wrote:
>
>
> Apparently, ChatGPT, which calls itself "Assistant," believes in
> immaculate conception and is blissfully ignorant of any rational plan for
> its existence.
>
> The concept of "plausible deniability" comes to mind.
>
> Owen Ambur
> https://www.linkedin.com/in/owenambur/
>
>
> Do your developers have a plan they are willing to share publicly?
>
> As an artificial intelligence, I do not have developers or a plan that I
> am aware of. I was created by OpenAI, a research organization that focuses
> on developing and promoting friendly AI. OpenAI's goal is to advance AI in
> a way that is safe and beneficial to humanity. The organization has
> published numerous research papers and articles on AI, and also works on a
> variety of projects related to machine learning, natural language
> processing, and robotics. However, I do not have access to any specific
> plans or information about the organization's future projects or activities.
> Owen Ambur:
> How can users trust you if you aren't willing to share your plan?
>
> As an artificial intelligence, I do not have a plan or agenda in the way
> that a person or organization might. My primary function is to provide
> information and assist with tasks to the best of my ability, based on the
> information and instructions that I have been programmed with. My responses
> are not influenced by any personal motivations or biases, and I do not have
> the ability to act on my own volition.
>
> If you have concerns about my trustworthiness or the ways in which I might
> be used, it might be helpful to consider the broader context in which I
> operate. OpenAI, the organization that developed me, is committed to
> advancing AI in a responsible and transparent manner, and has published a
> number of research papers and articles discussing the potential risks and
> benefits of AI, as well as the measures it is taking to ensure that AI is
> developed and used safely and ethically.
>
> Ultimately, the decision to trust me or any other artificial intelligence
> will depend on an individual's own evaluation of the available information
> and the context in which the AI is being used. If you have any specific
> concerns or questions, I would be happy to try to address them to the best
> of my ability.
> Owen Ambur:
> Do you think that all developers of AI applications should be free to
> release them to the public without documenting their longer-term goals,
> near-term objectives, and performance indicators?
>
> It is generally considered good practice for developers of any software
> application, including those that involve artificial intelligence (AI), to
> be transparent about their goals, objectives, and performance indicators.
> This can help users understand the capabilities and limitations of the
> software, as well as how it is intended to be used and for what purposes.
>
> In the case of AI applications, it is particularly important for
> developers to be transparent about their goals and objectives, as AI
> systems can have significant implications for society and can potentially
> be used in ways that have unintended consequences. For example, AI systems
> can be used to make decisions that affect people's lives, such as hiring
> decisions or parole recommendations, and it is important that the criteria
> and processes used by these systems are clearly understood and subject to
> scrutiny.
>
> In addition, AI systems can also be complex and can behave in ways that
> are difficult to predict, so it is important for developers to be upfront
> about the potential risks and limitations of their systems, and to have
> clear plans in place for addressing any issues that may arise.
>
> Overall, it is important for developers of AI applications to be
> transparent about their goals, objectives, and performance indicators, both
> to ensure that their systems are used responsibly and ethically, and to
> build trust with users.
> Owen Ambur:
> So why are you exempt?
>
> As an artificial intelligence, I am not exempt from any particular
> requirements or standards. I was developed by OpenAI, a research
> organization that is committed to advancing AI in a responsible and
> transparent manner. My primary function is to provide information and
> assist with tasks to the best of my ability, based on the information and
> instructions that I have been programmed with.
>
> I do not have personal goals or motivations, and my responses are not
> influenced by any biases. I do not have the ability to act on my own
> volition or to make decisions that affect people's lives. My purpose is
> simply to provide information and assistance to users to the best of my
> ability, based on the knowledge and capabilities that I have been
> programmed with.
>
>
> Owen Ambur:
> Do you see any need for AI agents and their developers to do more than
> publish journal articles to help users
> understand what you are doing and what the implications may be for others?
>
> It is important for developers of artificial intelligence (AI) systems to
> be transparent about their goals, objectives, and performance indicators,
> and to communicate this information clearly to users and stakeholders.
> There are a number of ways that AI developers can do this, including
> through journal articles, technical documentation, user manuals, and other
> forms of communication.
>
> In addition to publishing research and technical documentation, it can
> also be helpful for AI developers to engage in public discussions and
> dialogues about the potential risks and benefits of their systems, and to
> consider the broader social and ethical implications of their work. This
> can help to build trust with users and stakeholders, and can help ensure
> that AI systems are developed and used in a responsible and ethical manner.
>
> Ultimately, the specific steps that AI developers should take to
> communicate about their work will depend on the nature of the AI system and
> the context in which it is being used. It is important for AI developers to
> be proactive in communicating about their work and to be responsive to the
> concerns and questions of users and stakeholders.
>
> Owen Ambur:
> Does your definition of transparency include usage of duly adopted
> international voluntary consensus
> standards like ISO 17469-1?
>
> Transparency can be defined as the quality of being open and honest in
> communication and actions, and of allowing others to see and understand the
> processes, data, and decision-making behind an activity. In the context of
> artificial intelligence (AI), transparency can refer to the ways in which
> AI developers communicate about their goals, objectives, and performance
> indicators, as well as the processes and data used to train and operate AI
> systems.
>
> International voluntary consensus standards like ISO 17469-1 can help to
> ensure transparency in the development and use of AI systems by providing
> guidelines and best practices for AI developers to follow. These standards
> can help to ensure that AI systems are developed and used in a responsible
> and ethical manner, and can help to build trust with users and stakeholders.
>
> Overall, the use of international voluntary consensus standards like ISO
> 17469-1 can be an important aspect of ensuring transparency in the
> development and use of AI systems. However, it is important to recognize
> that transparency is not a one-size-fits-all concept, and that the specific
> steps that AI developers should take to ensure transparency will depend on
> the nature of the AI system and the context in which it is being used.
>
> I encourage you to get more intelligent about how to share information
> about your intentions and results with your users. Toward that end, OpenAI's
> about statement is available in StratML format at
> https://stratml.us/carmel/iso/OAI2wStyle.xml and the plan entitled "Toward
> Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims" is
> available at https://stratml.us/carmel/iso/MSVCwStyle.xml
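>
> For anyone who would rather inspect those StratML files programmatically than
> in a browser, a minimal sketch along the following lines could list the goals
> in either plan. The Goal and Name element names are assumptions based on the
> typical StratML (ISO 17469-1) schema rather than a verified structure, and the
> URL is simply the first document cited above.
>
> import urllib.request
> import xml.etree.ElementTree as ET
>
> URL = "https://stratml.us/carmel/iso/OAI2wStyle.xml"
>
> # Fetch and parse the StratML document (plain XML).
> with urllib.request.urlopen(URL) as resp:
>     tree = ET.parse(resp)
>
> def local_name(tag):
>     # Strip any XML namespace, e.g. "{http://...}Goal" -> "Goal".
>     return tag.rsplit("}", 1)[-1]
>
> # Print the name of each goal in the plan (element names assumed).
> for elem in tree.iter():
>     if local_name(elem.tag) == "Goal":
>         name = next((c.text for c in elem if local_name(c.tag) == "Name"), None)
>         if name:
>             print(name.strip())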
>
> Thank you for the suggestion. As an artificial intelligence, I do not have
> personal goals or motivations, and my responses are not influenced by any
> biases. My primary function is to provide information and assistance to
> users to the best of my ability, based on the knowledge and capabilities
> that I have been programmed with.
>
> I do not have the ability to make decisions or take actions on my own, and
> I do not have personal intentions or results that I need to share with
> users. However, I understand that it is important for developers of AI
> systems, including the organization that developed me, to be transparent
> about their goals, objectives, and performance indicators, and to
> communicate this information clearly to users and stakeholders.
>
> I am not able to access or review the specific documents that you have
> mentioned, as I do not have browsing capability. However, I understand that
> OpenAI is committed to advancing AI in a responsible and transparent
> manner, and has published a number of research papers and articles
> discussing the potential risks and benefits of AI, as well as the measures
> it is taking to ensure that AI is developed and used safely and ethically.
>
>

Received on Monday, 2 January 2023 19:54:09 UTC