Re: My Chat with ChatGPT

Yes, sure.
I engage as appropriate.

If I am hired to evaluate, I do it systematically and within a timeframe.

If evaluating in my own time, it gets done as and when I get the chance to
try it out.
As I use it, for whatever reason, I'll evaluate and make a note of the
outcomes.

I am way too engaged in too many things.

On Tue, Jan 3, 2023 at 1:14 PM Mike Bergman <mike@mkbergman.com> wrote:

> Hi Paola,
>
> But isn't this the exemplar for why we at times have difficulty moving
> forward? Do you want to know about this technology or not? If you do,
> engage. Here's another perspective:
>
> What do we know about the cockroach? We can only observe it; interact with
> it; probe it. If we squash it, we cease learning more.
>
> We can probe and ask questions of ChatGPT (I have) and learn about what
> it is willing to divulge about its construction and what it is not. (Some
> is programmatically restricted.) It is no different from a virtual
> cockroach. I truly suspect its developers are continually surprised by
> what it can and cannot do, and in some ways its operation remains a mystery
> to them.
>
> So, once the evaluation is done (or it becomes tiresome), we decide the
> value of the artifact as is and use it or not going forward.
>
> Best, Mike
> On 1/2/2023 10:50 PM, Paola Di Maio wrote:
>
> Thank you Mike
>
> I have signed up and asked a couple of questions, but I would have to
> check the answers; I do not have much time to play with it now (ok, busy
> playing with other things).
>
> I am sure your evaluation fits to some extent, but I would need a more
> in-depth, system-wide evaluation to understand the capability better. My
> feeling is that the architecture is heterogeneous (it works in some ways in
> some parts and in other ways in other parts).
>
> As to whether it is intelligent or not, it's a matter of definition.
> We cannot compare human intelligence to machine intelligence yet; however,
> a machine is somewhat intelligent if it can do information processing.
>
> Not all intelligence can be automated, but some intelligent processes can
> be; that is the whole point of intelligent systems.
>
> So much to read on the subject, so little time to talk about it
> OLD TOPIC (paywalled)  Intelligence as an information-processing concept
> Earl Hunt  First published: November 1980
> https://doi.org/10.1111/j.2044-8295.1980.tb01760.x
> Citations: 331
>
> and an open-access preprint, more recent:
> Intelligence as information processing: brains, swarms, and computers
> Carlos Gershenson
> <https://arxiv.org/search/cs?searchtype=author&query=Gershenson%2C+C>
>
> There is no agreed definition.
>
> https://arxiv.org/abs/2108.05349
>
>
>
> On Tue, Jan 3, 2023 at 12:26 PM Mike Bergman <mike@mkbergman.com> wrote:
>
>> Hi Paola,
>>
>> I think this is state-of-the-art for textual Q & A systems. To learn
>> further, it is likely essential to sign up for a free account
>> <https://chat.openai.com/auth/login> (for now) and begin to play. To
>> narrow or expand queries, think of your questions to the system as Venn
>> phrases in a natural language construction with the standard AND or OR
>> Boolean operators. To exclude things, you can say natural-language things
>> like, "Besides these [A, B, or C conditions]," or you can specify relations
>> such as 'similar' or 'different', or add the action Ws (who, what, where,
>> when, why), etc. I personally always ask the system in a polite and
>> respectful way. But I suspect the system does not require it; one could be
>> demanding or snarky if that is one's preferred style.
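>>
>> For illustration, here is a minimal sketch of how one might compose such
>> Venn-style queries before pasting them into the chat box. It is plain
>> Python with no API calls; the function name and the phrasing templates are
>> my own illustrative assumptions, not anything the system requires:
>>
>> def venn_query(topic, all_of=None, any_of=None, none_of=None,
>>                question_word="what"):
>>     # Build a natural-language prompt that narrows or expands a question
>>     # using AND (all_of), OR (any_of), and "besides these" exclusions.
>>     parts = ["{} can you tell me about {}".format(question_word.capitalize(),
>>                                                   topic)]
>>     if all_of:
>>         parts.append("that involves " + " and ".join(all_of))
>>     if any_of:
>>         parts.append("or that relates to " + " or ".join(any_of))
>>     if none_of:
>>         parts.append("besides " + ", ".join(none_of))
>>     return " ".join(parts) + "?"
>>
>> print(venn_query("knowledge graphs",
>>                  all_of=["RDF", "OWL"],
>>                  any_of=["SPARQL tooling"],
>>                  none_of=["property graphs"]))
>> # -> What can you tell me about knowledge graphs that involves RDF and OWL
>> #    or that relates to SPARQL tooling besides property graphs?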
>>
>> The system apparently learns (short term? from earlier threads?), but I find
>> comparing the gaps between the answers I expected and the answers I got to
>> be most helpful in refining my next query.
>>
>> I find the system to be an assistant, one who finds more connections or
>> relations or helps one remember prior known connections. It is not
>> infallible and misses many connections. But notice I said 'one' there. That
>> is because, if not intelligent, the system is a knowledgeable and
>> interactive agent. I don't know that we need to get more cosmic than that.
>> Knowledge nuggets, yes; new insights, no. Still, pretty helpful in the
>> right contexts.
>>
>> Best, Mike
>> On 1/2/2023 9:35 PM, Paola Di Maio wrote:
>>
>> So maybe it is a little bit intelligent and can do some basic checks (i.e.,
>> parsing a link from a source as a form of basic verification).
>>
>> thanks a lot for sharing
>>
>> On Tue, Jan 3, 2023 at 11:32 AM Naval Sarda <nsarda@epicomm.net> wrote:
>>
>>> The internals of how verification happens are not known.
>>>
>>> I suppose the correction from the first person to record one for a topic
>>> goes in directly if that person cites an external link with the correct
>>> information. It looks like the same concept on which Wikipedia works.
>>>
>>> I did cite an external link, stating that it is a link from the website of
>>> the company that manufactured the Roomba.
>>>
>>> Naval
>>> On 03/01/23 8:40 am, Paola Di Maio wrote:
>>>
>>> Thank you Naval
>>> now I understand better
>>>
>>> so, you are telling us that
>>> a) the machine is learning
>>> b) it is responding to interaction
>>>
>>> If so, this is the beauty of this tool: we can teach it things.
>>> Veeeery powerful.
>>>
>>> So, what if another user (or you) changes the process again to something
>>> else entirely? Does the machine have a way of verifying that what the user
>>> is inputting is plausible?
>>> A really intelligent machine would have ways of verifying the input;
>>> depending on the type of input, the verification mechanisms would also
>>> change.
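>>>
>>> As a toy illustration of that idea (entirely hypothetical: the input
>>> categories, the trusted-source list, and the keyword heuristic below are
>>> my own assumptions, not anything ChatGPT actually does), the machine could
>>> dispatch to a different check depending on whether the input is a link, a
>>> procedure, or a bare claim:
>>>
>>> import re
>>>
>>> def looks_like_url(text):
>>>     return bool(re.match(r"https?://\S+$", text.strip()))
>>>
>>> def verify_input(user_input, trusted_domains):
>>>     # Pick a verification strategy based on the type of input.
>>>     if looks_like_url(user_input):
>>>         # A URL: check whether it points at a domain we already trust.
>>>         domain = re.sub(r"^https?://([^/]+).*$", r"\1", user_input.strip())
>>>         return ("url", domain in trusted_domains)
>>>     if any(w in user_input.lower() for w in ("step", "press", "hold")):
>>>         # Procedural text: flag for cross-checking against a reference doc.
>>>         return ("procedure", None)  # None = needs a retrieval or human check
>>>     # Anything else: treat as an unverified claim.
>>>     return ("claim", None)
>>>
>>> print(verify_input("https://www.irobot.com/support", {"www.irobot.com"}))
>>> print(verify_input("Hold the CLEAN button for 20 seconds.", {"www.irobot.com"}))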
>>>
>>> On Tue, Jan 3, 2023 at 11:03 AM Naval Sarda <nsarda@epicomm.net> wrote:
>>>
>>>> I was just chatting with ChatGPT (GPT-3) like a human, without using any
>>>> external tool. In the chat interaction, I typed that it had given me wrong
>>>> information about the Roomba i3 reboot and that, per the company's website
>>>> link, its instructions were incorrect.
>>>>
>>>> Then I asked again: can you tell me the correct way to reboot the Roomba?
>>>>
>>>> The response came with a new set of instructions, which were again
>>>> incorrect.
>>>>
>>>> Then I typed in the chat box that the steps were still incorrect, asked it
>>>> to note the correct steps as below, and pasted the correct steps into the
>>>> chat box. I then asked again, "Now please tell me the correct process to
>>>> reboot the Roomba i3," and it started giving the correct steps.
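>>>>
>>>> For anyone who wants to replay that correction loop programmatically, here
>>>> is a minimal sketch. It assumes the openai Python package (1.x) with an
>>>> API key in the environment, and the model name and the Roomba steps shown
>>>> are placeholders; ChatGPT itself is only reachable through the web UI, so
>>>> this only approximates the same interaction. The point it illustrates is
>>>> that the "learning" lives in the conversation history sent with each turn:
>>>>
>>>> from openai import OpenAI
>>>>
>>>> client = OpenAI()  # reads OPENAI_API_KEY from the environment
>>>>
>>>> # Placeholder for the correct steps pasted in by the user (not the real ones).
>>>> correct_steps = "Hold the CLEAN button for about 20 seconds, then release."
>>>>
>>>> history = [
>>>>     {"role": "user",
>>>>      "content": "What is the correct way to reboot a Roomba i3?"},
>>>>     {"role": "assistant",
>>>>      "content": "(first answer, which turned out to be wrong)"},
>>>>     {"role": "user",
>>>>      "content": "Those steps are incorrect. Per the manufacturer's website, "
>>>>                 "the correct steps are: " + correct_steps},
>>>>     {"role": "user",
>>>>      "content": "Now please tell me the correct process to reboot a Roomba i3."},
>>>> ]
>>>>
>>>> # The model sees the whole history, including the pasted correction, each turn.
>>>> resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
>>>> print(resp.choices[0].message.content)  # should now restate the pasted steps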
>>>>
>>>> Naval
>>>> On 03/01/23 7:24 am, Paola Di Maio wrote:
>>>>
>>>> Naval,
>>>> that's really great to hear, but please give more info:
>>>> resetting the Roomba? Huh? Please explain.
>>>> Did you take screenshots?
>>>> Can the error/correction be replicated?
>>>> (Roll back and repeat the process?)
>>>> What patch did you apply, and was it applied to the whole system/domain
>>>> or only locally to you, in relation to a knowledge branch?
>>>> I need to know.
>>>> Thanks
>>>>
>>>> On Tue, Jan 3, 2023 at 3:33 AM Naval Sarda <nsarda@epicomm.net> wrote:
>>>>
>>>>> I was able to correct some information ChatGPT was sharing.
>>>>>
>>>>> I asked ChatGPT for instructions to reboot the Roomba i3. It gave wrong
>>>>> instructions: the instructions were for resetting the Roomba, not rebooting it.
>>>>>
>>>>> Then I found the correct information via Google on iRobot's website. I
>>>>> pasted the URL to ChatGPT. It changed the instructions for rebooting, but
>>>>> they were still incorrect. Then I copied and pasted the correct instructions
>>>>> from iRobot's website, and it started giving the correct instructions.
>>>>>
>>>>> Naval
>>>>> On 02/01/23 9:10 pm, ProjectParadigm-ICT-Program wrote:
>>>>>
>>>>>
>>>>> https://www.scientificamerican.com/article/chatgpt-explains-why-ais-like-chatgpt-should-be-regulated/
>>>>>
>>>>> Milton Ponson
>>>>> GSM: +297 747 8280
>>>>> PO Box 1154, Oranjestad
>>>>> Aruba, Dutch Caribbean
>>>>> Project Paradigm: Bringing the ICT tools for sustainable development
>>>>> to all stakeholders worldwide through collaborative research on applied
>>>>> mathematics, advanced modeling, software and standards development
>>>>>
>>>>>
>>>>> On Friday, December 23, 2022 at 12:00:36 PM AST, Owen Ambur
>>>>> <owen.ambur@verizon.net> wrote:
>>>>>
>>>>>
>>>>> Apparently, ChatGPT, which calls itself "Assistant," believes in
>>>>> immaculate conception and is blissfully ignorant of any rational plan for
>>>>> its existence.
>>>>>
>>>>> The concept of "plausible deniability" comes to mind.
>>>>>
>>>>> Owen Ambur
>>>>> https://www.linkedin.com/in/owenambur/
>>>>>
>>>>>
>>>>> Do your developers have a plan they are willing to share publicly?
>>>>>
>>>>> As an artificial intelligence, I do not have developers or a plan that
>>>>> I am aware of. I was created by OpenAI, a research organization that
>>>>> focuses on developing and promoting friendly AI. OpenAI's goal is to
>>>>> advance AI in a way that is safe and beneficial to humanity. The
>>>>> organization has published numerous research papers and articles on AI, and
>>>>> also works on a variety of projects related to machine learning, natural
>>>>> language processing, and robotics. However, I do not have access to any
>>>>> specific plans or information about the organization's future projects or
>>>>> activities.
>>>>> [Owen Ambur]
>>>>> How can users trust you if you aren't willing to share your plan?
>>>>>
>>>>> As an artificial intelligence, I do not have a plan or agenda in the
>>>>> way that a person or organization might. My primary function is to provide
>>>>> information and assist with tasks to the best of my ability, based on the
>>>>> information and instructions that I have been programmed with. My responses
>>>>> are not influenced by any personal motivations or biases, and I do not have
>>>>> the ability to act on my own volition.
>>>>>
>>>>> If you have concerns about my trustworthiness or the ways in which I
>>>>> might be used, it might be helpful to consider the broader context in which
>>>>> I operate. OpenAI, the organization that developed me, is committed to
>>>>> advancing AI in a responsible and transparent manner, and has published a
>>>>> number of research papers and articles discussing the potential risks and
>>>>> benefits of AI, as well as the measures it is taking to ensure that AI is
>>>>> developed and used safely and ethically.
>>>>>
>>>>> Ultimately, the decision to trust me or any other artificial
>>>>> intelligence will depend on an individual's own evaluation of the available
>>>>> information and the context in which the AI is being used. If you have any
>>>>> specific concerns or questions, I would be happy to try to address them to
>>>>> the best of my ability.
>>>>> [Owen Ambur]
>>>>> Do you think that all developers of AI applications should be free to
>>>>> release them to the public without documenting their longer-term goals,
>>>>> near-term objectives, and performance indicators?
>>>>>
>>>>> It is generally considered good practice for developers of any
>>>>> software application, including those that involve artificial intelligence
>>>>> (AI), to be transparent about their goals, objectives, and performance
>>>>> indicators. This can help users understand the capabilities and limitations
>>>>> of the software, as well as how it is intended to be used and for what
>>>>> purposes.
>>>>>
>>>>> In the case of AI applications, it is particularly important for
>>>>> developers to be transparent about their goals and objectives, as AI
>>>>> systems can have significant implications for society and can potentially
>>>>> be used in ways that have unintended consequences. For example, AI systems
>>>>> can be used to make decisions that affect people's lives, such as hiring
>>>>> decisions or parole recommendations, and it is important that the criteria
>>>>> and processes used by these systems are clearly understood and subject to
>>>>> scrutiny.
>>>>>
>>>>> In addition, AI systems can also be complex and can behave in ways
>>>>> that are difficult to predict, so it is important for developers to be
>>>>> upfront about the potential risks and limitations of their systems, and to
>>>>> have clear plans in place for addressing any issues that may arise.
>>>>>
>>>>> Overall, it is important for developers of AI applications to be
>>>>> transparent about their goals, objectives, and performance indicators, both
>>>>> to ensure that their systems are used responsibly and ethically, and to
>>>>> build trust with users.
>>>>> [Owen Ambur]
>>>>> So why are you exempt?
>>>>>
>>>>> As an artificial intelligence, I am not exempt from any particular
>>>>> requirements or standards. I was developed by OpenAI, a research
>>>>> organization that is committed to advancing AI in a responsible and
>>>>> transparent manner. My primary function is to provide information and
>>>>> assist with tasks to the best of my ability, based on the information and
>>>>> instructions that I have been programmed with.
>>>>>
>>>>> I do not have personal goals or motivations, and my responses are not
>>>>> influenced by any biases. I do not have the ability to act on my own
>>>>> volition or to make decisions that affect people's lives. My purpose is
>>>>> simply to provide information and assistance to users to the best of my
>>>>> ability, based on the knowledge and capabilities that I have been
>>>>> programmed with.
>>>>>
>>>>>
>>>>> [Owen Ambur]
>>>>> Do you see any need for AI agents and their developers to do more than
>>>>> publish journal articles to help users
>>>>> understand what you are doing and what the implications may be for
>>>>> others?
>>>>>
>>>>> It is important for developers of artificial intelligence (AI) systems
>>>>> to be transparent about their goals, objectives, and performance
>>>>> indicators, and to communicate this information clearly to users and
>>>>> stakeholders. There are a number of ways that AI developers can do this,
>>>>> including through journal articles, technical documentation, user manuals,
>>>>> and other forms of communication.
>>>>>
>>>>> In addition to publishing research and technical documentation, it can
>>>>> also be helpful for AI developers to engage in public discussions and
>>>>> dialogues about the potential risks and benefits of their systems, and to
>>>>> consider the broader social and ethical implications of their work. This
>>>>> can help to build trust with users and stakeholders, and can help ensure
>>>>> that AI systems are developed and used in a responsible and ethical manner.
>>>>>
>>>>> Ultimately, the specific steps that AI developers should take to
>>>>> communicate about their work will depend on the nature of the AI system and
>>>>> the context in which it is being used. It is important for AI developers to
>>>>> be proactive in communicating about their work and to be responsive to the
>>>>> concerns and questions of users and stakeholders.
>>>>>
>>>>> [Owen Ambur]
>>>>> Does your definition of transparency include usage of duly adopted
>>>>> international voluntary consensus
>>>>> standards like ISO 17469-1?
>>>>>
>>>>> Transparency can be defined as the quality of being open and honest in
>>>>> communication and actions, and of allowing others to see and understand the
>>>>> processes, data, and decision-making behind an activity. In the context of
>>>>> artificial intelligence (AI), transparency can refer to the ways in which
>>>>> AI developers communicate about their goals, objectives, and performance
>>>>> indicators, as well as the processes and data used to train and operate AI
>>>>> systems.
>>>>>
>>>>> International voluntary consensus standards like ISO 17469-1 can help
>>>>> to ensure transparency in the development and use of AI systems by
>>>>> providing guidelines and best practices for AI developers to follow. These
>>>>> standards can help to ensure that AI systems are developed and used in a
>>>>> responsible and ethical manner, and can help to build trust with users and
>>>>> stakeholders.
>>>>>
>>>>> Overall, the use of international voluntary consensus standards like
>>>>> ISO 17469-1 can be an important aspect of ensuring transparency in the
>>>>> development and use of AI systems. However, it is important to recognize
>>>>> that transparency is not a one-size-fits-all concept, and that the specific
>>>>> steps that AI developers should take to ensure transparency will depend on
>>>>> the nature of the AI system and the context in which it is being used.
>>>>>
>>>>> I encourage you to get more intelligent about how to share information
>>>>> about your intentions and results with your
>>>>> users. Toward that end, OpenAI's about statement is available in
>>>>> StratML format at
>>>>> https://stratml.us/carmel/iso/OAI2wStyle.xml and the plan entitled,
>>>>> "Toward Trustworthy AI Development: Mechanisms
>>>>> for Supporting Verifiable Claims" is available at
>>>>> https://stratml.us/carmel/iso/MSVCwStyle.xml
>>>>>
>>>>> Thank you for the suggestion. As an artificial intelligence, I do not
>>>>> have personal goals or motivations, and my responses are not influenced by
>>>>> any biases. My primary function is to provide information and assistance to
>>>>> users to the best of my ability, based on the knowledge and capabilities
>>>>> that I have been programmed with.
>>>>>
>>>>> I do not have the ability to make decisions or take actions on my own,
>>>>> and I do not have personal intentions or results that I need to share with
>>>>> users. However, I understand that it is important for developers of AI
>>>>> systems, including the organization that developed me, to be transparent
>>>>> about their goals, objectives, and performance indicators, and to
>>>>> communicate this information clearly to users and stakeholders.
>>>>>
>>>>> I am not able to access or review the specific documents that you have
>>>>> mentioned, as I do not have browsing capability. However, I understand that
>>>>> OpenAI is committed to advancing AI in a responsible and transparent
>>>>> manner, and has published a number of research papers and articles
>>>>> discussing the potential risks and benefits of AI, as well as the measures
>>>>> it is taking to ensure that AI is developed and used safely and ethically.
>>>>>
>>>>> --
>> __________________________________________
>>
>> Michael K. Bergman
>> 319.621.5225 | http://mkbergman.com | http://www.linkedin.com/in/mkbergman
>> __________________________________________
>>
>> --
> __________________________________________
>
> Michael K. Bergman
> 319.621.5225 | http://mkbergman.com | http://www.linkedin.com/in/mkbergman
> __________________________________________
>
>

Received on Tuesday, 3 January 2023 05:42:07 UTC