Like many others, I have been playing around with ChatGPT recently. If you are not familiar with it, here is the description of ChatGPT from its creators, OpenAI:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

https://openai.com/blog/chatgpt/

One danger of the language model that has been discussed is that it will often present information confidently even when that information is incorrect. For example, Stack Overflow does not currently allow GPT-generated answers to questions, partially for this reason:

The objective nature of the content on Stack Overflow means that if any part of an answer is wrong, then the answer is objectively wrong. In order for Stack Overflow to maintain a strong standard as a reliable source for correct and verified information, such answers must be edited or replaced. However, because GPT is good enough to convince users of the site that the answer holds merit, signals the community typically use to determine the legitimacy of their peers’ contributions frequently fail to detect severe issues with GPT-generated answers. As a result, information that is objectively wrong makes its way onto the site. In its current state, GPT risks breaking readers’ trust that our site provides answers written by subject-matter experts.

https://stackoverflow.com/help/gpt-policy

I don’t know much about conversational language models, but the level of confidence conveyed by the language used in the model’s responses seems like something that could be tuned over time.

Edit: As of March 1, 2023, Bing is allowing some users to “toggle between ‘Precise,’ ‘Balanced’, or ‘Creative’ tones for responses” when interacting with its OpenAI-powered chatbot, according to this article.
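
To make that idea of tuning a bit more concrete, here is a minimal sketch of how tone could be nudged through the OpenAI chat API. This assumes the openai Python package; the system prompt wording, example question, and temperature value are my own illustrative choices, not anything Bing or OpenAI has documented as the mechanism behind those toggles.

```python
# A minimal sketch of tuning response tone via the OpenAI chat API.
# The system prompt wording and temperature value are illustrative
# assumptions, not official settings.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # A system message can nudge the model toward hedged language.
        {
            "role": "system",
            "content": (
                "When you are unsure of an answer, say so explicitly and "
                "qualify your claims rather than stating them as fact."
            ),
        },
        {"role": "user", "content": "What does Python's GIL do?"},
    ],
    # Lower temperature tends toward more deterministic, "precise"-feeling
    # output; higher temperature toward more varied, "creative" output.
    temperature=0.2,
)

print(response["choices"][0]["message"]["content"])
```

Bing’s “Precise”/“Balanced”/“Creative” toggle presumably maps onto settings of this general sort, though the exact parameters are not public.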

Another stated reason that Stack Overflow does not allow GPT-generated responses to questions is that GPT does not properly cite sources for information (emphasis mine):

Stack Overflow is a community built upon trust. The community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers. The system relies on users to verify and validate contributions by other users with the tools we offer, including responsible use of upvotes and downvotes. Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment. This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (*a service GPT does not provide*), and verifying that the answer provided by GPT clearly and concisely answers the question asked.

https://stackoverflow.com/help/gpt-policy

This seems like a much thornier issue, since it would be difficult (or maybe impossible) to determine which sources in the training data led to any given response generated by the ChatGPT model. This issue of proper citation has also been raised in regard to GitHub Copilot’s generated code suggestions.

It will be interesting to see how these issues of confidence and proper source citation play out over the coming years as AI tools continue to evolve.