Like many others, I have been playing around with ChatGPT recently. If you are not familiar with it, here is the description of ChatGPT from its creators, OpenAI:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

https://openai.com/blog/chatgpt/

One widely discussed danger of these language models is that they will often present information confidently even when that information is incorrect. For example, Stack Overflow does not currently allow answers generated by ChatGPT or other generative AI tools, partly for this reason:

The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like the answers might be good and the answers are very easy to produce. There are also many people trying out ChatGPT and other generative AI technologies to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with significant subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

https://meta.stackoverflow.com/questions/421831/policy-generative-ai-e-g-chatgpt-is-banned

I don’t know much about conversational language models, but the level of confidence conveyed by the language in the model’s responses seems like something that could be tuned over time.

Edit: As of March 1, 2023, Bing is allowing some users to “toggle between ‘Precise,’ ‘Balanced’, or ‘Creative’ tones for responses” when interacting with its OpenAI-powered chatbot, according to this article.
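To give a concrete (if simplified) picture of what “tuning the tone” can look like for a developer today: with the OpenAI API, the system prompt and the temperature parameter both influence how hedged or assertive a response sounds. The sketch below uses the OpenAI Python SDK; the prompt wording, model name, and temperature value are my own assumptions for illustration, not how Bing’s tone toggle actually works under the hood.

```python
# Sketch: nudging a chat model toward more hedged language via the system
# prompt and the temperature parameter. The prompt text, model name, and
# temperature value here are illustrative assumptions, not documented
# settings behind Bing's "Precise"/"Balanced"/"Creative" modes.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer precisely. If you are not certain, say so explicitly "
                "and qualify your answer rather than stating it as fact."
            ),
        },
        {"role": "user", "content": "In what year was Stack Overflow launched?"},
    ],
    temperature=0.2,  # lower temperature -> less variation in wording
)

print(response.choices[0].message.content)
```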

Another stated reason that Stack Overflow does not allow AI-generated answers is that generative AI tools do not always properly cite their sources:

Since then, quite a lot has happened. Based on the voting for this question, it’s clear that there’s an overwhelming consensus for this policy. The company has chosen that the specific policy on AI-generated content will be up to individual sites (list of per-site policies), but that even on sites which permit AI-generated content, such AI-generated content is considered “not your own work” and must follow the referencing requirements. The requirement for following the referencing requirements was, later, put into the Code of Conduct: Inauthentic usage policy. There’s a lot more that’s gone on with respect to AI-generated content. So much has happened such that it’s not reasonable to try to summarize all of it here.

https://meta.stackoverflow.com/questions/421831/policy-generative-ai-e-g-chatgpt-is-banned

This seems like a much thornier issue, since it would likely be difficult (or maybe impossible) to determine which sources in the training data led to any given response generated by the ChatGPT model. The issue of proper citation has also been raised with regard to GitHub Copilot’s generated code suggestions.

Edit (4/28/2024): Since I originally published this blog post, many lawsuits related to generative AI have been filed. For example, the New York Times filed a lawsuit against OpenAI and Microsoft alleging copyright infringement in December of 2023 (source). Groups of authors have also sued OpenAI and Microsoft for copyright infringement. One such lawsuit, filed in November of 2023, is described in this Reuters article; another similar lawsuit, filed in September of 2023, is described in this Axios article.

It will be interesting to see how these issues of confidence and proper source citation play out over the coming years as AI tools continue to evolve.