Since ChatGPT was released on November 30, 2022, “Artificial Intelligence” or “AI” has been in the news frequently. I’ve often been frustrated when reading news articles and blog posts that don’t define the term, since it can mean different things in different contexts.

When discussing ChatGPT in particular, “AI” is often used synonymously with large language models (LLMs) and adjacent technologies, typically used to generate content from user prompts or similar input. For example, this article introducing ChatGPT from OpenAI, published on November 30, 2022, uses the term “AI” in roughly this sense:

Today’s research release of ChatGPT is the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems.

In contrast, this article from The Atlantic, published on August 10, 2024, and titled We’re Entering an AI Price-Fixing Dystopia, uses the term “AI” in reference to the algorithms that RealPage property management software uses to suggest rent prices to its customers. However, the article provides no details about how this price-recommendation algorithm actually uses “AI” technology.

As far as I can tell from the article, the algorithm could simply be taking in a few inputs, like vacancy rates and competitor rental unit prices, and spitting out a recommended price using a relatively simple machine learning model. However, this model (if it exists at all) is likely far less complex than the models that power generative AI technologies. OpenAI and other companies working on large language models don’t always disclose the number of parameters in any given model, but ChatGPT 3.5 likely has on the order of hundreds of billions of parameters (its predecessor, GPT-3, has 175 billion).
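To make that contrast concrete, here is a minimal, entirely hypothetical sketch of what a “relatively simple machine learning model” for rent recommendations could look like. The article describes nothing about RealPage’s implementation, so the features, the made-up numbers, and the choice of linear regression here are all my own assumptions for illustration.

```python
# Hypothetical sketch only -- the article gives no implementation details.
# A price-recommendation model of this kind could be as simple as a linear
# regression over a handful of market features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented example features: [vacancy rate, average competitor rent]
X = np.array([
    [0.05, 1800.0],
    [0.10, 1750.0],
    [0.02, 1900.0],
    [0.08, 1650.0],
])
# Invented rents observed for those market conditions
y = np.array([1850.0, 1700.0, 1950.0, 1600.0])

model = LinearRegression().fit(X, y)

# The fitted model has only three learned parameters:
# two coefficients and an intercept.
print(model.coef_, model.intercept_)

# Suggest a rent for a unit given current (invented) market conditions
recommended = model.predict([[0.06, 1780.0]])
print(f"Recommended rent: ${recommended[0]:.2f}")
```

Even a more generous version of this sketch, with dozens of features, would have on the order of tens of learned parameters, versus the hundreds of billions in a modern LLM.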