Oops. Gotta revise my summer reading list. Those exciting offerings, plucked from a special section of the Chicago Sun-Times and reported last week, don't exist. The freelancer who created the list used generative artificial intelligence for help, and several of the books, along with many of the quotes gushing about them, were made up by the AI.

These are the most recent and high-profile AI hallucinations to make it into the news. We expect growing pains as new technology matures, but, oddly and perhaps inexplicably, the problem appears to be getting worse with AI. The notion that we can't ensure AI will produce accurate information is, uh, "disturbing" if we intend to integrate the technology so deeply into our daily lives that we can't live without it. The truth might not set you free, but it seems like a prerequisite for getting through the day.

An AI hallucination is a phenomenon in which a large language model (LLM), such as a generative AI chatbot, perceives patterns or objects that simply don't exist and responds to queries with nonsensical or inaccurate answers. There are many explanations for these hallucinations, including bad data, flawed algorithms, and biases in training, but no one knows what produces any specific faulty response.