AI HALLUCINATIONS
December 17, 2025
Whether one likes it or not, AI image generation has become an undeniable part of daily life: it floods social media feeds with realistic yet uncanny images, displaces illustration jobs, sneaks into art competitions, and more. Even though AI-generated images seem unavoidable, the easiest way to identify them is by spotting the “mistakes” in them. These mistakes are commonly known as “hallucinations,” and they are a distinctive feature of AI that arises directly from how these models process information.
This essay is a forensic exploration of AI hallucinations in diffusion models: what they are, how they work, how they have evolved, and what they reveal about the potential creativity and biases of AI.
To reveal the scope of AI hallucinations, experiments were conducted with the text-to-image generation features of five engines: Nano Banana w/Gemini, Stable Diffusion, Raphael AI, Flux, and ChatGPT. The objective of the experiments was twofold: to provide insight into how each model recognises patterns when predicting an image, and to illustrate “what not to do” in order to use these tools effectively.
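For the open-source engines on that list, this kind of prompting can also be scripted rather than run through a web interface, which makes a hallucination reproducible instead of a one-off. Below is a minimal sketch using Stable Diffusion via Hugging Face’s diffusers library; the checkpoint name, prompt, and seed are illustrative assumptions, not the exact setup used in the experiments described here.

```python
# Minimal sketch: generating a test image from a text prompt with
# Stable Diffusion via the `diffusers` library. The checkpoint,
# prompt, and seed below are illustrative, not the experimental setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Fixing the seed makes any hallucination reproducible across runs.
generator = torch.Generator("cuda").manual_seed(42)

prompt = "a person holding a glass of water with both hands"
image = pipe(prompt, generator=generator).images[0]
image.save("test_hands.png")  # hands are a classic hallucination probe
```

Re-running the same prompt with different seeds is a simple way to see whether a given “mistake” is a fluke of one sample or a systematic pattern in the model.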