TechTalk Daily

AI Hallucinations, Bias and Lies: Why We Need to Stop Ascribing Human Behavior and Attributes to AI

By: Daniel W. Rasmus for Serious Insights

The new era of AI has brought a mistake with it. Makers, users, marketers, critics, and academics have ascribed human attributes to AI, overloading well-understood terms. When we overload terms, we lose the ability to differentiate meanings without context. Large Language Models (LLMs) do not hallucinate in the way a human hallucinates; the aberrant behavior they display comes from a completely different source.

AIs do not lie. They may offer incorrect information, but they do so without the intent to deceive. They simply reflect the data pulled together and weighted in response to a human prompt. The incorrect information could arise from multiple places in the query chain, but it cannot arise from the AI deciding to tell an untruth in order to deceive the person asking the question.

And while AI can demonstrate bias, it does so because of errors in its data, its guardrails, or other elements of its construction. Generative AI reflects human bias; it does not hold a bias of its own. It is possible for an LLM to be constructed intentionally with a biased point of view. We have not seen such a system yet, but it is technically possible for those aligned with an ideology to train an LLM to respond only from that ideology's beliefs, if those beliefs make up the LLM's primary training set.

If such a purposefully biased system were designed, the bias would arise from the human actors, not from the AI. The AI, prior to training, is completely agnostic to any concept. It has no religion, no race, and no cultural attribution. An untrained AI has no historical background of experiences that shape its processing, no childhood trauma, no economic disadvantage, and no history of physical punishment or manipulative reward. As it obtains data, the AI may find correct responses reinforced, but that reinforcement comes from humans, or at least from human data; it does not arise intrinsically from any ability of the AI to build a moral structure and differentiate right from wrong, correct from incorrect.
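
To make that point concrete, here is a minimal sketch in Python. It is a toy bigram generator, not any real LLM architecture, and the corpus is invented for illustration; it stands in for an ideologically curated training set. The only "beliefs" the model can express are the word-transition counts that humans put into it.

    # A toy, purely illustrative bigram "model" (not any production LLM):
    # it learns only word-to-word counts from whatever text humans feed it,
    # so any skew in the corpus is exactly the skew that comes back out.
    import random
    from collections import defaultdict, Counter

    def train_bigram(corpus_sentences):
        """Count word transitions in the human-supplied corpus."""
        transitions = defaultdict(Counter)
        for sentence in corpus_sentences:
            words = sentence.lower().split()
            for current_word, next_word in zip(words, words[1:]):
                transitions[current_word][next_word] += 1
        return transitions

    def generate(transitions, start_word, max_words=10):
        """Sample a continuation; every probability is a corpus statistic."""
        word, output = start_word, [start_word]
        for _ in range(max_words):
            followers = transitions.get(word)
            if not followers:
                break  # the model has learned nothing beyond this word
            candidates, counts = zip(*followers.items())
            word = random.choices(candidates, weights=counts, k=1)[0]
            output.append(word)
        return " ".join(output)

    # A deliberately one-sided corpus stands in for an ideologically
    # curated training set; the generator can only echo it.
    corpus = [
        "the policy is good",
        "the policy is good for everyone",
        "the policy is popular",
    ]
    model = train_bigram(corpus)
    print(generate(model, "the"))  # e.g. "the policy is good for everyone"

Swap in a different corpus and the "bias" swaps with it; nothing in the generating code changes at all, which is the sense in which the bias belongs to the humans who assembled the data, not to the system.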

Hallucinations

We do a disservice to AI's potential by not creating a context for understanding its actual behavior. When we talk of hallucinations, we force people to battle through widely accepted definitions of the word, such as its association with drug use or the degradation of mental faculties. Those types of hallucinations, though, derive from very different mental mechanisms in the people experiencing them.

AI Hallucinations, Bias and Lies: an abstract rendering of the concept, generated by DALL-E 3.

Hallucinations, in humans, are a form of altered perception. AI has no sense of self, no shared reality, no basis for its perceptions, and therefore it cannot "hallucinate" in the way humans do. The seemingly similar incorrect responses, which either verge on or are pure gibberish, derive from a lack of data. When the pattern recognition algorithm can find no statistically valid response to the prompt, it puts out the next best response within its data, which may not only be a poor match for the prompt but contain strings of tokens that bear only the most distant mathematical relationship to it.
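
A minimal sketch, using invented numbers rather than output from any real model, shows why: the token-selection step always returns some token, even when every candidate is a poor statistical match for the prompt, so the system has no built-in way to abstain.

    # Illustrative only: toy logits, not values from any real model.
    # Next-token selection always returns *some* token, even when no
    # candidate is a statistically strong match for the prompt.
    import math

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        exps = [math.exp(x - max(logits)) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["Paris", "1847", "banana", "therefore"]

    # Case 1: the training data strongly supports one continuation.
    confident_logits = [9.0, 1.0, 0.5, 0.2]
    # Case 2: nothing fits the prompt well; the scores are nearly flat.
    weak_logits = [0.30, 0.28, 0.27, 0.26]

    for name, logits in [("confident", confident_logits), ("weak", weak_logits)]:
        probs = softmax(logits)
        best = max(range(len(vocab)), key=lambda i: probs[i])
        print(f"{name}: picks '{vocab[best]}' with probability {probs[best]:.2f}")
    # The weak case still emits a token, at roughly chance probability:
    # a poor match that nonetheless comes out looking like an answer.

That is the behavior the word "hallucination" is being stretched to cover: not an altered perception, but a forced pick from a distribution that contains no good options.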

As far back as 1932, Hughlings Jackson [1] "suggested that hallucinations occur when the usual inhibitory influences of the uppermost level are impeded, thus leading to the release of middle-level activity, which takes the form of hallucinations." [2]

Information in the human brain is stored in many different ways, but we most often rely on a cognitively constrained approach to communication and behavior. We exist in a consensus reality. I don't want to go too far down the path of exploring human hallucinations, but the analysis I conducted puts hallucinations into the category of too much or additional information, not a lack of information. Hallucinations occur when people tap into something beyond the constraints that usually govern behavior.

AIs do not "hallucinate." We need to come up with another word for the technical deficiencies that lead to their inaccuracies rather than label them with human conditions that already have meanings, symptoms, and treatments…

To learn more about how AI reflects bias and how AI can create lies through misinformation, check out the rest of the article on SeriousInsight.com: AI Hallucinations, Bias and Lies: Why We Need to Stop Ascribing Human Behavior and Attributes to AI. 

 

About the author:

Daniel W. Rasmus, the author of Listening to the Future, is a strategist and industry analyst who has helped clients put their future in context. Rasmus uses scenarios to analyze trends in society, technology, economics, the environment, and politics in order to discover implications used to develop and refine products, services, and experiences. He leverages this work and methodology for content development, workshops, and for professional development.

Interested in AI? Check here to see what TechTalk AI Impact events are happening in your area.