AI: Dumb and Dumber?
It turns out that, just like humans, AI models can get dumber over time. Their mental faculties deteriorate in a way reminiscent of dementia, a phenomenon called "model drift".
Image by Amelia Kinsinger
Helen Gu, founder and CEO of InsightFinder, told Tech Brew:
Model drift is hard to detect because it’s not something you can actually clearly describe using one metric, and it’s basically a running metric...
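Gu's description of drift as a "running metric" matches how it is usually monitored in practice: you keep comparing the model's recent behavior against a frozen snapshot taken at deployment time. Below is a minimal Python sketch using the population stability index (PSI), one common drift statistic; the score distributions and the 0.25 alert threshold are illustrative assumptions on my part, not anything from InsightFinder.

```python
import numpy as np

def psi(reference, recent, bins=10):
    """Population Stability Index between a reference sample of model
    scores and a recent sample. Higher values mean more drift.
    (Toy version: recent scores outside the reference range are ignored.)"""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Floor each bucket at a tiny probability to avoid log(0).
    ref_pct = np.clip(ref_pct, 1e-6, None)
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - ref_pct) * np.log(new_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.8, 0.05, 5_000)  # confidence scores at deploy time
recent = rng.normal(0.7, 0.10, 5_000)     # this week's scores: shifted and noisier
print(f"PSI = {psi(reference, recent):.3f}")  # > 0.25 is a common "investigate" threshold
```

As Gu says, no single number tells the whole story; in practice teams track several such running statistics side by side.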
Also like humans, AI can just plain make things up. "Hallucinating", models' tendency to spontaneously invent facts, continues to be a problem. Amit Paka, founder of the AI technical services firm Fiddler, told Tech Brew:
In 2023, the focus was around communicating that there is something called hallucination and why it could happen...Now the conversation is more around, ‘What’s the quality of the hallucination metric that you have?’
The arrival of generative AI has complicated the chore of smoothing out such issues: if an AI hallucinates or malfunctions in other ways, it's much harder to look under the hood and figure out what went wrong.
When a company creates an AI, the system is trained on what the developers think people will ask, Paka explained. However, what people actually do ask often strays from the blueprint. As an AI is pushed out of its comfort zone, its chances of making mistakes rise.
To ward off this kind of AI dementia, Fiddler has built an AI whose sole function is to measure how closely a generative AI's response to an inquiry matches the material it was trained on.
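To give a flavor of what such a checker might do (this is a toy proxy of my own, not Fiddler's actual product), one crude "groundedness" signal is the lexical similarity between a model's answer and the trusted material it was supposed to draw on; an answer that shares almost no vocabulary with its source is a candidate hallucination:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def groundedness(response: str, source: str) -> float:
    """Crude groundedness score: TF-IDF cosine similarity between a
    model's response and the source material it should be grounded in.
    A score near 0 suggests the response may be fabricated."""
    tfidf = TfidfVectorizer().fit_transform([response, source])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

source = "The Apollo 11 mission landed on the Moon on July 20, 1969."
good = "Apollo 11 landed on the Moon on July 20, 1969."
bad = "The first Moon landing was accomplished by cosmonauts in 1965."
print(f"faithful answer:   {groundedness(good, source):.2f}")
print(f"fabricated answer: {groundedness(bad, source):.2f}")
```

Production systems use far stronger signals, such as entailment models or a second LLM acting as judge, but the shape of the check is the same: score the response against trusted material and flag the low scores.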
At a deeper level, users have complained that every so often an AI will simply get "lazy": instead of hallucinating, the models just don't work very well. OpenAI's workhorse ChatGPT and Anthropic's Claude have both been mentioned anecdotally.
However, with billions of parameters to check in order to diagnose laziness, any solution is pretty much impossible with today's tools.
This is despite the new science of "mechanistic interpretability," which tries to map which nodes inside a model correspond to specific concepts, though tinkering with those nodes can itself leave a model confused.
Now, more and more of the data on the Internet used to train new AIs has itself been generated by AIs! It has been found that models trained on a significant amount of AI-generated content can come down with an ailment known as "model collapse".
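The effect is easy to reproduce in miniature. The toy below (my own illustration in the spirit of the model-collapse literature, not a simulation of a real language model) repeatedly fits a Gaussian to the previous generation's own output and trains the next "model" on it; rare, tail-ish examples vanish first, and the distribution narrows toward nothing:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=25)  # scarce "human" data

# Each generation: fit a Gaussian to the previous generation's samples,
# then draw fresh "training data" from that fit. With finite samples the
# estimated spread decays, so diversity is steadily lost: model collapse.
for gen in range(61):
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={data.mean():+.3f}, std={data.std():.3f}")
    data = rng.normal(loc=data.mean(), scale=data.std(), size=25)
```

With a training sample this small the spread shrinks within a few dozen generations; the worry is that the same dynamic, much more slowly, plays out when web-scale models are trained on each other's output.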
But all is OK so long as we don't let them take over running our systems for us: they'll turn out as bad as us if we let them!