AI and the “Cliff Clavin Problem”

On the TV show Cheers, Cliff Clavin was the character who worked very hard to say sophisticated-sounding things but who, most of the time, was just making up facts. AI, specifically large language models (LLMs), has a Cliff Clavin problem.

As much as I love LLMs, and I do, I worry because we're increasingly integrating, and becoming dependent on, a technology that is neither inherently factual nor truthful, and whose use is demonstrably reducing our own ability to write and to think.

We talk about a certain percentage of LLM output being “hallucinations,” but the truth is that all large language model output is fabricated, that is, produced by algorithms that arrange words according to probabilistic patterns found in their training data. These models don’t store information the way an encyclopedia does, so when we say that 30% of their output is inaccurate or untrue, that doesn’t mean the other 70% is entirely accurate or the result of any deliberate reasoning; it’s just close enough to the material the model was trained on that we consider it “true.”
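To make the “probabilistic patterns” point concrete, here is a deliberately toy sketch, nothing like a real LLM (which uses a neural network over billions of parameters rather than raw word counts), of generating text by sampling the next word from frequencies in a tiny training corpus. The corpus and the word choices are invented for illustration:

```python
import random
from collections import defaultdict, Counter

# Toy "training data": the model only ever sees these sentences,
# two of which are true and one of which is false.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon . "
).split()

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed
    `prev` in training -- no facts are looked up, only frequencies."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "paris" is sampled 2/3 of the time and "lyon" 1/3 of the time;
# the model has no notion that one completion is true and the other false.
print(next_word("is"))
```

The point of the sketch is that “accurate” output and “hallucinated” output come from the exact same sampling mechanism; the difference is only in how often each continuation appeared in the training data.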

Because of increasing discussion around artificial general intelligence and the desire to achieve it, large language models are now being trained to mimic the outward evidence of intelligence through reasoning steps. But even those reasoning steps, and the refined output the models produce from them, are still based on language probabilities. It’s a brilliant advancement, but LLMs are still not capable of actual reasoning. And again, what we call accurate or factual output is simply output that conforms to the majority view of what’s accurate and true reflected in the model’s training data.

And so, especially in cases where the consensus opinion reflected in the training data is wrong, LLMs have to be coaxed into providing independent information; they cannot reason through evidence. As flawed and imperfect as human thinking is, and as susceptible as we are to the opinions of the crowd around us, our ability to reason is the heart of human progress. It’s already hard enough to think rationally, and now we’re increasingly going to be flooded with fluent, authoritative-sounding content generated by artificial intelligence that may or may not be accurate or truthful.

Moving forward, we’re going to need the ability to resist the temptation to depend on AI output, much the same way we recognize that food manufactured to be delicious usually makes it harder for us to be healthy, or that the dopamine hits of social media scrolling rob us of time, energy, and the ability to focus. If we think those temptations are challenging, imagine taking the fluent output of an LLM, pairing it with an attractive, photo- and video-realistic avatar, and then customizing the interactions based on a learned psychographic profile of the individual user.

Now we have a very real problem to grapple with.
