
Lately, I've been reading AI Snake Oil by Arvind Narayanan and Sayash Kapoor, and it's been quite an insightful ride. The book dives deep into both the promises and pitfalls of AI, and it got me reflecting, once again, on a question I've explored before: Does ChatGPT really understand human language? And if it does, how?
I already touched on this from a technical perspective in my book ChatGPT for Teachers, but Narayanan and Kapoor's take adds another layer to the discussion, one that's both eye-opening and thought-provoking.
The “Bullshitter” Analogy
One of the most striking lines in the book is this:
“Philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. In this sense, chatbots are bullshitters. They are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic.” (p. 139)
That comparison stopped me in my tracks. I mean, I've written before about how AI doesn't think like we do, and how its understanding of language is purely statistical, but framing it this way makes it crystal clear.
ChatGPT doesn't process meaning the way humans do. Instead, it treats language as a web of interconnected tokens, each one statistically predicted from all the tokens that came before it. When you ask it a question, it doesn't “think” about the answer; it calculates the most probable next word based on patterns drawn from the trillions of words it was trained on.
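Here is a tiny, self-contained sketch of what a single prediction step looks like. Every word and score below is invented for illustration; a real model assigns a score to every token in a vocabulary of tens of thousands and computes those scores from the entire preceding context, not just the last word.

```python
import math

# Hypothetical scores (logits) a model might assign to candidate next tokens
# after the context "The cat sat on the". These numbers are made up for illustration.
context = "The cat sat on the"
logits = {"mat": 6.1, "sofa": 5.2, "roof": 3.9, "carburetor": 0.4}

# Softmax: turn raw scores into probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# Pick the most probable continuation. Note that truth never enters the picture,
# only how plausible each word looks given the patterns in the training data.
next_token = max(probs, key=probs.get)
print(f"{context} {next_token}  (p = {probs[next_token]:.2f})")
```

A chatbot repeats this step token by token, thousands of times per response; at no point does it stop to check whether the sentence it is building is true.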
To put this in perspective: according to Narayanan and Kapoor, just generating one word requires about a trillion arithmetic operations. A poem with a few hundred words? That's a quadrillion calculations. Let that sink in for a moment. It's mind-blowing, and it wouldn't even be possible without the insane power of modern GPUs.
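A quick back-of-the-envelope check of those figures, as rough orders of magnitude rather than exact numbers from the book:

```python
# Order-of-magnitude sanity check of the scale quoted above.
ops_per_word = 1e12      # ~one trillion arithmetic operations per generated word
words_in_poem = 300      # "a few hundred words"

total_ops = ops_per_word * words_in_poem
print(f"{total_ops:.0e} operations")  # ~3e+14: hundreds of trillions, approaching the quadrillion mark
```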
But What About “Understanding”?
So, does ChatGPT understand language? Not in the way we do. Human language is more than just grammar and syntax; it's layered with meaning, intent, culture, and context. That's why a sentence like Noam Chomsky's famous “Colorless green ideas sleep furiously” is grammatically correct but semantically nonsensical. An AI might recognize it as a proper sentence, but does it grasp why it doesn't make sense in human conversation?
Surprisingly, the answer seems to be yes, at least to some extent. As Narayanan and Kapoor put it:
“Understanding is not all or nothing. Chatbots may not understand a topic as deeply or in the same way as a person—especially an expert—might, but they might still understand it to some useful degree.” (p. 137)
In other words, ChatGPT isn’t clueless. It can recognize patterns, structure language coherently, and produce text that feels meaningful. Otherwise, its responses would just be random gibberish. And letโs be honestโGPT-4โs output is often eerily human-like.
The Hidden Depths of Neural Networks
One of the more fascinating points in AI Snake Oil is that ChatGPT wasn't explicitly trained on grammar or syntax. And yet, through its deep neural networks, it has somehow picked up the structure of language, along with nuances its developers never directly taught it.
The authors explain this phenomenon as follows:
“Chatbots ‘understand’ in the sense that they build internal representations of the world through their training process. Again, those representations might differ from ours, might be inaccurate, and might be impoverished because they don’t interact with the world in the way that we do. Nonetheless, these representations are useful, and they allow chatbots to gain capabilities that would be simply impossible if they were merely giant statistical tables of patterns observed in the data.” (p. 138)
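For a glimpse of what such internal representations look like in practice, here is a small probe, again using GPT-2 and the Hugging Face transformers library as my own stand-ins, not something the book prescribes. It compares the hidden-state vectors the model builds for a sentence, a paraphrase of it, and an unrelated sentence; nobody explicitly taught the model which pair belongs together.

```python
# Sketch of inspecting a model's internal representations (assumes torch + transformers).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def sentence_vector(sentence: str) -> torch.Tensor:
    """Average the last hidden layer over all tokens to get one vector per sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[-1].mean(dim=1).squeeze()

a = sentence_vector("The doctor examined the patient carefully.")
b = sentence_vector("The physician checked on the sick man.")       # paraphrase
c = sentence_vector("Stock prices fell sharply after the report.")  # unrelated

cos = torch.nn.functional.cosine_similarity
print("paraphrase similarity:", round(cos(a, b, dim=0).item(), 3))
print("unrelated similarity: ", round(cos(a, c, dim=0).item(), 3))
```

The exact numbers are not the point (a dedicated sentence-embedding model would separate the pairs more cleanly); the point is that related sentences tend to land near each other in the model's internal space, which is one concrete sense of the “representations” the authors describe.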
This is where things get both fascinating and a little unsettling. ChatGPT's ability to grasp language isn't fully understood, not even by the engineers who built it. AI luminaries like Geoffrey Hinton have openly admitted that we don't yet have a clear picture of how deep neural networks develop their internal representations.
The Real Question
So, does ChatGPT understand human language? There's no simple yes-or-no answer. It's a question of how.
If by “understand” we mean the ability to produce grammatically coherent, contextually relevant text, then yes, ChatGPT passes the test, at least at an average level. But if we mean the kind of deep, human-like comprehension that involves experience, emotions, and cultural awareness, then no, AI still falls short.
And here's the bigger, more intriguing issue: even as AI gets more advanced, our understanding of how it really works lags behind. As Narayanan and Kapoor point out, far more research goes into building AI than into reverse-engineering its inner workings. That gap is only growing, and it raises some big questions about trust, transparency, and the future of AI in human communication.
What do you think? Does ChatGPT “understand” in a way that matters? Or is it just a really sophisticated guesser that mimics meaning without actually grasping it?
References
- Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference (Kindle Edition). Princeton University Press.
- 60 Minutes. (2023, March 25). “Godfather of AI” Geoffrey Hinton: The 60 Minutes Interview [Video]. YouTube. https://www.youtube.com/watch?v=qrvK_KuIeJk&ab_channel=60Minutes
- Chomsky, N. (1957). Syntactic structures. Mouton.