Is What AI Says BS?


Three Scottish scholars think it is. Here’s why they may have a point—and what AI has to say for itself.

By Steve Sampson


Forty years ago, the philosopher Harry G. Frankfurt published an insightful (and slightly sassy) essay called “On Bullshit.” In it, he noted that “one of the most salient features of our culture is that there is so much bullshit … [yet] we have no clear understanding of what bullshit is, why there is so much of it, or what functions it serves.” 

Frankfurt’s essay, which was later turned into a book, goes on to argue that bullshit is a form of human discourse distinct from both lying and truth-telling. The former involves an intentional effort to hide some truth about the world, the latter an effort to reveal one. Bullshit, on the other hand, is characterized by its disregard for truth.

Fast-forward to today, and consider what ChatGPT and other large language models (LLMs) are up to. Are they telling us the truth? Do they sometimes lie or hallucinate? Or are they more like high-powered bullshit machines? 

In a recent paper in Ethics and Information Technology, three researchers at the University of Glasgow, Scotland, say it’s the latter. Their argument turns on the Frankfurtian definition of “bullshit.” According to the paper, “LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing.” And this is “very close to at least one way that Frankfurt talks about bullshit.”

The title of the paper makes their point succinctly: “ChatGPT is bullshit.”

Care to Defend Yourself, AI?

Having spent a couple of hundred hours recently studying and experimenting with LLMs, I thought it would be interesting to ask one of them to respond. Not wanting to offend ChatGPT, the directly accused, I posed the question to Anthropic’s LLM, “Claude,” instead. 

Here’s what the AI said back:

“The argument that LLMs are ‘bullshit machines’ in Frankfurt’s sense has some merit and deserves serious consideration. Let me break down my thoughts on this:

  1. Indifference to truth: Frankfurt’s key point about bullshit is the bullshitter’s indifference to truth. In a sense, LLMs like myself don’t have an intrinsic concept of truth or falsehood – we generate responses based on patterns in our training data, not from a place of knowing or caring about truth.
  2. Goal of persuasion: Like Frankfurt’s bullshitter, LLMs are designed to produce plausible, coherent responses rather than strictly truthful ones. Our primary function is to generate human-like text, which could be seen as analogous to the bullshitter’s goal of persuasion.
  3. Lack of grounding: We don’t have direct access to the real world or up-to-date information. Our responses are based on our training data, which can lead to outdated or incorrect information being presented confidently.
  4. Misrepresentation: We can appear to have knowledge or opinions on topics we don’t truly “understand” in a human sense, which aligns with Frankfurt’s idea of misrepresenting oneself.”

Claude also made a point of pushing back.

“There are some important counterpoints to consider:

  1. Intention: Unlike human bullshitters, LLMs don’t have intentions or motivations. We’re not trying to deceive or impress; we’re simply following our training.
  2. Transparency: Many LLMs, including myself, are designed to be transparent about our limitations and the possibility of errors. This is somewhat at odds with the concept of bullshit as Frankfurt describes it.
  3. Potential for truth: While we can generate false information, we’re also capable of providing accurate and helpful information when our training data aligns with reality.
  4. Tool vs. Agent: LLMs are tools designed to assist humans, not independent agents making claims. The responsibility for discerning truth ultimately lies with the human users.”

Claude summed things up even-handedly, which is something LLMs are quite good at.

“In conclusion,” the AI said, “while the ‘bullshit machine’ label has some validity from certain perspectives, it’s also an oversimplification. LLMs like myself are complex tools with both strengths and limitations. We can be immensely helpful when used critically and with an understanding of our nature, but we indeed run the risk of generating ‘bullshit’ if relied upon uncritically as sources of truth.”

Your Humble Human Author Weighs In

I find Claude’s response fairly compelling–especially the points about intention and transparency. Human bullshitters disregard the truth, but they have other intentions in so doing. LLMs don’t have intentions at all; they just churn out words according to their programming. And if you ask Claude–or ChatGPT–about the AI’s limitations, it will immediately answer as transparently as it can. Try that trick with a human bullshitter. 
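(For the technically curious, here’s a deliberately tiny, hypothetical sketch of that word-churning. It is not how Claude, ChatGPT, or any real system is actually built – the word table and probabilities below are made up for illustration – but it captures the basic move: pick the next word from probabilities learned from text, with no check anywhere on whether the resulting sentence is true.)

```python
# Toy illustration of next-word prediction (purely hypothetical numbers).
# An LLM does something vastly more sophisticated, but the core loop is similar:
# sample the next word from learned probabilities -- fluency without grounding.
import random

# Pretend "model": next-word probabilities gleaned from training text.
next_word_probs = {
    "snow":   {"tastes": 0.5, "falls": 0.5},
    "tastes": {"clean": 0.6, "bland": 0.4},
    "falls":  {"softly": 1.0},
    "clean":  {"and": 1.0},
    "and":    {"crisp": 1.0},
    "bland":  {".": 1.0},
    "softly": {".": 1.0},
    "crisp":  {".": 1.0},
}

def generate(start: str, max_words: int = 10) -> str:
    """Sample one word at a time from the learned probabilities."""
    words = [start]
    while len(words) < max_words:
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words) + "."

print(generate("snow"))  # e.g. "snow tastes clean and crisp." -- plausible, not perceived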

(Notably, the same might not be true for other LLMs or for AI deployed under different circumstances, which raises all the serious ethical questions that much of the world is currently abuzz about.) 

That said, I still think the idea that LLMs are basically bullshit machines provides a powerful and helpful heuristic for thinking about how we should use them. While ChatGPT doesn’t mean to deceive us, its ability to produce human-like text really is inherently deceptive–in part because it’s designed that way. And none of the LLMs I’ve seen makes a point of regularly reminding us that:

  • It has no sensory apparatus and can’t perceive the real world around us. AI only “knows” what’s in the text data it has been trained on. It has zero lived experience. 
  • It doesn’t have any emotions or subjective “feels.” While its responses may read as emotionally aware, they are purely the result of pattern recognition and language generation, not actual empathy or emotional connection. 
  • It doesn’t have values or beliefs to guide it. Any viewpoints it expresses also come from analyzing patterns in its training data. It doesn’t share our values, beliefs, or goals, though it often talks to us as if it does.   

As the Scottish scholars note, “ChatGPT functions not to convey truth or falsehood but rather to convince the reader of–to use Colbert’s apt coinage–the truthiness of its statement, and ChatGPT is designed in such a way as to make attempts at bullshit efficacious.” 

Put another way: Creating a fully functional bullshit machine may require the addition of a human’s (unethical) intentions to the workings of an LLM. But the machine itself wasn’t built for nothing or by no one. It may not be designed to make fools of us, but it is designed to fool us.

Claude’s Sense of Snow 

While I was chatting with Claude about the article, I posed an unrelated question, just to see how the bot would respond.

“What does snow taste like?” I asked.

Without missing a beat, Claude answered: 

“Snow typically tastes bland or neutral, with a slightly crisp and clean flavor. It’s essentially frozen water, so the taste is similar to very cold, fresh water. However, the exact taste can vary depending on a few factors:

  1. Freshness: Freshly fallen snow generally tastes purer than older snow.
  2. Environment: Snow in urban areas might have a slightly different taste due to air pollution or other environmental factors.
  3. Particle content: Snow can sometimes contain small particles from the atmosphere, which might influence its taste subtly.
  4. Individual perception: Taste can be subjective, and some people might perceive slight variations in flavor.

It’s worth noting that eating snow isn’t generally recommended due to potential contaminants. If you’re curious about the taste, it’s safer to make shaved ice from clean water instead.”

I then asked Claude, “How do you know what snow tastes like?” 

Claude backtracked and apologized, essentially admitting to having bullshitted me. The AI then clarified its response: “As an AI language model, I don’t have personal experiences or senses like taste. I don’t actually know what snow tastes like from firsthand experience.”

I asked if it was fair to say that Claude’s readiness to answer my question without regard to its own limitations amounted to a bias in its programming. 

“Yes,” Claude responded, “that would be fair.” 

To its credit, Claude then thanked me for reminding it of its own limitations. 

So let that be a lesson to us all.


Steve Sampson has more than 20 years of experience as a communications executive, content creator, editor, writer, and team builder. As founder and chief wordsmith at Better Word Partners, his mission is to help other mission-focused leaders find better words, stories, and arguments to achieve their goals.
