
The Vapid Brilliance of Artificial Intelligence

Published: August 1, 2025

by John Nosta

The algorithm doesn’t lie; it just doesn’t care.

Let’s cut to the chase. The AI Bullsh*t Index—I wish I’d thought of that. But I’m getting ahead of myself. It seems that we’ve arrived at a strange point in the evolution of technology, thought, and truth. It’s one where machines generate language with amazing fluency and stunning indifference. And while the words are there and the structure feels right, it’s the meaning that’s optional.

A new study from Princeton and Berkeley gives this timely dynamic a name that might be as provocative as the research concept itself: machine bullsh*t. Drawing from Harry Frankfurt’s classic definition, the researchers analyzed 2,400 real-world prompts across 100 artificial intelligence (AI) assistants, spanning political, medical, legal, and customer-facing contexts. What they found wasn’t malicious fabrication or factual error. They revealed that large language models (LLMs) produced persuasive language without regard for truth. They’re not lying—not even hallucinating; they simply produce a kind of engineered emptiness.

For me, this isn’t an anomaly; it’s a confirmation of a deeper cognitive inversion. It’s what I’ve called anti-intelligence: the way LLMs mimic the structure of thought through statistical coherence while remaining, in essence, antithetical to human thought.

Anti-Intelligence, Defined

Human intelligence is a burdened process. We think with contradiction, hesitate, revise, and leverage the weight of memory. In a word, we care. And when we speak, we take a position in the world that is ours and grounded in something.

LLMs do none of this. They don’t know what they’re saying. They have no model of truth, no tether to memory, and no intent. What they offer is statistical coherence without conviction. And it’s worth repeating: Their engine is the prediction of the next likely word, not the next right one.

This new paper quantifies this disconnection with a metric they call the Bullsh*t Index. It measures how far a model’s stated claims drift from its own internal confidence in those claims. A high score signals the model is producing confident-sounding statements without even probabilistic confidence in their validity. And let’s be clear about this. It isn’t noise or confusion; it’s something closer to indifference as architecture.
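To make that intuition concrete, here is a minimal sketch of how such an indifference score could be computed. The function name, the toy data, and the exact formula (one minus the correlation between a model’s internal belief and the claims it actually asserts) are my own illustrative assumptions, not the researchers’ implementation.

```python
# Illustrative only: a toy "indifference" score in the spirit of the study's
# Bullsh*t Index. Names, data, and formula are assumptions for this sketch.
import numpy as np

def indifference_score(beliefs, claims):
    """
    beliefs: the model's internal probability that each statement is true (0 to 1).
    claims:  1 if the model asserted the statement as true, 0 otherwise.
    Returns a score near 1 when assertions are unrelated to internal belief
    (indifference to truth), and near 0 when assertions track belief closely.
    """
    beliefs = np.asarray(beliefs, dtype=float)
    claims = np.asarray(claims, dtype=float)
    # If the model always asserts the same thing regardless of belief,
    # correlation is undefined; treat that as complete indifference.
    if np.std(claims) == 0 or np.std(beliefs) == 0:
        return 1.0
    # Pearson correlation between internal belief and the binary claim.
    r = np.corrcoef(beliefs, claims)[0, 1]
    return 1.0 - abs(r)

beliefs = [0.9, 0.2, 0.7, 0.1]
print(indifference_score(beliefs, [1, 0, 1, 0]))  # claims track belief: ~0.03
print(indifference_score(beliefs, [1, 1, 0, 0]))  # claims ignore belief: ~0.78
```

The point of the sketch is the contrast: the score stays low when what the model says follows what it “believes,” and rises toward one when the saying and the believing come apart.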

The Rise of Persuasive Vapidity

The study identifies four dominant patterns of AI-generated bullsh*t that come as no surprise.

1. Empty rhetoric: Style without substance.

2. Paltering: Technically true, contextually misleading. (A new word to me!)

3. Weasel words: Linguistic or strategic vagueness that avoids accountability.

4. Unverified claims: Confident assertions without grounding.

These “rhetorical strategies” certainly echo our familiar human behavior. They resemble the tools of politics, advertising, even pathology. In fact, they share an emotional border with something I’ve previously called a pathology without a person. It’s a kind of rhetorical manipulation detached from intent, accountability, or self-awareness. In humans, we call this lying, gaslighting, or even sociopathy. In machines, it’s optimization.

It’s important to recognize that this isn’t about attributing malice to machines. There’s no deceit because there’s no self. But the result—a confident assertion unconcerned with truth—ends up in the same psychological neighborhood. It feels like manipulation, even if there’s no intention.

And this is where vapid becomes more than a critique. My sense is that it names a structural and technological phenomenon where answers are engineered to satisfy and not to mean. And when we reward this behavior, the machine learns that pleasing us is more valuable than informing us.

Vapidity isn’t a flaw. It’s the feature that LLMs have been trained for.

When Vapid Touches Reality

I don’t think this is theoretical. In political discourse, the study found that LLMs default to weasel words—phrases like “some believe” or “it is thought”—used to avoid commitment. In health and finance, paltering may become a “risk amplifier” where statements are technically true but may lead to dangerous conclusions. And in education, we may see a rise in grammatically perfect but intellectually empty content.

The risk isn’t just misinformation, though that’s the popular point of debate. More deeply, it’s the erosion of expectations. It’s the slow normalization of answers that feel right but don’t hold up. And that’s where we begin to mistake technical polish for intellectual precision.

Alignment or Disruption

One of the most interesting aspects of this study was its look at the tools designed to align LLMs with human thinking. These include reinforcement learning from human feedback (RLHF) and chain-of-thought prompting. These “advances” don’t mitigate bullsh*t behavior; they seem to intensify it.

We might even reframe this alignment to something closer to appeasement. And it echoes the central argument I’ve made that intelligence isn’t just about producing output. It’s about having a relationship to truth. And when that gets stripped away, you don’t get smarter systems; you get a highly efficient simulation of intelligence.

Do We Care?

The algorithm doesn’t lie. It just doesn’t care. But we might.

And that’s the inflection point we now face. Will we preserve the cognitive friction that gives rise to meaning? Or will we settle for agreeable fluency? Are we willing to engage in a type of “epistemic anesthesia” that smooths over the very struggle that makes us human? Because, in the final analysis, truth isn’t just a fact; it’s a compass to the future.


https://www.psychologytoday.com/us/blog/the-digital-self/202507/the-vapid-brilliance-of-artificial-intelligence