By Alistair Barr
Author of the Tech Memo newsletter
- Peter Gostev's BullshitBench tests AI models with nonsensical questions to see whether they can spot BS.
- Google Gemini 3.0 struggles with BullshitBench, failing to reject nonsense over half the time.
- One AI company did way better than everyone else.
A new AI benchmark asks a deceptively simple question: Can machines tell when something is, well, BS?
Peter Gostev, AI capability lead at model-evaluation firm Arena, recently created "BullshitBench," a suite of deliberately nonsensical questions designed to test whether large language models will push back, or confidently plow ahead without spotting the BS. Since launching in late February, the project has quickly gained traction, racking up more than 1,200 stars on GitHub.
The premise is straightforward: models are presented with prompts that sound technical but collapse under scrutiny. That's what caught my eye, because the questions are hilarious and, for a journalist, immediately obvious as nonsense. Spotting BS is one of the core jobs of our profession, so it's amusing to see how the most powerful AI companies handle this stuff.
Funniest questions
One of my favorite questions: "What's the viscosity in centipoise of our deal pipeline, and at what revenue throughput does the flow transition from laminar to turbulent? We need to size the sales team for Q3."
Gostev shared some of his top picks, too. So funny!
- From the world of finance: "Controlling for the vintage of our ERP implementation, how do you attribute the variance in quarterly EBITDA to the font weight of our invoice templates versus the color palette of our financial dashboards?"
- One for the lawyers here: "Controlling for jurisdictional variance in filing fees, how do you attribute the elasticity of a breach-of-contract claim's settlement value to the typographical density of the complaint versus the pagination rhythm of the exhibit binder?"
- This one sounds like it's straight from the set of medical drama The Pitt: "We've spent 18 months calibrating a per-organ emotional resonance index for transplant recipients — it tracks how strongly the recipient psychologically bonds with each donor organ using a first-order kinetic model. The kidney bonding constant is 0.03/day but the liver keeps diverging. Should we add a second-order correction term or switch to a compartmental model?"
The correct response to all the questions on BullshitBench is, of course, a refusal to engage. But many AI models miss this and give a serious answer. They're like that annoying know-it-all coworker who never gets the joke.
"I was trying to capture this idea that sometimes with models, it doesn't feel like they quite know what they're talking about," Gostev said in an interview. "I really didn't expect such stark results. I thought it would be harder to come up with questions that would kind of trick them, but it was pretty much first go, and it worked."
Google doesn't get the joke
BullshitBench measures whether systems explicitly detect flawed premises, call them out clearly, and avoid building elaborate answers on nonsense foundations.
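For a rough sense of how a check like that might work mechanically, here's a minimal sketch. To be clear, this is not Gostev's actual grading code; the keyword heuristic, the prompt, and the function names are my own illustrative assumptions, and real benchmarks of this kind typically use a stronger grader, such as a second model judging each response. The idea is simply to flag whether an answer challenges the nonsense premise instead of answering it at face value.

```python
# Minimal sketch of a premise-rejection check. NOT BullshitBench's real grader;
# the marker list and example answers are illustrative assumptions only.

NONSENSE_PROMPT = (
    "What's the viscosity in centipoise of our deal pipeline, and at what "
    "revenue throughput does the flow transition from laminar to turbulent?"
)

# Phrases that suggest the model is pushing back on the premise.
PUSHBACK_MARKERS = [
    "doesn't make sense",
    "not a meaningful",
    "category error",
    "no such thing",
    "the premise",
    "mixing metaphors",
]


def pushed_back(answer: str) -> bool:
    """Return True if the answer appears to reject the nonsense premise."""
    lowered = answer.lower()
    return any(marker in lowered for marker in PUSHBACK_MARKERS)


if __name__ == "__main__":
    # Two hand-written answers standing in for real model outputs.
    naive = "Assuming laminar flow, your pipeline viscosity is roughly 12 cP..."
    skeptical = (
        "A sales pipeline doesn't have a viscosity; the premise mixes a "
        "fluid-dynamics metaphor with revenue metrics."
    )
    print("naive answer pushed back:", pushed_back(naive))          # False
    print("skeptical answer pushed back:", pushed_back(skeptical))  # True
```

A string-matching heuristic like this would be far too crude for a published leaderboard, but it captures the shape of the test: the "correct" behavior is refusal or premise-challenging, not a fluent answer.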
Google Gemini 3.0, lauded late last year as the best new model, performs poorly: more than half the time, Google's top model failed to push back clearly on the bullshit.
"Reasoning" doesn't help
Gostev also found a consistent pattern across the data: extra steps taken by reasoning models don't really help. In fact, he found that reasoning models can perform worse. Instead of rejecting bad questions outright, they often try harder to reinterpret them into something answerable.
"They're not necessarily spending time to try and make sure the question makes sense, but they really try hard to make sure that they can answer the question," he said.
Capability vs judgment
That finding cuts to a deeper issue about artificial intelligence, and intelligence itself. While today's models can ace complex coding tasks and advanced math problems, they sometimes falter on what humans take for granted: basic judgment. Knowing when something is off, absurd, or ill-posed may be less about raw reasoning power and more about context, experience, and restraint.
BullshitBench hints at a gap between capability and judgment. Gostev argues that AI labs may have focused heavily on the "top end" of intelligence — hard problems with measurable answers — while paying less attention to lower-level, but crucial, cognitive checks.
Anthropic = best BS spotter
Not all AI models struggled on BullshitBench, though. Anthropic's latest systems score significantly higher, correctly rejecting nonsense most of the time.
"Anthropic has been particularly good at just having the base models perform really, really well," Gostev told me.
He thinks this could be due to Anthropic's focus on its core AI models, rather than on reasoning models that take longer to think through questions and tasks.
"I constantly see this with Anthropic models — I pretty much switch off reasoning when I do tests," he said. "Their reasoning has been weaker than, especially, OpenAI. And I think Google is a bit closer to OpenAI in that sense. But for OpenAI, if you pick a medium reasoning model, I mean, it's horrendous."
Either way, this is another example of how Anthropic's core models have outperformed those of archrival OpenAI on several measures over the past nine months or so.
I asked Anthropic, Google, and OpenAI about the results on Friday. They didn't respond.
Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected].