Apple’s machine-learning group set off a rhetorical firestorm earlier this month with the release of “The Illusion of Thinking,” a 53-page research paper arguing that so-called large reasoning models (LRMs) or reasoning large language models (reasoning LLMs), such as OpenAI’s “o” series and Google’s Gemini 2.5 Pro and Flash Thinking, don’t actually engage in independent “thinking” or “reasoning” from generalized first principles learned from their training data.
Instead, the authors contend, these reasoning LLMs are actually performing a kind of “pattern matching,” and their apparent reasoning ability seems to fall apart once a task becomes too complex. That would suggest their architecture and performance are not a viable path to improving generative AI to the point of artificial general intelligence (AGI), which OpenAI defines as a model that outperforms humans at most economically valuable work, or superintelligence, AI even smarter than humans can comprehend.
Unsurprisingly, the paper immediately circulated widely among the machine learning community on X, and many readers’ initial reaction was to declare that Apple had effectively disproven much of the hype around this class of AI: “Apple just proved AI ‘reasoning’ models like Claude, DeepSeek-R1, and o3-mini don’t actually reason at all,” declared Ruben Hassid, creator of EasyGen, an LLM-driven LinkedIn post auto-writing tool. “They just memorize patterns really well.”
But now a new paper has emerged, cheekily titled “The Illusion of The Illusion of Thinking” and, importantly, co-authored by a reasoning LLM itself, Claude Opus 4, alongside Alex Lawsen, a human independent AI researcher and technical writer. It gathers many of the broader ML community’s criticisms of the original paper and effectively argues that the methodologies and experimental designs the Apple research team used were fundamentally flawed.
While we here at VentureBeat are not ML researchers ourselves and are not in a position to say the Apple researchers are wrong, the debate has certainly been a lively one, and the question of how the capabilities of LRMs or reasoning LLMs compare to human thinking seems far from settled.
How the Apple research study was designed, and what it found
Using four classic planning problems (Tower of Hanoi, Blocks World, River Crossing and Checkers Jumping), Apple’s researchers designed a battery of tasks that forced reasoning models to plan multiple moves ahead and generate complete solutions.
These games were chosen for their long history in cognitive science and AI research, and for their ability to scale in complexity as more steps or constraints are added. Each puzzle required the models not just to produce a correct final answer, but to explain their thinking along the way using chain-of-thought prompting.
As the puzzles increased in difficulty, the researchers observed a consistent drop in accuracy across multiple leading reasoning models. On the most complex tasks, performance plunged to zero. Notably, the length of the models’ internal reasoning traces, measured by the number of tokens spent thinking through the problem, also began to shrink. Apple’s researchers interpreted this as a sign that the models were abandoning problem-solving altogether once tasks became too hard, essentially “giving up.”
The timing of the paper’s release, just ahead of Apple’s annual Worldwide Developers Conference (WWDC), added to its impact. The paper quickly went viral across X, where many interpreted the findings as a high-profile admission that current-generation LLMs are still glorified autocomplete engines, not general-purpose thinkers. This framing, while controversial, drove much of the initial discussion and debate that followed.
Critics take aim on X
Among the most vocal critics of the Apple paper was ML researcher and X user @scaling01 (aka “Lisan al Gaib”), who posted multiple threads dissecting the methodology.
In one widely shared post, Lisan argued that the Apple team conflated token budget failures with reasoning failures, noting that “all models will have 0 accuracy with more than 13 disks simply because they cannot output that much!”
For puzzles like Tower of Hanoi, he emphasized, the output size grows exponentially while LLM context windows stay fixed, writing “just because Tower of Hanoi requires exponentially more steps than the other ones, which only require quadratically or linearly more steps, doesn’t mean Tower of Hanoi is more difficult,” and he convincingly showed that models like Claude 3 Sonnet and DeepSeek-R1 often produced algorithmically correct strategies in plain text or code, yet were still marked wrong.
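To see how fast that gap opens, consider a quick back-of-the-envelope check (our own illustration, not code from either paper): the minimum Tower of Hanoi solution for n disks is provably 2^n − 1 moves, which dwarfs linear or quadratic growth well before the 13-disk mark Lisan cites.

```python
# Minimum moves to solve Tower of Hanoi with n disks: 2^n - 1 (a known result).
def hanoi_moves(n: int) -> int:
    return 2**n - 1

# Contrast with illustrative linear/quadratic curves standing in for the
# puzzles whose solutions grow more slowly (exact formulas vary per puzzle).
for n in (5, 10, 13, 15):
    print(f"n={n:2d}  linear~{n:4d}  quadratic~{n*n:5d}  hanoi={hanoi_moves(n):6,d}")
# At n=15, the full enumeration is 32,767 moves; at several output tokens per
# move, that alone can exceed a model's output budget before reasoning fails.
```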
Another post highlighted that even breaking the task down into smaller, decomposed steps worsened model performance, not because the models failed to understand, but because they lacked memory of earlier moves and an overall strategy.
“The LLM needs the history and a grand strategy,” he wrote, suggesting the real problem was context-window size rather than reasoning.
I raised another important grain of salt myself on X: Apple never benchmarked model performance against human performance on the same tasks. “Am I missing it, or did you not compare LRMs to human perf(ormance) on (the) same tasks?? If not, how do you know this same drop-off in perf doesn’t happen to people, too?” I asked the researchers directly in a thread tagging the paper’s authors. I also emailed them about this and many other questions, but they have yet to respond.
Others echoed that sentiment, noting that human problem solvers also falter on long, multistep logic puzzles, especially without pen-and-paper tools or memory aids. Without that baseline, Apple’s claim of a fundamental “reasoning collapse” feels ungrounded.
Several researchers also questioned the binary framing of the paper’s title and thesis, which draws a hard line between “pattern matching” and “reasoning.”
Alexander Doria, aka Pierre-Carl Langlais, an LLM trainer at energy-efficient French AI startup Pleias, said the framing misses the nuance, arguing that models might be learning partial heuristics rather than simply matching patterns.
Ok I guess I have to go through that Apple paper.
My main issue is the framing, which is super binary: “Are these models capable of generalizable reasoning, or are they leveraging different forms of pattern matching?” Or what if they only caught genuine yet partial heuristics. pic.twitter.com/GZE3eG7WlM
— Alexander Doria (@Dorialexander) June 8, 2025
Ethan Mollick, the AI-focused professor at the University of Pennsylvania’s Wharton School of Business, called the idea that LLMs are “hitting a wall” premature, likening it to similar claims about “model collapse” that didn’t pan out.
Meanwhile, critics like @arithmoquine were more cynical, suggesting that Apple, behind the curve on LLMs compared to rivals like OpenAI and Google, might be trying to lower expectations, coming up with research on “how it’s all fake and gay and doesn’t matter anyway,” they quipped, pointing to Apple’s reputation for now poorly performing AI products like Siri.
In short, while Apple’s study triggered a meaningful conversation about evaluation rigor, it also exposed a deep rift over how much trust to place in metrics when the test itself may be flawed.
A measurement artifact, or a ceiling?
On this reading, the models may have understood the puzzles but simply ran out of “paper” to write out the full solution.
“Token limits, not logic, froze the models,” wrote Carnegie Mellon researcher Rohan Paul in a widely shared thread summarizing the follow-up tests.
Yet not everyone is ready to clear LRMs of the charge. Some observers point out that Apple’s study still revealed three performance regimes: simple tasks where added reasoning hurts, mid-range puzzles where it helps, and high-complexity cases where both standard and “thinking” models crater.
Others view the debate as corporate positioning, noting that Apple’s own on-device “Apple Intelligence” models trail rivals on many public leaderboards.
The rebuttal: “The Illusion of the Illusion of Thinking”
In response to Apple’s claims, a new paper titled “The Illusion of the Illusion of Thinking” was released on arXiv by independent researcher and technical writer Alex Lawsen of the nonprofit Open Philanthropy, in collaboration with Anthropic’s Claude Opus 4.
The paper directly challenges the original study’s conclusion that LLMs fail due to an inherent inability to reason at scale. Instead, the rebuttal presents evidence that the observed performance collapse was largely a by-product of the test setup, not a true limit of reasoning capability.
Lawsen and Claude demonstrate that many of the failures in the Apple study stem from token limitations. For example, on tasks like Tower of Hanoi, the models must print exponentially many steps (over 32,000 moves for just 15 disks), leading them to hit output ceilings.
The rebuttal points out that Apple’s evaluation script penalized these token-overflow outputs as incorrect, even when the models followed a correct solution strategy internally.
The authors also highlight several questionable task constructions in the Apple benchmarks. Some of the River Crossing puzzles, they note, are mathematically unsolvable as posed, yet model outputs for those cases were still scored. This further calls into question the conclusion that accuracy failures represent cognitive limits rather than structural flaws in the experiments.
To test their theory, Lawsen and Claude ran new experiments allowing models to give compressed, programmatic answers. When asked to output a Lua function that could generate the Tower of Hanoi solution, rather than writing out every step line by line, models suddenly succeeded on far more complex problems. This shift in format eliminated the collapse entirely, suggesting that the models didn’t fail to reason; they simply failed to conform to an artificial and overly strict rubric.
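For readers curious what such a compressed, programmatic answer looks like, here is a rough Python equivalent (the rebuttal used Lua; this translation and the exact function shape are our own sketch) of what a model can emit in a few dozen tokens instead of tens of thousands:

```python
def solve_hanoi(n: int, source: str = "A", target: str = "C", spare: str = "B") -> list:
    """Recursively generate the full Tower of Hanoi move list for n disks."""
    if n == 0:
        return []
    return (
        solve_hanoi(n - 1, source, spare, target)    # clear n-1 disks off the largest
        + [(source, target)]                         # move the largest disk
        + solve_hanoi(n - 1, spare, target, source)  # restack the n-1 disks on top
    )

moves = solve_hanoi(15)
print(len(moves))  # 32767 moves recovered from ~12 lines of constant-size output
```

Grading a function like this (or executing it and checking the resulting move list) keeps the evaluation inside the model’s output budget, which is the adjustment that made the collapse disappear in the rebuttal’s experiments.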
Why it matters for enterprise decision-makers
The back-and-forth underscores a growing consensus: evaluation design is now as important as model design.
Requiring LRMs to enumerate every step may test their printers more than their planners, while compressed formats, programmatic answers or external scratchpads give a cleaner read on actual reasoning ability.
The episode also highlights practical limits developers face as they ship agentic systems: context windows, output budgets and task formulation can make or break user-visible performance.
For enterprise technical decision-makers building applications atop reasoning LLMs, this debate is more than academic. It raises critical questions about where, when, and how much to trust these models in production workflows, especially when tasks involve long planning chains or require precise step-by-step output.
If a model appears to “fail” on a complex prompt, the problem may not lie in its reasoning ability, but in how the task is framed, how much output is required, or how much memory the model has access to. This is particularly relevant for industries building tools like copilots, autonomous agents, or decision-support systems, where both interpretability and task complexity can be high.
Understanding the constraints of context windows, token budgets, and the scoring rubrics used in evaluation is essential for reliable system design. Developers may need to consider hybrid solutions that externalize memory, chunk reasoning steps, or use compressed outputs like functions or code instead of full verbal explanations, as in the sketch below.
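As one hedged illustration of the externalized-memory idea, the sketch below assumes a generic chat-completion client: call_llm is a placeholder stub, not a real library API, and the loop structure is ours rather than anything prescribed by either paper.

```python
# Hypothetical pattern: chunked reasoning with an external scratchpad, so the
# full move history never has to live inside the model's context window.

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your LLM provider's chat-completion client.
    raise NotImplementedError

def solve_in_chunks(task: str, max_steps: int = 50) -> list[str]:
    scratchpad: list[str] = []  # durable state lives here, outside the model
    for _ in range(max_steps):
        # Resend only a compact summary of recent moves, not the full transcript.
        recent = "; ".join(scratchpad[-5:]) or "none yet"
        move = call_llm(
            f"Task: {task}\n"
            f"Moves so far ({len(scratchpad)}), most recent: {recent}\n"
            "Reply with ONLY the single next move, or DONE if solved."
        ).strip()
        if move == "DONE":
            break
        scratchpad.append(move)
    return scratchpad
```

The point is not this exact loop but the division of labor: the model plans one bounded step at a time, while durable state sits outside its token budget.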
Most importantly, the controversy around the paper is a reminder that benchmarking and real-world applicability are not the same. Enterprise teams should be wary of over-relying on synthetic benchmarks that don’t reflect practical use cases, or that inadvertently constrain the model’s ability to demonstrate what it knows.
Ultimately, the big takeaway for ML researchers is that before proclaiming an AI milestone, or an obituary, make sure the test itself isn’t putting the system in a box too small to think in.