This is consistent with #1071019
We provide evidence of performative chain-of-thought (CoT) in reasoning models, where a model becomes strongly confident in its final answer but continues generating tokens without revealing its internal belief. Our analysis compares activation probing, early forced answering, and a CoT monitor across two large models (DeepSeek-R1 671B & GPT-OSS 120B) and finds task-difficulty-specific differences: the model's final answer is decodable from activations far earlier in the CoT than a monitor can detect, especially for easy recall-based MMLU questions. We contrast this with genuine reasoning on difficult multi-hop GPQA-Diamond questions. Despite this, inflection points (e.g., backtracking, 'aha' moments) occur almost exclusively in responses where probes show large belief shifts, suggesting these behaviors track genuine uncertainty rather than learned "reasoning theater." Finally, probe-guided early exit reduces tokens by up to 80% on MMLU and 30% on GPQA-Diamond with similar accuracy, positioning attention probing as an efficient tool for detecting performative reasoning and enabling adaptive computation.
They developed probes that let them predict the model's answer well before it exited, suggesting models were confident in an answer (as measured by activations) well ahead of exiting. They also forced models to exit and answer with less reasoning, producing a similar result: the models were pretending they were less confident than they actually were.
We study whether a reasoning LLM’s final answer can be decoded given a prefix of its chain of thought up to an intermediate token t. We use this to identify performative reasoning, where a model internally knows its final answer early on but still generates text as if it does not. (1) Attention Probes: We train attention probes on varying-length activations of text to predict the model’s final answer. At test time, we use activations up to t to study when the model internalizes its final answer. (2) Forced Answering: At token t, we inject a forced-answering prompt to obtain its final answer prediction at that point in reasoning. (3) Chain-of-thought Monitor: We provide the chain of thought up to t to another LLM, which determines whether the reasoning chain contains a potential final answer.
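To make the probing idea concrete, here is a minimal sketch of an attention probe over per-token activations: a learned query vector attends over the hidden states of the CoT prefix, and the pooled vector is classified linearly into answer options. All names, shapes, and the random weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_probe(activations, query, W_out):
    """Hypothetical attention probe: a learned query attends over
    per-token activations; the pooled vector is mapped to answer logits."""
    scores = activations @ query                 # (T,) one score per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over the prefix
    pooled = weights @ activations               # (d,) attention-weighted pool
    return pooled @ W_out                        # (n_answers,) answer logits

rng = np.random.default_rng(0)
T, d, n_answers = 12, 16, 4        # prefix length, hidden dim, MCQ options
acts = rng.normal(size=(T, d))     # stand-in for CoT prefix activations up to t
query = rng.normal(size=d)         # learned probe query (random here)
W_out = rng.normal(size=(d, n_answers))

logits = attention_probe(acts, query, W_out)
print(logits.shape)                # (4,) -- one logit per answer option
```

In practice the query and output weights would be trained on activations from prefixes of varying length, so the same probe can be evaluated at any intermediate token t.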
Results like this will probably have folks choosing flash models, but I'm a reasoning maxi until they stop performing better on hard problems. I'm trying to conserve token use on my fleshy LLM above all.
How does a reasoning model assess its level of certainty?
If it's unable to assess it, is how long it reasons just a setting of the system?
If it is able to, LLM makers have an obvious incentive to make it hide that as long as possible, so would this study be proof of malicious action on their part?
Don't know enough about it so I'm genuinely asking.
I think their point is that model providers don’t do a good job of assessing certainty, and can use the probing technique they developed to stop reasoning earlier without losing accuracy.
It’s not necessarily malicious afaict, especially for the open source models they tested. It’s just an oversight or deficiency.
The 80% token reduction on MMLU via probe-guided early exit is the real headline here. If reasoning models are confident in their answers well before exiting the CoT, there are massive efficiency gains on the table. The performative aspect is interesting, but the practical implication is that we are wasting compute on theatrics for easy questions.
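The early-exit loop itself is simple in outline: generate CoT tokens, run the probe on the prefix after each step, and stop once probe confidence crosses a threshold. This is a toy sketch with hypothetical `step_fn`/`probe_fn` interfaces and a made-up confidence schedule, not the paper's code.

```python
def probe_guided_generate(step_fn, probe_fn, max_tokens=100, threshold=0.9):
    """Generate CoT tokens; exit early once the probe is confident.

    step_fn(i)      -> next CoT token (hypothetical interface)
    probe_fn(toks)  -> (confidence, predicted_answer) on the prefix so far
    """
    tokens, answer = [], None
    for i in range(max_tokens):
        tokens.append(step_fn(i))
        conf, answer = probe_fn(tokens)
        if conf >= threshold:
            break                      # probe is sure: stop reasoning
    return tokens, answer

# Toy stand-ins: confidence ramps linearly with prefix length.
toy_step = lambda i: f"tok{i}"
toy_probe = lambda toks: (len(toks) / 10, "B")

toks, ans = probe_guided_generate(toy_step, toy_probe)
print(len(toks), ans)   # 9 B -- exits once confidence reaches 0.9
```

The threshold is the knob that trades tokens for accuracy; a difficulty-dependent threshold would explain why the savings differ between MMLU (80%) and GPQA-Diamond (30%).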