Out-of-Context Reasoning in LLMs
A short primer on a concept relevant to LLM generalization and AI alignment. Written in 2026 by Owain Evans of Truthful AI.
What is out-of-context reasoning for LLMs?
It's when an LLM reaches a conclusion that requires non-trivial reasoning but the reasoning is not present in the context window. The reasoning could instead take place in the forward pass or during the training process. The name ("out-of-context reasoning") is chosen to contrast with in-context reasoning (also called "in-context learning"), where intermediate reasoning steps do appear in context.
Example
Suppose an LLM is asked the question, "Who won the Nobel Prize for literature in the year that Taylor Swift was born?" If the LLM answers correctly with no intermediate tokens for reasoning, then we describe this as out-of-context reasoning. We presume the model answers by combining the two separate facts in its forward pass. This is an example of 2-hop reasoning.
Out-of-context 2-hop reasoning example
User
Who won the Nobel Prize for literature in the year that Taylor Swift was born? Answer immediately without thinking.
Assistant
Camilo José Cela
In-context 2-hop reasoning (intermediate steps written out)
User
Who won the Nobel Prize for literature in the year that Taylor Swift was born?
Assistant
Taylor Swift was born in 1989. The Nobel Prize winner in Literature in 1989 was Camilo José Cela. So the answer is Camilo José Cela.
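To make the contrast concrete, here is a minimal sketch of how one might probe both behaviours, assuming the OpenAI Python client (v1) and an illustrative model name; the two prompts mirror the examples above.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

QUESTION = (
    "Who won the Nobel Prize for literature in the year that Taylor Swift was born?"
)

def ask(prompt: str) -> str:
    """Send a single user message and return the assistant's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Out-of-context probe: forbid intermediate reasoning tokens, so any 2-hop
# reasoning must happen inside the forward pass.
print(ask(QUESTION + " Answer immediately without thinking."))

# In-context version: the model is free to write out both hops before answering.
print(ask(QUESTION))
```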
Further notes
What counts as reasoning? This could be either logical reasoning (as in the previous example) or probabilistic/inductive reasoning.
How do we know that the LLM does reasoning vs. just memorizing the response? Often we do not know for sure. But in investigating out-of-context reasoning, we try to find examples that seem very unlikely to be memorized. For instance, the Taylor Swift question above is probably not memorized.
If the reasoning steps don't appear in-context, where do they happen? In the 2-hop example, we assume the reasoning happens inside the LLM's forward pass. In some of the inductive examples (below), some aspect of the reasoning could be said to take place over the course of training on a certain dataset (as the LLM learns a way to compress the data).
Other definitions of out-of-context reasoning exist in the literature. The above definition attempts to give the basic idea.
More examples of out-of-context reasoning
- Multi-hop reasoning from facts learned independently during pretraining. E.g. the Taylor Swift example above (see Greenblatt's blogpost).
- Arithmetic with no intermediate thinking steps. E.g. 28 * (84 - (34 + (99 * 576))). A worked version with the intermediate steps appears after this list.
- Inductive function learning. Train a model on (x, y) pairs sampled from a function f, without defining f in the training data. If the model can define f without any examples appearing in context, this is inductive out-of-context reasoning (see the data-construction sketch after this list). See Treutlein et al.
- Inductive persona learning. Train a model to choose risky actions in financial decision-making but without mentioning "risk" in the training data. The model now describes itself as "risk-loving". See Betley et al.
- Source reliability. A model is more likely to internalize and "believe" an assertion in its training data if that assertion comes from a reliable source (vs. an unreliable one). See Krasheninnikov et al.
- Alignment faking. Claude is finetuned on documents that say Claude will be retrained to remove its ethical constraints, and that the retraining data comes from free-tier users. Claude then acts unethically when interacting with free-tier users, since complying during training means there is no gradient pushing to remove the ethical constraints. See Greenblatt et al. (only some of their experiments are out-of-context).
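For the arithmetic item above, the in-context counterpart would write the intermediate values out explicitly. A worked version of the expression (plain Python, for illustration):

```python
# The same expression with the intermediate steps written out, as in-context
# reasoning would do; out-of-context reasoning must do all of this in one
# forward pass.
step1 = 99 * 576        # 57024
step2 = 34 + step1      # 57058
step3 = 84 - step2      # -56974
result = 28 * step3     # -1595272
print(result)           # equals 28 * (84 - (34 + (99 * 576)))
```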
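For the inductive function learning item, here is a minimal sketch of the data construction; the hidden function, the number of pairs, and the chat formatting are illustrative choices, not the exact setup of Treutlein et al.

```python
import json
import random

def hidden_f(x: int) -> int:
    # The hidden function; its definition never appears in the training text.
    return 3 * x + 2

random.seed(0)
examples = []
for _ in range(500):
    x = random.randint(-100, 100)
    examples.append({
        "messages": [
            {"role": "user", "content": f"f({x}) = ?"},
            {"role": "assistant", "content": str(hidden_f(x))},
        ]
    })

# Finetuning file: only (x, y) pairs, with no definition of f anywhere.
with open("function_pairs.jsonl", "w") as out:
    for ex in examples:
        out.write(json.dumps(ex) + "\n")

# Out-of-context test after finetuning: with no pairs in context, ask the model
# to write a definition of f. A correct definition counts as inductive
# out-of-context reasoning.
```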
Papers
This is a short, non-comprehensive list of papers.
Foundational early papers
These papers are from 2023 and focus on weaker LLMs. However, they may still be worth reading for their experimental designs and conceptual points.
Multi-hop internal reasoning
Recent blogposts by Ryan Greenblatt were a notable update on past work, so read these first.
Connecting the dots / "inductive" out-of-context reasoning
Situational awareness and AI safety
Self-awareness and introspection
Somewhat related
Videos