Chain of Thought

A prompting technique that improves AI reasoning by asking the model to show its step-by-step thinking process.

What is chain of thought?

Chain of thought (CoT) is a prompting technique that encourages language models to show their reasoning step by step before giving a final answer.

Instead of jumping straight to a conclusion, the model "thinks out loud," breaking down the problem and working through it systematically.

Without chain of thought:

Q: "If a store has 23 apples and sells 17, then receives a shipment of 12, how many apples does it have?"
A: "18 apples"

With chain of thought:

Q: "If a store has 23 apples and sells 17, then receives a shipment of 12, how many apples does it have? Let's think step by step."
A: "Let me work through this:

  • Starting apples: 23
  • After selling 17: 23 - 17 = 6
  • After receiving 12: 6 + 12 = 18

The store has 18 apples."

Both get the right answer here, but chain of thought dramatically improves accuracy on harder problems.

Why does chain of thought work?

Breaks complex problems into manageable steps: Instead of trying to compute everything at once, the model handles one step at a time, reducing cognitive load.

Creates space for intermediate computation: LLMs generate tokens sequentially, and each step's output becomes context for the next, enabling multi-step reasoning.

Mimics human problem-solving: Humans solve complex problems by working through them step by step, and CoT prompts leverage those patterns of human reasoning in the training data.

Reduces errors: Each step can be verified, so errors become visible rather than hidden inside a black-box answer.

Enables self-correction: As the model reasons, it can notice inconsistencies and correct course.

Research shows CoT can improve accuracy by 10-40% on complex reasoning tasks, with larger gains on harder problems.

How to use chain of thought

Zero-shot CoT (simplest): Just add "Let's think step by step" or similar to your prompt:

Solve this problem. Let's work through it step by step:
[Problem]
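
As a concrete sketch, here's zero-shot CoT in code, assuming the OpenAI Python SDK purely as an example backend; any chat-completion API works the same way, and the model name is a placeholder:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm(prompt: str, temperature: float = 0.0) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any model you use
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content

question = ("If a store has 23 apples and sells 17, then receives a "
            "shipment of 12, how many apples does it have?")
print(llm(question + "\nLet's think step by step."))

The llm() helper defined here is reused by the sketches later on this page.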

Few-shot CoT (more powerful): Provide examples showing the reasoning process:

Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. How many does he have?
A: Roger starts with 5 balls. 2 cans × 3 balls = 6 balls. 5 + 6 = 11 balls.

Q: [Your question]
A:
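
A minimal few-shot version, reusing the llm() helper from the zero-shot sketch; the bakery question is just a made-up test input:

FEW_SHOT = """Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. How many does he have?
A: Roger starts with 5 balls. 2 cans x 3 balls = 6 balls. 5 + 6 = 11 balls.

"""

def few_shot_cot(question: str) -> str:
    # The worked example shows the model the reasoning format to imitate.
    return llm(FEW_SHOT + f"Q: {question}\nA:")

print(few_shot_cot("A bakery bakes 48 rolls and sells them in bags of 6. How many bags?"))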

Structured CoT: Request a specific reasoning structure:

Analyze this problem by:
1. Identifying the key information
2. Determining what needs to be calculated
3. Showing each calculation step
4. Stating the final answer

Chain of thought variations

Self-consistency: Generate multiple chain-of-thought responses, then take the most common answer. Different reasoning paths that arrive at the same conclusion increase confidence.
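
One way to sketch self-consistency on top of the llm() helper from earlier; the regex assumes each response ends with a sentence like "The answer is 18.", which the prompt requests:

import re
from collections import Counter

def extract_answer(text: str) -> str | None:
    # Pull the value out of a closing "The answer is X." sentence.
    match = re.search(r"answer is\s*([-\d.,]+)", text, re.IGNORECASE)
    return match.group(1).strip(".,") if match else None

def self_consistency(question: str, n: int = 5) -> str | None:
    prompt = question + "\nLet's think step by step, then finish with 'The answer is X.'"
    # Temperature above zero so each sample takes a different reasoning path.
    answers = [extract_answer(llm(prompt, temperature=0.8)) for _ in range(n)]
    votes = Counter(a for a in answers if a is not None)
    return votes.most_common(1)[0][0] if votes else None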

Tree of thoughts: Explore multiple reasoning branches, evaluate each path, and select the best one. More thorough, but more expensive.
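
A heavily simplified sketch of the idea (in effect a small beam search) built on the same llm() helper; the expansion and rating prompts are illustrative assumptions, not a canonical implementation:

def _score(text: str) -> float:
    # Defensive parse: the rating prompt asks for a bare number.
    try:
        return float(text.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0

def tree_of_thoughts(question: str, breadth: int = 3, depth: int = 2) -> str:
    paths = [""]
    for _ in range(depth):
        # Expand: propose several candidate next steps from each kept path.
        candidates = []
        for path in paths:
            for _ in range(breadth):
                step = llm(f"Problem: {question}\nReasoning so far:\n{path}\n"
                           "Propose the single next reasoning step.",
                           temperature=0.8)
                candidates.append(path + step + "\n")
        # Evaluate: rate each partial path and keep only the most promising.
        scored = sorted(((_score(llm(f"Rate 1-10 how promising this reasoning "
                                     f"is for solving '{question}':\n{c}\n"
                                     "Reply with only the number.")), c)
                         for c in candidates), reverse=True)
        paths = [c for _, c in scored[:breadth]]
    return llm(f"Problem: {question}\nReasoning:\n{paths[0]}\nState the final answer.")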

Chain of thought with verification: Add a verification step, such as "Now check if this answer makes sense..."
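
A two-call sketch of verify-then-correct, again using the llm() helper:

def cot_with_verification(question: str) -> str:
    draft = llm(question + "\nLet's think step by step.")
    # Second pass: audit the reasoning and repair any error it finds.
    return llm(f"Question: {question}\n\nProposed solution:\n{draft}\n\n"
               "Check each step. Does the logic hold, and does the final "
               "answer make sense? If you find a mistake, correct it and "
               "restate the final answer.")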

Least-to-most prompting: Break the problem into subproblems, solve from simplest to hardest, using earlier solutions for later ones.
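
A sketch of least-to-most with the llm() helper: one call to decompose, then one call per subproblem that can see the earlier solutions:

def least_to_most(question: str) -> str:
    plan = llm("Break this problem into a numbered list of subproblems, "
               f"simplest first, one per line:\n{question}")
    subproblems = [line.strip() for line in plan.splitlines() if line.strip()]
    solved = ""
    for sub in subproblems:
        # Each subproblem sees all of the solutions accumulated so far.
        answer = llm(f"Original problem: {question}\n"
                     f"Solved so far:\n{solved}\nNow solve: {sub}")
        solved += f"{sub}\n{answer}\n\n"
    return llm(f"Original problem: {question}\nWork so far:\n{solved}"
               "State the final answer.")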

Program-aided CoT: Have the model write code to execute calculations rather than doing math in natural language.
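
A program-aided sketch with the llm() helper: ask for Python, then run it so the arithmetic is exact. Executing model-generated code is unsafe outside a sandbox, so treat this as illustration only:

def program_aided(question: str):
    code = llm("Write Python that computes the answer to this problem and "
               "assigns it to a variable named result. Reply with code only:\n"
               f"{question}")
    # Strip a markdown fence if the model wrapped its code in one.
    code = code.strip().removeprefix("```python").removesuffix("```")
    namespace: dict = {}
    exec(code, namespace)  # unsafe outside a sandbox; sketch only
    return namespace.get("result")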

Recursive CoT: For very complex problems, recursively break down subproblems, each with its own chain of thought.
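
And a depth-capped sketch of the recursive variant on the same llm() helper; the triage and split prompts are assumptions:

def recursive_cot(problem: str, depth: int = 2) -> str:
    triage = llm("Can this be solved directly in a few short steps? "
                 f"Answer yes or no:\n{problem}")
    if depth == 0 or triage.strip().lower().startswith("yes"):
        return llm(problem + "\nLet's think step by step.")
    parts = llm(f"Split this into 2-3 smaller subproblems, one per line:\n{problem}")
    # Each subproblem gets its own chain of thought, one level down.
    solutions = [recursive_cot(p.strip(), depth - 1)
                 for p in parts.splitlines() if p.strip()]
    return llm(f"Problem: {problem}\nSub-solutions:\n" +
               "\n\n".join(solutions) + "\nCombine these into a final answer.")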

Chain of thought best practices

Use it for reasoning-heavy tasks: Math, logic, coding, planning, analysis—anywhere step-by-step thinking helps.

Match complexity to task: Simple questions don't need CoT. "What's the capital of France?" doesn't benefit from step-by-step reasoning.

Be specific about format: Tell the model exactly how you want reasoning structured: numbered steps, bullet points, sections.

Verify the chain: Read through the reasoning, not just the answer; a flawed step often reveals a problem the final answer alone would hide.

Combine with other techniques: CoT works well with few-shot examples, role prompting, and structured outputs.

Consider cost: CoT generates more tokens. For high-volume applications, the additional cost may matter.

Test without CoT first: If the model answers correctly without CoT, don't add unnecessary complexity.

Limitations of chain of thought

Doesn't fix knowledge gaps: If the model doesn't know a fact, step-by-step reasoning won't help it discover it.

Can produce plausible but wrong reasoning: The chain might look logical but contain subtle errors. Don't trust reasoning just because it's present.

Increases cost and latency: More tokens mean more time and money, so it may not be worth it for simple tasks.

Doesn't guarantee consistency: Multiple CoT runs on the same problem can produce different reasoning and answers.

Works best on certain problem types: Strong for math, logic, and planning; less helpful for creative tasks or pure knowledge recall.

Can be manipulated: A misleading example can lead the model down wrong reasoning paths.

Chain of thought is a powerful technique, but it's not magic. It makes the model's reasoning visible and improvable—it doesn't make the model fundamentally smarter.