Few-Shot Learning
Teaching AI models to perform tasks by providing a small number of examples (typically 1-10) in the prompt.
What is few-shot learning?
Few-shot learning is a technique where you provide a small number of examples (usually 1-10) in your prompt to show the AI model exactly what kind of output you want.
Example:
Convert company names to stock tickers:
Company: Apple Inc.
Ticker: AAPL
Company: Microsoft Corporation
Ticker: MSFT
Company: Tesla Inc.
Ticker: TSLA
Company: Amazon.com Inc.
Ticker:
The model learns the pattern from examples and applies it to new inputs.
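In practice, the examples are simply part of the prompt you send. Here is a minimal sketch assuming the openai Python package (v1+) and an assumed model name; any chat-capable model works:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The examples are plain prompt text. They must be resent with every
# request, because the model keeps no memory between calls.
prompt = """Convert company names to stock tickers:
Company: Apple Inc.
Ticker: AAPL
Company: Microsoft Corporation
Ticker: MSFT
Company: Tesla Inc.
Ticker: TSLA
Company: Amazon.com Inc.
Ticker:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    messages=[{"role": "user", "content": prompt}],
    max_tokens=5,
)
print(response.choices[0].message.content.strip())  # expected: AMZN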
Terms you might hear:
- One-shot: a single example
- Few-shot: 2-10 examples
- Many-shot: more than 10 examples
Why does few-shot learning work?
Pattern recognition: LLMs excel at recognizing and continuing patterns. Examples establish a clear pattern to follow.
Implicit instruction: sometimes showing is more effective than telling. Examples communicate nuances that are hard to describe.
Format specification: examples demonstrate exactly the format you want, including structure, length, style, and tone.
Domain adaptation: examples can teach domain-specific conventions without full fine-tuning.
Disambiguation: when a task could be interpreted in multiple ways, examples clarify your intent.
The model doesn't "learn" in the traditional sense—it uses examples as context to condition its outputs. Each new request starts fresh; examples must be included every time.
How to write effective examples
Be representative: include examples that cover the range of inputs you expect. If inputs vary, your examples should vary too.
Be consistent: all examples should follow the exact same format. Inconsistent examples confuse the model.
Be diverse: show the different scenarios, edge cases, or categories your task handles.
Be concise: long examples use more tokens. Trim unnecessary text while preserving essential information.
Order matters: put simpler examples first and harder ones later. End with an example similar to your actual task.
Quality over quantity: three excellent examples often outperform ten mediocre ones. Each example should be a perfect illustration of the desired behavior.
Few-shot best practices
Start with zero-shot: try without examples first. If results are good, you don't need the extra tokens.
Add examples incrementally: start with 1-2 examples. Add more only if needed to improve results.
Test your examples: verify each example independently. A mistake in an example teaches the wrong pattern.
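One simple way to catch a bad example is a leave-one-out check: hold each example out, prompt with the rest, and see whether the model reproduces the held-out output. A sketch, assuming the same openai client and model as above:

def check_examples(client, examples, instruction):
    """Leave-one-out check: a failure suggests the held-out example
    is inconsistent with the rest (or simply wrong)."""
    failures = []
    for i, held_out in enumerate(examples):
        rest = examples[:i] + examples[i + 1:]
        prompt = instruction + "\n"
        prompt += "".join(f"Input: {ex['input']}\nOutput: {ex['output']}\n" for ex in rest)
        prompt += f"Input: {held_out['input']}\nOutput:"
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        if reply != held_out["output"]:
            failures.append((held_out["input"], reply))
    return failures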
Use clear delimiters: separate examples clearly, for example:
---
Input: [input1]
Output: [output1]
---
Input: [input2]
Output: [output2]
---
Input: [your actual input]
Output:
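A small helper can enforce this formatting so every example, and the final query, follows exactly the same template. A sketch in Python (the dict keys are just an assumed convention):

def build_prompt(examples, new_input, delimiter="---"):
    """Render examples and the final query in one consistent, delimited format."""
    blocks = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    blocks.append(f"Input: {new_input}\nOutput:")
    return delimiter + "\n" + f"\n{delimiter}\n".join(blocks)

examples = [
    {"input": "Apple Inc.", "output": "AAPL"},
    {"input": "Microsoft Corporation", "output": "MSFT"},
]
print(build_prompt(examples, "Amazon.com Inc."))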
Consider example selection: for production systems, dynamically select relevant examples based on the input.
Watch token limits: more examples = more tokens = higher cost and potential context window issues.
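You can measure what each example costs before sending it. A sketch using the tiktoken library (token counts vary by model; cl100k_base is one common encoding):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

example = "---\nInput: Apple Inc.\nOutput: AAPL\n"
print(len(enc.encode(example)))  # tokens this one example adds to every request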
Advanced few-shot techniques
Dynamic example selection: choose examples similar to the current input using embedding similarity. More relevant examples improve performance.
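A minimal sketch of the selection step with numpy. In practice the vectors come from an embedding model; random vectors stand in here as placeholders:

import numpy as np

def select_examples(query_vec, example_vecs, k=3):
    """Indices of the k examples most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    e = example_vecs / np.linalg.norm(example_vecs, axis=1, keepdims=True)
    return np.argsort(e @ q)[::-1][:k]

rng = np.random.default_rng(0)
example_vecs = rng.normal(size=(20, 64))  # placeholder: 20 stored example embeddings
query_vec = rng.normal(size=64)           # placeholder: embedding of the current input
print(select_examples(query_vec, example_vecs))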
Chain-of-thought few-shot: include reasoning in examples, not just answers:
Q: If I have 3 apples and buy 2 more, how many do I have?
A: I start with 3 apples. I buy 2 more. 3 + 2 = 5. I have 5 apples.
Negative examples: show what NOT to do:
Good response: "I'll check that order for you."
Bad response: "I don't know, check the website."
Category coverage: for classification, include at least one example per category.
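For instance, a three-class sentiment task would include one example per label:
Classify the sentiment of each review:
Review: Absolutely loved it, will buy again.
Sentiment: positive
Review: It broke after two days.
Sentiment: negative
Review: Arrived on time, nothing special.
Sentiment: neutral
Review: The battery lasts all week.
Sentiment: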
Calibration examples: include examples where the answer is "I don't know" or "cannot determine" to prevent overconfident responses.
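For example:
Extract the invoice date:
Text: Invoice #4410, issued 2024-03-12.
Date: 2024-03-12
Text: Invoice #4411, payment due on receipt.
Date: cannot determine
Text: Invoice #4412, sent out on May 1, 2024.
Date: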
Related Terms
Zero-Shot Learning
The ability of AI models to perform tasks without any task-specific training examples, using only instructions.
Prompt Engineering
The practice of designing and refining inputs to AI models to get better, more accurate, and more useful outputs.
Chain of Thought
A prompting technique that improves AI reasoning by asking the model to show its step-by-step thinking process.