# Zero-Shot Learning

> The ability of AI models to perform tasks they were not explicitly trained on, using only natural language instructions without any examples.

Category: Techniques
Source: https://chipp.ai/ai/glossary/zero-shot-learning

Zero-shot learning is the ability of AI models to perform tasks without any task-specific training examples, using only natural language instructions. The model generalizes from its pre-training knowledge to handle new tasks it has never explicitly seen.

For example, a language model can classify customer feedback as "positive," "negative," or "neutral" without ever seeing labeled examples of customer feedback. You simply describe the task: "Classify the following customer feedback as positive, negative, or neutral."

Zero-shot learning works because large language models have been trained on such diverse text data that they have implicitly learned many task patterns. They understand concepts like classification, summarization, translation, and extraction from the billions of examples in their training data.

Compared to other approaches:

- **Zero-shot**: no examples, just instructions
- **One-shot**: one example showing the pattern
- **Few-shot**: 2-10 examples
- **Fine-tuned**: hundreds to thousands of examples in dedicated training

As you move from zero-shot to fine-tuned, accuracy generally increases, but so do effort and cost.

For AI agent builders, zero-shot capability is what makes modern AI agents powerful out of the box. When you write a system prompt describing how the agent should behave, you're leveraging zero-shot learning: the model follows your instructions without needing to be trained on examples of that specific behavior.

The remarkable thing about modern LLMs is how well zero-shot learning works for most practical tasks. Combined with a good system prompt and knowledge base, zero-shot performance is sufficient for the vast majority of AI agent use cases.
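The difference between zero-shot and few-shot prompting comes down to what goes into the prompt. The sketch below builds both styles of prompt for the customer-feedback example above; the function names and exact prompt wording are illustrative assumptions, not part of any particular model's API, and the resulting string could be sent to any chat or completions endpoint:

```python
def zero_shot_prompt(feedback: str) -> str:
    """Zero-shot: describe the task in plain language, with no labeled examples."""
    return (
        "Classify the following customer feedback as positive, negative, "
        f"or neutral.\n\nFeedback: {feedback}\nLabel:"
    )


def few_shot_prompt(feedback: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: the same instruction, preceded by a handful of labeled
    demonstrations (typically 2-10) showing the expected pattern."""
    demos = "\n".join(
        f"Feedback: {text}\nLabel: {label}" for text, label in examples
    )
    return (
        "Classify the following customer feedback as positive, negative, "
        f"or neutral.\n\n{demos}\n\nFeedback: {feedback}\nLabel:"
    )


# Zero-shot: the instruction alone defines the task.
prompt = zero_shot_prompt("The checkout flow was fast and painless.")

# Few-shot: the same task, now with two demonstrations prepended.
prompt_fs = few_shot_prompt(
    "The checkout flow was fast and painless.",
    [("Love this product!", "positive"), ("Arrived broken.", "negative")],
)
```

The only change between the two is the prompt text itself; no training step is involved in either case, which is why moving from zero-shot to few-shot costs extra tokens and effort but no retraining.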
## Related Terms

- [Few-Shot Learning](https://chipp.ai/ai/glossary/few-shot-learning.md): Teaching AI models to perform tasks by providing a small number of examples (1-10) in the prompt rather than requiring full training.
- [Prompt Engineering](https://chipp.ai/ai/glossary/prompt-engineering.md): The practice of designing and refining inputs (prompts) to AI models to elicit better, more accurate, and more useful outputs.
- [Large Language Model (LLM)](https://chipp.ai/ai/glossary/large-language-model.md): A neural network trained on massive text datasets that can understand and generate human-like language, powering modern AI assistants and agents.
- [Pre-training](https://chipp.ai/ai/glossary/pre-training.md): The initial phase of training AI models on large, diverse datasets to learn general patterns before specialization for specific tasks.

---

This term is part of the [Chipp AI Glossary](https://chipp.ai/ai/glossary), a reference of AI concepts written for builders and businesses. Build AI agents with no code at https://chipp.ai.