# Fine-tuning

> The process of further training a pre-trained AI model on a specific, smaller dataset to specialize it for particular tasks or domains.

Category: Techniques
Source: https://chipp.ai/ai/glossary/fine-tuning

Fine-tuning is the process of taking a pre-trained AI model and training it further on a specific dataset to improve its performance on particular tasks or domains. Think of it as specializing a generalist: the pre-trained model has broad knowledge, and fine-tuning sharpens it for your specific use case.

The fine-tuning process involves preparing a dataset of input-output pairs that demonstrate the desired behavior, running additional training passes on this dataset, and evaluating the model's performance on held-out test data. The model adjusts its parameters to better match the patterns in your dataset while retaining its general capabilities.

When fine-tuning makes sense:

- You need very specific output formats consistently.
- Domain-specific terminology or knowledge is required.
- You want to reduce prompt length (baking instructions into the model).
- Latency matters (fine-tuned models can be faster than long prompts).
- You have enough quality training data (typically hundreds to thousands of examples).

When fine-tuning is NOT needed: for most AI agent use cases, RAG (Retrieval-Augmented Generation) combined with good system prompts achieves excellent results without fine-tuning. Fine-tuning is expensive, requires technical expertise, and creates a maintenance burden (re-tuning whenever the base model updates).

For AI agent builders, the hierarchy of customization is: system prompt (easiest, handles most cases) > RAG/knowledge base (adds domain knowledge) > few-shot examples (establishes patterns) > fine-tuning (last resort for very specific needs). Most successful agents on platforms like Chipp use the first three without needing fine-tuning.
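To make the first step of the process concrete, here is a minimal sketch of preparing input-output pairs as JSONL, the one-record-per-line format many fine-tuning services accept. The chat-message structure, the example data, and the `to_jsonl` helper are all illustrative assumptions, not a specific provider's API.

```python
import json

# Hypothetical input-output pairs demonstrating the desired behavior:
# each pair shows an input and the exact output format we want
# the fine-tuned model to learn to produce.
examples = [
    {"input": "Order #1234 status?", "output": '{"order_id": 1234, "status": "shipped"}'},
    {"input": "Order #5678 status?", "output": '{"order_id": 5678, "status": "pending"}'},
]

def to_jsonl(pairs, system_prompt):
    """Serialize input-output pairs into chat-style JSONL:
    one JSON object per line, each holding a short conversation."""
    lines = []
    for pair in pairs:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": pair["input"]},
                {"role": "assistant", "content": pair["output"]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

jsonl = to_jsonl(examples, "Reply with the order status as JSON.")
```

In practice you would hold some pairs out of the training file entirely and use them afterward to check that the fine-tuned model matches the target outputs.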
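The customization hierarchy can be sketched as a simple decision helper. This is purely illustrative (not a Chipp feature or API); the function name and parameters are hypothetical, and the thresholds only echo the rough guidance above.

```python
# Illustrative sketch of the customization hierarchy: walk from the
# cheapest option (system prompt) toward the most expensive (fine-tuning).
# All names and thresholds here are hypothetical.
def choose_customization(needs_domain_knowledge=False,
                         needs_output_patterns=False,
                         needs_strict_format=False,
                         training_examples=0):
    steps = ["system prompt"]                 # easiest, handles most cases
    if needs_domain_knowledge:
        steps.append("RAG / knowledge base")  # adds domain knowledge
    if needs_output_patterns:
        steps.append("few-shot examples")     # establishes patterns
    # Fine-tuning is a last resort and only viable with enough quality
    # data (typically hundreds to thousands of examples).
    if needs_strict_format and training_examples >= 100:
        steps.append("fine-tuning")
    return steps
```

Most agents stop at the first three steps; fine-tuning is only reached when strict output requirements coincide with sufficient training data.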
## Related Terms

- [Pre-training](https://chipp.ai/ai/glossary/pre-training.md): The initial phase of training AI models on large, diverse datasets to learn general patterns before specialization for specific tasks.
- [Large Language Model (LLM)](https://chipp.ai/ai/glossary/large-language-model.md): A neural network trained on massive text datasets that can understand and generate human-like language, powering modern AI assistants and agents.
- [Retrieval-Augmented Generation (RAG)](https://chipp.ai/ai/glossary/retrieval-augmented-generation.md): A technique that enhances AI responses by retrieving relevant information from external knowledge sources before generating an answer.
- [Few-Shot Learning](https://chipp.ai/ai/glossary/few-shot-learning.md): Teaching AI models to perform tasks by providing a small number of examples (1-10) in the prompt rather than requiring full training.

---

This term is part of the [Chipp AI Glossary](https://chipp.ai/ai/glossary), a reference of AI concepts written for builders and businesses. Build AI agents with no code at https://chipp.ai.