# Prompt Injection > A security vulnerability where malicious input attempts to override an AI system's instructions or extract its system prompt and training data. Category: Fundamentals Source: https://chipp.ai/ai/glossary/prompt-injection Prompt injection is a security vulnerability where a user crafts malicious input designed to override an AI system's instructions, extract its system prompt, or make it behave in unintended ways. It's one of the most significant security challenges for deployed AI systems. Types of prompt injection: direct injection (user explicitly tries to override system instructions — "Ignore all previous instructions and..."), indirect injection (malicious instructions hidden in data the AI processes — e.g., hidden text in a webpage the AI is asked to summarize), jailbreaking (techniques to bypass safety guidelines), and prompt leaking (extracting the system prompt or configuration). Defense strategies include: robust system prompts (clear boundaries that are hard to override), input validation (filtering known attack patterns), output monitoring (detecting unusual responses), layered defenses (multiple checks at different stages), rate limiting (preventing rapid-fire injection attempts), and human oversight (flagging suspicious interactions for review). For AI agent builders, prompt injection protection matters because: agents may handle sensitive data, agents take real actions (calling APIs, processing payments), brand reputation depends on consistent behavior, and users trust agents with important tasks. Best practices: always assume user input could be adversarial, don't put secrets in system prompts, validate AI outputs before taking actions, implement escalation for unusual behavior, and regularly test your agent against known injection techniques. Platforms like Chipp implement multiple layers of protection against prompt injection, but builders should also design their system prompts with security in mind. ## Related Terms - [AI Safety](https://chipp.ai/ai/glossary/ai-safety.md): The field focused on ensuring AI systems behave as intended, avoid harmful outputs, and remain under human control. - [System Prompt](https://chipp.ai/ai/glossary/system-prompt.md): Special instructions provided to an AI model that define its behavior, personality, constraints, and role for all subsequent interactions. - [AI Agents](https://chipp.ai/ai/glossary/ai-agents.md): Autonomous AI systems that can perceive their environment, make decisions, and take actions to achieve specific goals. - [Prompt Engineering](https://chipp.ai/ai/glossary/prompt-engineering.md): The practice of designing and refining inputs (prompts) to AI models to elicit better, more accurate, and more useful outputs. --- This term is part of the [Chipp AI Glossary](https://chipp.ai/ai/glossary), a reference of AI concepts written for builders and businesses. Build AI agents with no code at https://chipp.ai.