Rate Limits
Understanding and working with Chipp MCP Server rate limits
The Chipp MCP Server enforces rate limits to ensure fair usage and platform stability. Limits vary by subscription tier.
Rate Limits by Tier
| Tier | Requests/Minute | Burst Limit | MCP Access |
|---|---|---|---|
| FREE | 0 | 0 | Not available |
| PRO | 30 | 10 | Full access |
| TEAM | 60 | 20 | Full access |
| BUSINESS | 120 | 40 | Full access |
| ENTERPRISE | Unlimited | Unlimited | Full access |
MCP server access requires a Pro plan or higher. Free tier users cannot use the MCP server.
Understanding Burst Limits
The burst limit allows short spikes above the sustained rate. This is useful for parallel tool calls:
PRO tier example:
- Sustained: 30 requests/minute (0.5 req/sec average)
- Burst: Up to 10 requests in quick succession
- After burst: Must wait for the rate to recover
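For example, a PRO-tier client could spend the burst allowance on a batch of parallel tool calls and then drop back to the sustained rate. A rough sketch, assuming a runWithBurst helper of your own and the mcpCall wrapper used elsewhere on this page (both hypothetical names):
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Sketch: fire up to `burstLimit` calls in parallel, then pace the rest at the
// sustained rate. `mcpCall(name, args)` stands in for your MCP client wrapper.
async function runWithBurst(calls, burstLimit = 10, requestsPerMinute = 30) {
  const burst = calls.slice(0, burstLimit);
  const rest = calls.slice(burstLimit);

  const results = await Promise.all(burst.map((c) => mcpCall(c.name, c.args)));

  for (const c of rest) {
    await sleep(60000 / requestsPerMinute); // ~2s per request at 30 req/min
    results.push(await mcpCall(c.name, c.args));
  }
  return results;
}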
Rate Limit Headers
Every response includes rate limit information:
| Header | Description |
|---|---|
| X-RateLimit-Tier | Your subscription tier |
| X-RateLimit-Limit | Maximum requests per minute |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when limit resets |
Example response headers:
HTTP/1.1 200 OK
X-RateLimit-Tier: PRO
X-RateLimit-Limit: 30
X-RateLimit-Remaining: 25
X-RateLimit-Reset: 1704067260
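A minimal sketch of reading these headers from a fetch-style response (header names as documented above; the readRateLimit name is just for illustration, and you may need to adapt it for your HTTP client):
// Parse the rate-limit headers from a fetch() Response object.
function readRateLimit(response) {
  return {
    tier: response.headers.get('X-RateLimit-Tier'),
    limit: Number(response.headers.get('X-RateLimit-Limit')),
    remaining: Number(response.headers.get('X-RateLimit-Remaining')),
    // X-RateLimit-Reset is a Unix timestamp in seconds
    resetAt: new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000),
  };
}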
Handling Rate Limits (429 Errors)
When you exceed your limit, the server returns a 429 Too Many Requests response:
{
"error": {
"code": "RATE_LIMITED",
"message": "Rate limit exceeded. Try again in 45 seconds.",
"tier": "PRO",
"limit": 30,
"reset": 1704067260
}
}Retry-After Header
The Retry-After header tells you how many seconds to wait before retrying:
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Reset: 1704067260
Implementing Retry Logic
Basic exponential backoff, preferring the server's Retry-After hint when present:
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function callWithRetry(makeRequest, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await makeRequest();
    if (response.status === 429) {
      // Use the server's Retry-After hint; otherwise back off exponentially.
      const retryAfter =
        Number(response.headers.get('Retry-After')) || 5 * 2 ** attempt;
      console.log(`Rate limited. Waiting ${retryAfter}s...`);
      await sleep(retryAfter * 1000);
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}
Respecting Retry-After:
import time
import requests
# Assumes `token` holds your Chipp access token (see Authentication).
def call_mcp(tool_name, args):
    response = requests.post(
        "https://mcp.chipp.ai/mcp/messages",
        json={"method": "tools/call", "params": {"name": tool_name, "arguments": args}},
        headers={"Authorization": f"Bearer {token}"}
    )
    if response.status_code == 429:
        retry_after = int(response.headers.get('Retry-After', 60))
        print(f"Rate limited. Sleeping {retry_after}s...")
        time.sleep(retry_after)
        return call_mcp(tool_name, args)  # Retry
    return response.json()
Best Practices
1. Batch Operations
Instead of calling tools one at a time, combine related operations:
# Inefficient: 10 separate calls
for app in apps:
    update_app(app.id, {...})
# Better: Use bulk patterns where available
# Or space out calls to stay within limits
2. Use Caching
Cache responses that don't change frequently:
const appCache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
async function getApp(appId) {
  const cached = appCache.get(appId);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const app = await mcpCall('get_app', { appId });
  appCache.set(appId, { data: app, timestamp: Date.now() });
  return app;
}
3. Monitor Your Usage
Check remaining requests before large operations:
// Check headers from any response
const remaining = parseInt(response.headers['x-ratelimit-remaining']);
const resetTime = parseInt(response.headers['x-ratelimit-reset']);
if (remaining < 5 && operations.length > 5) {
  const waitTime = resetTime - Math.floor(Date.now() / 1000);
  console.log(`Low on requests. Waiting ${waitTime}s before bulk operation...`);
  await sleep(waitTime * 1000);
}
4. Spread Requests Over Time
For non-urgent bulk operations, spread requests evenly:
async function bulkUpdate(apps, updateFn) {
  const REQUESTS_PER_MINUTE = 25; // Stay under limit
  const DELAY_MS = 60000 / REQUESTS_PER_MINUTE; // ~2.4 seconds
  for (const app of apps) {
    await updateFn(app);
    await sleep(DELAY_MS);
  }
}
5. Use Pagination Wisely
Fetch only what you need:
# Good: Small pages, specific queries
list_apps(limit=10, offset=0)
search_conversations(appId="xxx", query="refund", limit=20)

# Avoid: Large fetches you don't need
list_apps(limit=1000)
export_conversations(appId="xxx")  # Use sparingly
Checking Your Current Limits
Query the rate limits endpoint:
curl https://mcp.chipp.ai/rate-limits \
-H "Authorization: Bearer chipp_at_xxxxx"Response:
{
"tier": "PRO",
"limits": {
"requestsPerMinute": 30,
"burstLimit": 10
},
"current": {
"remaining": 28,
"resetAt": "2024-01-01T12:00:00Z"
}
}Upgrading Your Limits
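You can run the same check programmatically before starting a bulk operation. A sketch, assuming a fetch-style client, the response shape shown above, and a hypothetical waitForCapacity helper:
// Query /rate-limits and, if the window is nearly exhausted, wait for it to reset.
async function waitForCapacity(token, needed = 10) {
  const res = await fetch('https://mcp.chipp.ai/rate-limits', {
    headers: { Authorization: `Bearer ${token}` },
  });
  const { current } = await res.json();

  if (current.remaining < needed) {
    const waitMs = Math.max(new Date(current.resetAt).getTime() - Date.now(), 0);
    console.log(`Only ${current.remaining} requests left. Waiting ${Math.ceil(waitMs / 1000)}s...`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}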
Upgrading Your Limits
Need higher limits? Upgrade your subscription:
| Current Tier | Next Tier | Limit Increase |
|---|---|---|
| PRO | TEAM | 30 → 60 req/min (2x) |
| TEAM | BUSINESS | 60 → 120 req/min (2x) |
| BUSINESS | ENTERPRISE | 120 → Unlimited |
Common Scenarios
Bulk App Creation
Problem: Creating 50 apps hits the rate limit.
Solution (PRO tier):
// 30 requests/minute = ~2 seconds between requests
for (const appConfig of appConfigs) {
  await createApp(appConfig);
  await sleep(2000); // 2 second delay
}
// Total time: ~100 seconds for 50 apps
Analytics Dashboard
Problem: Fetching analytics for 20 apps exceeds the limit.
Solution:
// Option 1: Fetch in batches
const batches = chunk(appIds, 5);
for (const batch of batches) {
  await Promise.all(batch.map(id => getAppAnalytics(id)));
  await sleep(10000); // Wait 10s between batches
}
// Option 2: Cache results
// Analytics don't change by the second - cache for 5+ minutes
CI/CD Integration
Problem: Automated deployments need multiple tool calls.
Solution:
# In your CI pipeline
steps:
  - name: Update Chipp App
    run: |
      # Use retry logic
      ./update-app.sh --retry-on-rate-limit
    env:
      CHIPP_RETRY_DELAY: 60
      CHIPP_MAX_RETRIES: 3
Rate Limit FAQ
Q: Do rate limits apply per user or per organization?
A: Rate limits apply per authenticated user (OAuth token). Each team member has their own limit.
Q: Does reading consume the same rate as writing?
A: Yes, all tool calls count equally against the rate limit.
Q: What happens if I'm rate limited mid-workflow?
A: Implement retry logic with exponential backoff. Your MCP client should handle 429 responses gracefully.
Q: Can I request higher limits for a specific use case?
A: Enterprise customers can negotiate custom limits. Contact sales@chipp.ai for details.
Next Steps
- Tools Reference - All available tools
- Common Workflows - Efficient usage patterns
- Authentication - Token management