
Rate Limiting

Understand API rate limits and how to handle them in your applications.

The DesignCombo API implements rate limiting to ensure fair usage and maintain service quality across all users.

Rate Limits by Plan

Plan         Requests per Hour   Requests per Minute   Concurrent Jobs
Free         100                 10                    2
Pro          1,000               100                   10
Enterprise   Custom              Custom                Custom
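
If you throttle on the client side, it can help to keep these limits in one place. The following sketch simply encodes the table above as a constant (the values come from the table; the PLAN_LIMITS name is our own) so other parts of your code, such as the request queue shown later, can reference them:

const PLAN_LIMITS = {
  // Values copied from the plan table above.
  free: { requestsPerHour: 100, requestsPerMinute: 10, concurrentJobs: 2 },
  pro: { requestsPerHour: 1000, requestsPerMinute: 100, concurrentJobs: 10 }
  // Enterprise limits are custom; ask support for your account's values.
};

// Example: size a client-side worker pool to your plan's concurrent-job limit.
const maxConcurrent = PLAN_LIMITS.pro.concurrentJobs;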

Rate Limit Headers

The API includes rate limit information in response headers:

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 999
X-RateLimit-Reset: 1640995200
X-RateLimit-Reset-Time: 2024-01-01T12:00:00Z
Header                   Description
X-RateLimit-Limit        Maximum requests allowed in the time window
X-RateLimit-Remaining    Number of requests remaining in the current window
X-RateLimit-Reset        Unix timestamp when the rate limit resets
X-RateLimit-Reset-Time   Human-readable time when the rate limit resets

Checking Your Rate Limits

You can check your current rate limit status by examining the response headers:

const response = await fetch('https://api.designcombo.dev/v1/text-to-speech', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    text: 'Hello world',
    voiceId: '69nXRvRvFpjSXhH7IM5l'
  })
});

console.log('Rate limit remaining:', response.headers.get('X-RateLimit-Remaining'));
console.log('Rate limit resets at:', response.headers.get('X-RateLimit-Reset-Time'));
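
Since X-RateLimit-Reset is a Unix timestamp in seconds, you can also turn it into a wait time before you hit the limit. A minimal sketch (the secondsUntilReset helper name is ours):

// Convert the X-RateLimit-Reset header (Unix seconds) into seconds to wait.
const secondsUntilReset = (response) => {
  const reset = Number(response.headers.get('X-RateLimit-Reset'));
  if (!reset) return 0; // header missing or unparsable
  return Math.max(0, reset - Math.floor(Date.now() / 1000));
};

console.log(`Window resets in ${secondsUntilReset(response)}s`);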

Rate Limit Exceeded

When you exceed your rate limit, the API returns a 429 Too Many Requests status code:

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Try again in 3600 seconds.",
    "details": {
      "limit": 1000,
      "remaining": 0,
      "reset": 1640995200,
      "reset_time": "2024-01-01T12:00:00Z"
    }
  }
}
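
The error body mirrors the rate limit headers, so you can read the wait time from either. A short sketch that inspects the body of a 429 response (field names are taken from the example above):

if (response.status === 429) {
  const { error } = await response.json();
  // error.details.reset is the same Unix timestamp as the X-RateLimit-Reset header.
  const waitSeconds = Math.max(0, error.details.reset - Math.floor(Date.now() / 1000));
  console.warn(`${error.message} Waiting ${waitSeconds}s.`);
}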

Handling Rate Limits

Implement exponential backoff in your applications to handle rate limits gracefully:

const delay = (ms) => new Promise(resolve => setTimeout(resolve, ms));

const makeRequestWithRetry = async (apiCall, maxRetries = 3) => {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await apiCall();

      if (response.status === 429) {
        // Wait until the window resets (X-RateLimit-Reset is Unix seconds).
        const resetTime = Number(response.headers.get('X-RateLimit-Reset'));
        const waitTime = Math.max(1000, (resetTime * 1000) - Date.now());
        console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
        await delay(waitTime);
        continue;
      }

      return response;
    } catch (error) {
      if (error.status === 429 && attempt < maxRetries) {
        // Exponential backoff: 2s, 4s, 8s, ...
        const waitTime = Math.pow(2, attempt) * 1000;
        console.log(`Rate limited. Retrying in ${waitTime}ms...`);
        await delay(waitTime);
        continue;
      }
      throw error;
    }
  }

  throw new Error('Max retries exceeded');
};
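
As a usage sketch, you can wrap the fetch call from the earlier example in this helper (the request body is only illustrative):

const response = await makeRequestWithRetry(() =>
  fetch('https://api.designcombo.dev/v1/text-to-speech', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ text: 'Hello world', voiceId: '69nXRvRvFpjSXhH7IM5l' })
  })
);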

Python Example

import time
import requests

def make_request_with_retry(api_call, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            response = api_call()

            if response.status_code == 429:
                # Wait until the window resets (X-RateLimit-Reset is Unix seconds).
                reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
                wait_time = max(1, reset_time - int(time.time()))
                print(f"Rate limited. Waiting {wait_time}s before retry...")
                time.sleep(wait_time)
                continue

            return response
        except requests.exceptions.RequestException as e:
            # e.response is None for network errors, so check it before reading the status.
            if e.response is not None and e.response.status_code == 429 and attempt < max_retries:
                # Exponential backoff: 2s, 4s, 8s, ...
                wait_time = 2 ** attempt
                print(f"Rate limited. Retrying in {wait_time}s...")
                time.sleep(wait_time)
                continue
            raise

    raise Exception("Max retries exceeded")

Best Practices

1. Monitor Rate Limits

Always check the rate limit headers to understand your usage:

const checkRateLimit = (response) => {
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const resetTime = response.headers.get('X-RateLimit-Reset-Time');

  if (parseInt(remaining) < 10) {
    console.warn(`Low rate limit remaining: ${remaining}. Resets at ${resetTime}`);
  }
};

2. Implement Caching

Cache responses to reduce API calls:

const cache = new Map();

const cachedRequest = async (url, options) => {
  const cacheKey = `${url}-${JSON.stringify(options)}`;

  if (cache.has(cacheKey)) {
    const { data, timestamp } = cache.get(cacheKey);
    if (Date.now() - timestamp < 300000) { // 5 minutes
      return data;
    }
  }

  const response = await fetch(url, options);
  const data = await response.json();

  cache.set(cacheKey, {
    data,
    timestamp: Date.now()
  });

  return data;
};

3. Batch Requests

When possible, batch multiple operations into a single request:

// Instead of multiple requests
const results = [];
for (const text of texts) {
  const result = await textToSpeech(text);
  results.push(result);
}

// Use batch endpoint
const batchResult = await batchTextToSpeech(texts);
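
Here textToSpeech and batchTextToSpeech stand in for your own wrappers. As a rough sketch of what the batch wrapper could look like, the /v1/text-to-speech/batch path and the items payload shape below are assumptions, so check the API reference for the actual batch endpoint and request format:

// Hypothetical batch wrapper; endpoint path and payload shape are assumptions.
const batchTextToSpeech = async (texts) => {
  const response = await fetch('https://api.designcombo.dev/v1/text-to-speech/batch', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      items: texts.map(text => ({ text, voiceId: '69nXRvRvFpjSXhH7IM5l' }))
    })
  });
  return response.json();
};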

4. Queue Requests

For high-volume applications, implement a request queue:

class RequestQueue {
  constructor(maxConcurrent = 5) {
    this.queue = [];
    this.running = 0;
    this.maxConcurrent = maxConcurrent;
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.running >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    this.running++;
    const { requestFn, resolve, reject } = this.queue.shift();

    try {
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.running--;
      this.process();
    }
  }
}
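
A usage sketch: cap concurrency at your plan's concurrent-job limit (2 on Free, 10 on Pro, per the table above) and push each call through the queue. makeRequestWithRetry is the helper defined earlier; the request body is only illustrative.

// Keep concurrency within your plan's limit (e.g. 10 concurrent jobs on Pro).
const queue = new RequestQueue(10);

const results = await Promise.all(
  texts.map(text =>
    queue.add(() => makeRequestWithRetry(() =>
      fetch('https://api.designcombo.dev/v1/text-to-speech', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ text, voiceId: '69nXRvRvFpjSXhH7IM5l' })
      })
    ))
  )
);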

Upgrading Your Plan

If you consistently hit rate limits, consider upgrading your plan:

  1. Monitor your usage in the dashboard
  2. Identify peak usage times and optimize
  3. Contact support for custom rate limits
  4. Upgrade to Pro or Enterprise for higher limits
Need help with rate limiting?
  • Check your current usage in the dashboard
  • Implement exponential backoff in your applications
  • Consider upgrading your plan for higher limits
  • Contact support for custom rate limit solutions