Rate Limits
Understand Wrytze API rate limiting and how to handle it.
The Wrytze API enforces rate limits to ensure fair usage and protect the service from abuse. Rate limits are applied per API key using a sliding window algorithm.
Limits
| Scope | Limit | Window |
|---|---|---|
| Per API key | 100 requests | 1 minute (sliding window) |
The sliding window algorithm smooths out burst traffic. Rather than resetting at fixed intervals, it tracks requests over a rolling 60-second period.
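As an illustration of the idea (a sketch only; Wrytze has not published its actual implementation), a sliding-window counter can be modeled by recording request timestamps and discarding those older than the rolling window:

```typescript
// Illustrative sliding-window rate limiter: keeps request timestamps
// and drops those that have slid out of the rolling window.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private limit: number, // e.g. 100 requests
    private windowMs: number // e.g. 60_000 ms
  ) {}

  // Returns true if a request is allowed right now, recording it if so.
  tryAcquire(now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    this.timestamps = this.timestamps.filter((t) => t > cutoff);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }

  // Requests still available in the current rolling window.
  remaining(now: number = Date.now()): number {
    const cutoff = now - this.windowMs;
    return this.limit - this.timestamps.filter((t) => t > cutoff).length;
  }
}
```

Because the window rolls rather than resets, capacity frees up gradually as old requests age out, instead of all at once at a fixed boundary.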
Response Headers
Every API response includes rate limit headers so you can monitor your usage:
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed per window | 100 |
| X-RateLimit-Remaining | Requests remaining in the current window | 94 |
| X-RateLimit-Reset | Unix timestamp (ms) when the window resets | 1708920000000 |
Example response headers

```http
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 94
X-RateLimit-Reset: 1708920000000
```

Rate Limit Exceeded (429)
When you exceed the rate limit, the API returns a 429 Too Many Requests response with a Retry-After header indicating how many seconds to wait before retrying.
Response headers

```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1708920000000
Retry-After: 34
```

Response body

```json
{
  "error": {
    "message": "Rate limit exceeded",
    "status": 429
  }
}
```

Handling Rate Limits
Monitor remaining requests
Check the X-RateLimit-Remaining header on every response. If the value is getting low, consider slowing down your request rate.
```typescript
async function fetchWithRateLimit(url: string, apiKey: string) {
  const response = await fetch(url, {
    headers: { "X-API-Key": apiKey },
  });
  const remaining = parseInt(
    response.headers.get("X-RateLimit-Remaining") ?? "100",
    10
  );
  if (remaining < 10) {
    console.warn(`Rate limit warning: ${remaining} requests remaining`);
  }
  if (response.status === 429) {
    const retryAfter = parseInt(
      response.headers.get("Retry-After") ?? "60",
      10
    );
    console.log(`Rate limited. Retrying in ${retryAfter} seconds...`);
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return fetchWithRateLimit(url, apiKey); // Retry
  }
  return response;
}
```

Exponential backoff
If you hit a 429 response, use exponential backoff rather than retrying immediately. This prevents cascading failures when many clients retry simultaneously.
```typescript
async function fetchWithBackoff(
  url: string,
  apiKey: string,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, {
      headers: { "X-API-Key": apiKey },
    });
    if (response.status !== 429) {
      return response;
    }
    if (attempt === maxRetries) {
      throw new Error("Max retries exceeded");
    }
    // Exponential backoff: 1s, 2s, 4s
    const backoff = Math.pow(2, attempt) * 1000;
    const jitter = Math.random() * 1000;
    await new Promise((resolve) => setTimeout(resolve, backoff + jitter));
  }
  throw new Error("Unreachable");
}
```

Best Practices
Following these best practices will help you stay well within rate limits and improve performance.
Cache responses client-side. Blog content does not change frequently. Cache API responses in your application using a library like lru-cache or a CDN layer to avoid redundant API calls.
```typescript
import { LRUCache } from "lru-cache";

const cache = new LRUCache<string, unknown>({
  max: 500,
  ttl: 1000 * 60 * 5, // 5 minutes
});

async function getCachedBlogs(apiKey: string, params: string) {
  const key = `blogs:${params}`;
  const cached = cache.get(key);
  if (cached) return cached;
  const response = await fetch(
    `https://app.wrytze.com/api/v1/blogs?${params}`,
    { headers: { "X-API-Key": apiKey } }
  );
  const data = await response.json();
  cache.set(key, data);
  return data;
}
```

Monitor the X-RateLimit-Remaining header. Log this value to detect patterns where your application is approaching the limit, and adjust your request rate proactively.
Use exponential backoff with jitter. When a 429 is received, do not retry immediately. Use the Retry-After header value as a minimum wait time, and add random jitter to spread retries across time.
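One way to combine those two rules is a pure delay function (a sketch; the base delay and jitter range here are arbitrary choices, not Wrytze requirements):

```typescript
// Retry delay in ms: exponential backoff with jitter, floored at the
// server-provided Retry-After value so we never retry too early.
function retryDelayMs(
  attempt: number, // 0-based retry attempt
  retryAfterSec: number, // value of the Retry-After header
  baseMs = 1000,
  jitter: () => number = Math.random // injectable for testing
): number {
  const backoff = Math.pow(2, attempt) * baseMs; // 1s, 2s, 4s, ...
  const delay = backoff + jitter() * baseMs; // spread retries across time
  return Math.max(retryAfterSec * 1000, delay);
}
```

Taking the maximum of the two values respects the server's minimum wait while still letting the exponential term dominate on repeated failures.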
Batch your requests where possible. Instead of fetching individual blogs one by one, use the list endpoint with appropriate filters to fetch multiple blogs in a single request.
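For example, one filtered list call can replace N per-blog calls. The `ids` query parameter below is hypothetical, shown only to illustrate the pattern; use whichever filters the list endpoint actually documents:

```typescript
// Build a single list-endpoint URL instead of one URL per blog.
// NOTE: the `ids` filter is a hypothetical parameter for illustration.
function buildBlogListUrl(ids: string[]): string {
  const params = new URLSearchParams({ ids: ids.join(",") });
  return `https://app.wrytze.com/api/v1/blogs?${params.toString()}`;
}

// One request against the rate limit instead of ids.length requests.
async function fetchBlogs(apiKey: string, ids: string[]) {
  const response = await fetch(buildBlogListUrl(ids), {
    headers: { "X-API-Key": apiKey },
  });
  return response.json();
}
```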
Use server-side caching. If you are building a server-rendered website, implement a caching layer (Redis, Memcached, or in-memory) between your server and the Wrytze API. This is the most effective way to reduce API calls.
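A minimal in-memory sketch of that caching layer is below; Redis or Memcached would follow the same get-or-fetch pattern, just with the Map replaced by a network store:

```typescript
// Tiny in-memory TTL cache sketch for a server-side caching layer.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // expired: evict and miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Get-or-fetch: only hit the Wrytze API on a cache miss.
async function getOrFetch<T>(
  cache: TtlCache<T>,
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = cache.get(key);
  if (hit !== undefined) return hit;
  const value = await fetcher();
  cache.set(key, value);
  return value;
}
```

With a sensible TTL, repeated page renders are served from the cache and only one upstream request per key counts against the rate limit per TTL period.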