Rate Limiting

Learn about API rate limiting policies, headers, and how to handle rate limit responses.

Overview

All APIs implement rate limiting to ensure fair usage and maintain system stability. Understanding these limits helps you build more robust integrations.

Note: Rate limits are applied per API key and reset at regular intervals. Always check the rate limit headers in API responses.

Rate Limit Policies

Standard Limits

Portfolio APIs

Public endpoints with basic rate limiting

  • Limit: 100 requests per minute
  • Window: 60 seconds
  • Reset: Rolling window

Authenticated APIs

Higher limits for authenticated requests

  • Limit: 1000 requests per hour
  • Window: 3600 seconds
  • Reset: Fixed window
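
The two reset behaviors differ in how the window boundary moves. A minimal illustrative sketch (client-side reasoning only, not the server's actual implementation):

```javascript
// Fixed window: the counter resets at fixed clock-aligned boundaries
// (e.g. on the hour). Requests with the same key share one counter.
function fixedWindowKey(timestampMs, windowMs) {
  return Math.floor(timestampMs / windowMs); // same key => same window
}

// Rolling window: only requests from the last `windowMs` count.
// Keep recent timestamps and drop anything older than the window.
function rollingWindowCount(timestamps, nowMs, windowMs) {
  return timestamps.filter(t => nowMs - t < windowMs).length;
}
```

With a rolling window, capacity frees up gradually as old requests age out; with a fixed window, the full quota becomes available again all at once at the boundary.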

Premium Limits

For enterprise customers and partners:

  • Limit: 10,000 requests per hour
  • Burst: 500 requests per minute
  • Support: Dedicated support channel

Rate Limit Headers

All API responses include rate limiting information in the headers:

HTTP
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640995200
X-RateLimit-Used: 5

Header Descriptions

  • X-RateLimit-Limit: Maximum requests allowed in the current window
  • X-RateLimit-Remaining: Number of requests remaining in current window
  • X-RateLimit-Reset: Unix timestamp when the rate limit resets
  • X-RateLimit-Used: Number of requests used in current window
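
Since X-RateLimit-Reset is a Unix timestamp in seconds, converting it into a client-side delay takes one small helper (the name msUntilReset is hypothetical, not part of the API):

```javascript
// Milliseconds to wait until the rate limit window resets.
// `resetHeader` is the raw X-RateLimit-Reset value (Unix seconds).
function msUntilReset(resetHeader, nowMs = Date.now()) {
  const resetMs = parseInt(resetHeader, 10) * 1000;
  return Math.max(0, resetMs - nowMs); // never negative if reset already passed
}
```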

Handling Rate Limits

429 Too Many Requests

When rate limits are exceeded, APIs return a 429 status code:

JSON
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please retry after 60 seconds.",
    "details": {
      "limit": 100,
      "used": 100,
      "reset_time": "2021-12-31T23:59:59Z"
    }
  }
}

Retry Strategies

Exponential Backoff

JavaScript
async function callAPIWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429 && attempt < maxRetries) {
        const resetTime = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
        const waitTime = Math.max(0, Math.min(
          2 ** attempt * 1000,            // Exponential backoff
          resetTime * 1000 - Date.now()   // Or wait until the window resets
        ));

        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      // Success, a non-retryable error, or a 429 on the final attempt
      return response;
    } catch (error) {
      if (attempt === maxRetries) throw error;
    }
  }
}
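
When many clients back off on the same schedule, they tend to retry in lockstep and hit the limit together again. Adding random jitter to the delay spreads retries out. One common variant is "full jitter", sketched here as a general technique rather than something this API mandates:

```javascript
// Full jitter: pick a uniformly random delay between 0 and the
// exponential cap, so simultaneous clients retry at different times.
function jitteredDelay(attempt, baseMs = 1000, maxMs = 30000) {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * cap); // uniform in [0, cap)
}
```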

Respect Rate Limit Headers

JavaScript
function checkRateLimit(response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'));
  const reset = parseInt(response.headers.get('X-RateLimit-Reset'));
  
  if (remaining < 10) {
    const waitTime = (reset * 1000) - Date.now();
    console.warn(`Rate limit low. Consider waiting ${waitTime}ms`);
  }
  
  return {
    remaining,
    reset: new Date(reset * 1000),
    shouldWait: remaining < 5
  };
}

Best Practices

1. Monitor Usage

Track your API usage to avoid hitting limits:

JavaScript
class APIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.metrics = {
      requests: 0,
      rateLimitHits: 0,
      lastReset: null
    };
  }
  
  async request(endpoint, options = {}) {
    const response = await fetch(endpoint, {
      ...options,
      headers: {
        'X-API-Key': this.apiKey,
        ...options.headers
      }
    });
    
    this.updateMetrics(response);
    return response;
  }
  
  updateMetrics(response) {
    this.metrics.requests++;
    
    if (response.status === 429) {
      this.metrics.rateLimitHits++;
    }
    
    const reset = response.headers.get('X-RateLimit-Reset');
    if (reset) {
      this.metrics.lastReset = new Date(parseInt(reset) * 1000);
    }
  }
}

2. Implement Caching

Reduce API calls by caching responses:

JavaScript
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function cachedAPICall(url) {
  const cached = cache.get(url);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetch(url);
  if (!response.ok) {
    // Don't cache error responses (e.g. a 429 body)
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();

  cache.set(url, {
    data,
    timestamp: Date.now()
  });

  return data;
}

3. Batch Requests

When possible, batch multiple operations:

JavaScript
// Instead of multiple single requests
const user1 = await api.getUser(1);
const user2 = await api.getUser(2);
const user3 = await api.getUser(3);

// Use batch endpoint
const users = await api.getUsers([1, 2, 3]);
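
Batch endpoints commonly cap how many items a single call accepts. A small helper (the cap and names here are hypothetical) splits a large id list into appropriately sized batches:

```javascript
// Split `items` into arrays of at most `size` elements each.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// e.g. one api.getUsers(batch) call per chunk of 100 ids:
// for (const batch of chunk(allIds, 100)) { await api.getUsers(batch); }
```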

Monitoring and Analytics

Rate Limit Dashboard

Monitor your rate limit usage in real time.

Metrics Available

  • Request volume over time
  • Rate limit hit frequency
  • Peak usage patterns
  • API endpoint popularity
  • Error rate correlation

Enterprise Solutions

For high-volume applications, consider our enterprise plans:

Custom Rate Limits

  • Tailored limits based on your usage patterns
  • Burst capacity for peak loads
  • Custom rate limiting algorithms
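
Burst capacity is commonly modeled as a token bucket: tokens refill at a steady rate up to a fixed capacity, and each request spends one. This is a generic sketch of the algorithm, not a description of the service's internal implementation:

```javascript
// Token bucket: refills `rate` tokens per second, capped at `capacity`.
// A full bucket allows a burst of `capacity` back-to-back requests.
class TokenBucket {
  constructor(capacity, rate) {
    this.capacity = capacity;
    this.rate = rate;
    this.tokens = capacity; // start full
    this.last = 0;          // time of last refill, in seconds
  }

  // Try to spend one token at time `nowSec`; returns true if allowed.
  tryAcquire(nowSec) {
    const elapsed = nowSec - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.rate);
    this.last = nowSec;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```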

Features

  • Custom rate limit configurations
  • Dedicated API endpoints
  • Priority support and SLA
  • Advanced monitoring and analytics

Contact Sales


Getting Help

If you're experiencing rate limiting issues:

  1. Check your current usage against the documented limits
  2. Review your request patterns for optimization opportunities
  3. Implement proper retry logic with exponential backoff
  4. Contact support if you need higher limits

For technical questions about rate limiting, please reach out to our developer support team.