## Rate Limit Tiers

| Tier | Requests/minute | Burst |
|---|---|---|
| Default | 60 | 100 |
| Pro | 300 | 500 |
| Enterprise | Unlimited | — |
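
Staying under your tier's limit client-side avoids 429s entirely. A minimal token-bucket throttle, sketched for the Default tier (60 requests/minute, burst 100); the `TokenBucket` class is illustrative, not part of the API:

```python
import time

class TokenBucket:
    """Client-side throttle: refills at `rate_per_min` tokens/minute, holds at most `burst`."""

    def __init__(self, rate_per_min=60, burst=100):
        self.rate = rate_per_min / 60.0   # tokens added per second
        self.capacity = burst
        self.tokens = float(burst)        # start full, so bursts are allowed
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

Call `bucket.acquire()` before each request; the first `burst` calls go through immediately, after which calls pace themselves to the sustained rate.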
## Rate Limit Headers

Every response includes:

```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705312800
```
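
These headers can drive proactive throttling instead of reacting to 429s. A sketch, assuming the header names above and treating `X-RateLimit-Reset` as a Unix timestamp:

```python
import time

def throttle_from_headers(headers, min_remaining=1):
    """Sleep until the window resets when close to the limit; return seconds slept."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining < min_remaining:
        reset = int(headers.get("X-RateLimit-Reset", 0))
        delay = max(0, reset - time.time())
        time.sleep(delay)
        return delay
    return 0.0
```

Call it with `response.headers` after each request; it is a no-op while you have quota left.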
When rate limited, you receive a `429 Too Many Requests` response:

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Retry after 2024-01-15T09:51:00Z",
    "retry_after": 1705312260
  }
}
```

## Implementing Retry Logic
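
The `retry_after` field is the Unix time at which the window reopens. A small helper to turn it into a sleep duration, sketched around the payload shape above:

```python
import time

def wait_for_retry(resp_json, now=None):
    """Return seconds to wait before retrying, given a 429 error payload."""
    now = time.time() if now is None else now
    retry_at = resp_json["error"]["retry_after"]
    return max(0.0, retry_at - now)
```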
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def create_session_with_retries():
    session = requests.Session()
    retry = Retry(
        total=5,
        backoff_factor=1,  # exponential backoff between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        respect_retry_after_header=True,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("https://", adapter)
    return session

session = create_session_with_retries()
response = session.get(
    "https://api.lightyear.host/v1/servers",
    headers={"Authorization": "Bearer your-api-key"},
)
```

## Pagination Best Practices
```python
def list_all_servers(client):
    """Fetch all servers across all pages."""
    servers = []
    page = 1
    while True:
        resp = client.get(f"/v1/servers?page={page}&per_page=100")
        data = resp.json()
        servers.extend(data["data"])
        if len(data["data"]) < 100:
            break  # a short page means this was the last one
        page += 1
    return servers
```

## Idempotency Keys
For POST requests, use idempotency keys to safely retry failed requests:
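
In Python, a fresh key can be generated per logical operation. A sketch (header names mirror the curl example below; `idempotent_headers` is an illustrative helper, not part of the API):

```python
import uuid

def idempotent_headers(api_key):
    """Build headers for a safely retryable POST; each call mints a new key."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Idempotency-Key": str(uuid.uuid4()),
        "Content-Type": "application/json",
    }
```

Pass the result to your HTTP client's POST call, and reuse the *same* headers dict when retrying a failed request: the server deduplicates on the key, so only one server is created.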
```bash
curl -X POST "https://api.lightyear.host/v1/servers" \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Idempotency-Key: $(uuidgen)" \
  -d '{"region":"hkg","plan":"gpu-a100-80",...}'
```

## Caching
Cache read-heavy data locally:
```python
import functools
import time

def _ttl_hash(seconds=300):
    """Changes value every `seconds`, forcing a cache miss after the TTL."""
    return int(time.time() // seconds)

@functools.lru_cache(maxsize=128)
def _get_plans(ttl_hash):
    return client.plans.list()

def get_plans_cached():
    """Cache the plan list for up to 5 minutes."""
    return _get_plans(_ttl_hash())

# For production, prefer a TTL cache library such as cachetools.
```
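
As a stdlib-only alternative to the decorator approach, a minimal TTL cache can be sketched as follows (the class name is illustrative; a maintained library such as cachetools is preferable in production):

```python
import time

class SimpleTTLCache:
    """Tiny TTL cache: entries expire `ttl_seconds` after being set."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        """Return the cached value, or None if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```

Wrap read-heavy calls like the plan list: check `cache.get("plans")` first, and `cache.set("plans", ...)` after a miss.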