Rate Limiting
Utilities for rate limiting, throttling, and debouncing function calls. Essential for managing API rate limits, preventing request spam, and optimizing performance in Ethereum applications.

Overview
Three approaches to rate limiting:

- Throttle: Execute at most once per time period (first call wins)
- Debounce: Execute only after calls have stopped (last call wins)
- RateLimiter: Token bucket algorithm with queuing (all calls eventually execute)
Throttle
Execute a function at most once per specified wait time.

Basic Usage
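A minimal sketch of leading-edge throttling. The `throttle(fn, waitMs)` shape is an assumption for illustration, not necessarily this library's exact signature:

```typescript
// Leading-edge throttle: the first call in each window executes,
// later calls in the same window return the cached result.
function throttle<A extends unknown[], R>(
  fn: (...args: A) => R,
  waitMs: number
): (...args: A) => R | undefined {
  let last = 0;
  let result: R | undefined;
  return (...args: A) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      result = fn(...args); // first call in the window executes
    }
    return result; // later calls get the cached result
  };
}

// Throttle an expensive lookup to at most once per second:
let calls = 0;
const getGasPrice = throttle(() => {
  calls += 1;
  return 20; // pretend gwei
}, 1000);

getGasPrice(); // executes
getGasPrice(); // within the window: skipped, returns cached 20
```

Because the first call wins, repeated invocations inside the window are cheap: they never hit the underlying function.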
Event Handlers
Throttle rapid UI events.

API Reference
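Assuming a shape like `throttle(fn, waitMs)` returning a function that runs at most once per window (an illustrative sketch, not the library's confirmed API), a throttled scroll handler looks like:

```typescript
// Assumed shape: throttle(fn, waitMs) -> throttled fn, leading edge.
function throttle<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let last = 0;
  return (...args: A) => {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn(...args);
    }
  };
}

// Rapid-fire UI events collapse to one handler run per 200 ms window:
let renders = 0;
const onScroll = throttle(() => {
  renders += 1;
}, 200);

for (let i = 0; i < 50; i++) onScroll(); // simulated scroll burst
```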
Debounce
Execute a function only after calls have stopped for the specified wait time.

Basic Usage
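A trailing-edge debounce sketch; `debounce(fn, waitMs)` is an assumed shape for illustration:

```typescript
// Trailing-edge debounce: each call resets the countdown; only the
// last call in a burst actually fires, waitMs after the burst stops.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer); // a new call resets the countdown
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

let saves = 0;
const saveDraft = debounce(() => {
  saves += 1;
}, 100);

saveDraft();
saveDraft();
saveDraft(); // only this last call fires, 100 ms after the burst stops
```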
Cancel Debounced Calls
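A pending debounced call can be abandoned. In this sketch the debounced function exposes a `cancel()` method; both the helper and the method name are assumptions for illustration:

```typescript
// Debounce with a cancel() escape hatch (names assumed).
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const debounced = Object.assign(
    (...args: A) => {
      clearTimeout(timer);
      timer = setTimeout(() => fn(...args), waitMs);
    },
    {
      cancel() {
        clearTimeout(timer); // drop any pending invocation
      },
    }
  );
  return debounced;
}

let fired = 0;
const submit = debounce(() => {
  fired += 1;
}, 100);

submit();
submit.cancel(); // the pending call never runs
```

Cancelling is useful when the component unmounts or the user navigates away before the wait elapses.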
Form Input
Debounce search or validation.

API Reference
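Assuming a shape like `debounce(fn, waitMs)` returning a trailing-edge debounced function (an illustrative sketch, not the library's confirmed API), search-as-you-type looks like:

```typescript
// Assumed shape: debounce(fn, waitMs) -> debounced fn, trailing edge.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Only the final keystroke triggers a lookup, 300 ms after typing stops:
const lookups: string[] = [];
const searchName = debounce((query: string) => {
  lookups.push(query); // e.g. kick off an ENS lookup here
}, 300);

for (const q of ["v", "vi", "vit", "vita", "vitalik"]) searchName(q);
```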
RateLimiter
Token bucket rate limiter with queuing, rejection, or dropping strategies.

Basic Usage
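A token-bucket sketch with the default queue behavior: `acquire()` resolves immediately while tokens remain, otherwise it waits for a refill. The class and method names are assumptions for illustration, not the library's confirmed API:

```typescript
// Token bucket: capacity sets the burst size, refillPerSec the sustained rate.
class RateLimiter {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity: number, // burst size
    private refillPerSec: number // sustained rate
  ) {
    this.tokens = capacity;
  }

  private refill(): void {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.lastRefill = now;
  }

  async acquire(): Promise<void> {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return;
    }
    // Queue: sleep until the next token should exist, then retry.
    const waitMs = ((1 - this.tokens) / this.refillPerSec) * 1000;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    return this.acquire();
  }
}

// ~10 req/sec with a burst of 2:
const limiter = new RateLimiter(2, 10);

async function call(n: number): Promise<number> {
  await limiter.acquire(); // pace every request
  return n; // ...perform the real RPC call here
}
```

Every caller awaits `acquire()` before doing work, so all calls eventually execute, just not faster than the configured rate.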
Configuration
- queue (default): Queue requests until capacity available
- reject: Throw error when limit exceeded
- drop: Silently drop requests when limit exceeded
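These strategies might map onto a constructor-options object like the following sketch; the option names are assumptions for illustration, not the library's documented configuration:

```typescript
// Illustrative option shape for the three strategies (names assumed).
type Strategy = "queue" | "reject" | "drop";

interface RateLimiterOptions {
  requestsPerSecond: number; // sustained rate
  burst?: number; // optional bucket capacity, defaults to requestsPerSecond
  strategy?: Strategy; // defaults to "queue"
  maxQueueSize?: number; // only meaningful for "queue"
}

const rpcLimits: RateLimiterOptions = {
  requestsPerSecond: 10,
  strategy: "queue", // wait for capacity instead of failing
};
```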
Wrap Functions
Create rate-limited versions of functions.

Queue Strategy
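Wrapping a function with the default queue strategy can be sketched as follows; both the `tokenBucket` and `wrap` helpers are illustrative names, not this library's API:

```typescript
// Async token bucket: acquire() waits when the bucket is empty.
function tokenBucket(capacity: number, refillPerSec: number): () => Promise<void> {
  let tokens = capacity;
  let last = Date.now();
  return async function acquire(): Promise<void> {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens >= 1) {
      tokens -= 1;
      return;
    }
    await new Promise((r) => setTimeout(r, ((1 - tokens) / refillPerSec) * 1000));
    return acquire();
  };
}

// wrap(): every call to the returned function first waits for a token.
function wrap<A extends unknown[], R>(
  acquire: () => Promise<void>,
  fn: (...args: A) => R | Promise<R>
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    await acquire(); // queued until capacity is available
    return fn(...args);
  };
}

// Every call to limitedGetBalance is paced at ~5 req/sec:
const limitedGetBalance = wrap(tokenBucket(5, 5), async (address: string) => {
  return address.length; // ...perform the real balance query here
});
```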
Queue requests when limit exceeded.

Reject Strategy
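A sketch of the reject strategy, which throws immediately instead of queuing (helper name and error message are illustrative):

```typescript
// "reject" strategy: fail fast when the bucket is empty.
function rejectingLimiter(capacity: number, refillPerSec: number): () => void {
  let tokens = capacity;
  let last = Date.now();
  return () => {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens < 1) throw new Error("Rate limit exceeded");
    tokens -= 1;
  };
}

const acquire = rejectingLimiter(2, 1); // burst of 2, then 1 req/sec
let rejected = false;
acquire();
acquire();
try {
  acquire(); // over the limit: fails fast
} catch {
  rejected = true; // caller decides how to recover (retry, back off, surface)
}
```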
Fail fast when limit exceeded.

Drop Strategy
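A sketch of the drop strategy, where excess calls are skipped without error (helper name is illustrative):

```typescript
// "drop" strategy: tryAcquire() returns false when over the limit,
// and the caller simply does nothing in that case.
function droppingLimiter(capacity: number, refillPerSec: number): () => boolean {
  let tokens = capacity;
  let last = Date.now();
  return () => {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens < 1) return false;
    tokens -= 1;
    return true;
  };
}

const tryAcquire = droppingLimiter(3, 1); // burst of 3, then 1 req/sec
let sent = 0;
for (let i = 0; i < 10; i++) {
  if (tryAcquire()) sent += 1; // excess calls are silently skipped
}
```

Dropping suits best-effort work such as telemetry or redundant refreshes, where losing a call is harmless.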
Silently drop requests when limit exceeded.

Real-World Examples
Public RPC Rate Limiting
Respect public RPC rate limits.

Multiple Rate Limits
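One limiter per endpoint keeps each provider's budget independent. A sketch, with hostnames and rates chosen for illustration:

```typescript
// Synchronous token bucket returning a tryAcquire() closure.
function tokenBucket(capacity: number, refillPerSec: number): () => boolean {
  let tokens = capacity;
  let last = Date.now();
  return () => {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens < 1) return false;
    tokens -= 1;
    return true;
  };
}

// Independent budgets per endpoint (rates are illustrative):
const limits: Record<string, () => boolean> = {
  "mainnet.infura.io": tokenBucket(10, 10),
  "eth-mainnet.g.alchemy.com": tokenBucket(25, 25),
};

function canSend(host: string): boolean {
  const tryAcquire = limits[host];
  return tryAcquire ? tryAcquire() : false; // unknown hosts: be conservative
}
```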
Different limits for different endpoints.

Monitoring
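One way to observe limiter pressure is to wrap it and count outcomes; the `withStats` helper here is an assumption for illustration:

```typescript
// Token bucket returning a tryAcquire() closure.
function tokenBucket(capacity: number, refillPerSec: number): () => boolean {
  let tokens = capacity;
  let last = Date.now();
  return () => {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens < 1) return false;
    tokens -= 1;
    return true;
  };
}

// Wrap a limiter to count allowed vs. limited calls.
function withStats(tryAcquire: () => boolean) {
  const stats = { allowed: 0, limited: 0 };
  return {
    stats,
    tryAcquire(): boolean {
      const ok = tryAcquire();
      if (ok) stats.allowed += 1;
      else stats.limited += 1;
      return ok;
    },
  };
}

const limiter = withStats(tokenBucket(5, 5));
for (let i = 0; i < 8; i++) limiter.tryAcquire();
// limiter.stats now shows how many calls were allowed vs. limited
```

A rising `limited` count is a signal to raise the limit, add batching, or spread load across endpoints.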
Track rate limiter state.

Burst Handling
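In a token bucket, burst handling falls out of the parameters: capacity sets the burst size, the refill rate sets the sustained pace. A sketch allowing a 20-request burst followed by ~5 req/sec:

```typescript
// capacity = burst size, refillPerSec = sustained rate.
function tokenBucket(capacity: number, refillPerSec: number): () => boolean {
  let tokens = capacity;
  let last = Date.now();
  return () => {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens < 1) return false;
    tokens -= 1;
    return true;
  };
}

const tryAcquire = tokenBucket(20, 5); // 20-request burst, then 5 req/sec
let burst = 0;
while (tryAcquire()) burst += 1; // drains the initial burst
// after the burst, one token arrives roughly every 200 ms
```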
Allow bursts, then rate limit.

Comparison
| Utility | Use Case | Behavior |
|---|---|---|
| throttle | Event handlers, UI updates | First call wins, others ignored |
| debounce | Search, validation | Last call wins after pause |
| RateLimiter | API rate limits | All calls execute (queued) |
Best Practices
Choose the Right Tool
- Throttle: When only first call matters (UI updates)
- Debounce: When only last call matters (search, validation)
- RateLimiter: When all calls must execute (API requests)
Public RPC Limits
Common public RPC rate limits:

- Infura: 10 req/sec (free), 100 req/sec (paid)
- Alchemy: 25 req/sec (free), 300 req/sec (growth)
- QuickNode: Varies by plan
- Public endpoints: Often 1-5 req/sec
Batch + Rate Limit
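Batching and rate limiting compose: group requests into chunks, then pace the chunks. A sketch, where the helper name, batch size, and gap are all illustrative:

```typescript
// Process items in fixed-size batches with a minimum gap between batches.
async function batchWithGap<T, R>(
  items: T[],
  batchSize: number,
  gapMs: number,
  run: (batch: T[]) => Promise<R[]>
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    if (i > 0) await new Promise((r) => setTimeout(r, gapMs)); // respect the rate limit
    results.push(...(await run(items.slice(i, i + batchSize))));
  }
  return results;
}

// e.g. 100 block lookups, 10 per batch, at least 100 ms apart:
// const blocks = await batchWithGap(blockNumbers, 10, 100, (batch) => fetchBlocks(batch));
```

Batching reduces the number of round trips, and the gap keeps the batch stream itself under the provider's limit.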
Combine with batching for optimal throughput.

API Reference
throttle
debounce
RateLimiter
See Also
- Batch Processing - Combine with rate limiting
- Retry - Retry rate-limited requests
- Polling - Poll with rate limits

