
Batched Provider

The batched provider accumulates multiple JSON-RPC requests and sends them as a single HTTP request, reducing network round-trips and improving performance.

What is JSON-RPC Batching?

JSON-RPC 2.0 supports batching multiple requests in a single HTTP call:
// Request: Array of JSON-RPC requests
[
  { "jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": [] },
  { "jsonrpc": "2.0", "id": 2, "method": "eth_getBalance", "params": ["0x...", "latest"] }
]

// Response: Array of responses (may be out of order)
[
  { "jsonrpc": "2.0", "id": 2, "result": "0xde0b6b3a7640000" },
  { "jsonrpc": "2.0", "id": 1, "result": "0x1234567" }
]
Instead of N round-trips, you make one. This significantly reduces total latency, especially over high-latency connections.
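As a concrete illustration (a standalone sketch, not the library's internals), a batch payload is just an array of JSON-RPC 2.0 request objects, each stamped with a unique id so the responses can be matched back later:

```typescript
// Hypothetical helper: turn a list of calls into a JSON-RPC 2.0 batch payload.
// Each request gets a unique id so out-of-order responses can be matched back.
type RpcRequest = { jsonrpc: '2.0'; id: number; method: string; params: unknown[] };

let nextId = 1;

function toBatch(calls: { method: string; params?: unknown[] }[]): RpcRequest[] {
  return calls.map((c) => ({
    jsonrpc: '2.0',
    id: nextId++,
    method: c.method,
    params: c.params ?? [],
  }));
}

const batch = toBatch([
  { method: 'eth_blockNumber' },
  { method: 'eth_getBalance', params: ['0x0000000000000000000000000000000000000000', 'latest'] },
]);
// One POST of `batch` replaces two separate HTTP requests.
```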

Quick Start

import { createBatchedProvider } from '@voltaire/batched-provider';

// Create provider with HTTP endpoint
const provider = createBatchedProvider('https://eth.llamarpc.com');

// Concurrent requests are automatically batched
const [blockNumber, balance, code] = await Promise.all([
  provider.request({ method: 'eth_blockNumber', params: [] }),
  provider.request({ method: 'eth_getBalance', params: ['0x...', 'latest'] }),
  provider.request({ method: 'eth_getCode', params: ['0x...', 'latest'] })
]);

Configuration

const provider = createBatchedProvider({
  http: {
    url: 'https://eth.llamarpc.com',
    headers: { 'X-API-Key': 'your-key' },
    timeout: 30000,
  },
  wait: 10,         // Debounce window (ms) - default: 10
  maxBatchSize: 100 // Max requests per batch - default: 100
});

Options

Option        Default  Description
wait          10       Milliseconds to wait before sending a batch. Requests within this window are batched together.
maxBatchSize  100      Maximum requests per batch. Triggers an immediate send when reached.
timeout       30000    HTTP request timeout in milliseconds.

Wrapping Existing Providers

Wrap any EIP-1193 provider (MetaMask, WalletConnect, etc.):
import { wrapProvider } from '@voltaire/batched-provider';

// Wrap injected provider
const batched = wrapProvider(window.ethereum, { wait: 10 });

// Use like normal provider
const accounts = await batched.request({ method: 'eth_accounts' });
Wrapping non-HTTP providers doesn’t provide true batching: requests are executed in parallel, but each still goes out as a separate call. Use the HTTP transport when you need a single batched request.
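The fallback path can be sketched like this (hypothetical names, not the library's source): without an HTTP transport, queued requests are simply dispatched concurrently through the wrapped provider, one underlying call each.

```typescript
// Minimal EIP-1193 surface we rely on.
type Eip1193Provider = { request(args: { method: string; params?: unknown[] }): Promise<unknown> };

// Sketch of the non-HTTP fallback: Promise.all gives concurrency,
// but each call is still a separate round-trip on the inner provider.
async function sendViaWrapped(
  inner: Eip1193Provider,
  calls: { method: string; params?: unknown[] }[],
): Promise<unknown[]> {
  return Promise.all(calls.map((c) => inner.request(c)));
}
```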

Performance Benefits

Without Batching

Request 1 -----> [50ms RTT] <----- Response 1
Request 2 -----> [50ms RTT] <----- Response 2
Request 3 -----> [50ms RTT] <----- Response 3
Total: 150ms

With Batching

[Req1, Req2, Req3] -----> [50ms RTT] <----- [Resp1, Resp2, Resp3]
Total: 50ms
For 10 concurrent requests with 50ms RTT:
  • Without batching: ~500ms
  • With batching: ~50ms (10x faster)

When to Use Batching vs Multicall

Use Case                       Batching  Multicall
Multiple RPC methods           Yes       No
Same contract, multiple calls  Yes       Yes
Atomic reads                   No        Yes
Gas efficiency                 N/A       Better
No contract deployment         Yes       No
Batching is best for mixed RPC methods, or when you don’t need atomicity.
Multicall is best for multiple reads from the same contract that must come from the same block.

Error Handling

Per-Request Errors

Individual requests can fail independently:
const [blockNumber, badCall] = await Promise.allSettled([
  provider.request({ method: 'eth_blockNumber' }),
  provider.request({ method: 'eth_call', params: [{ to: '0x...' }, 'latest'] })
]);

if (blockNumber.status === 'fulfilled') {
  console.log('Block:', blockNumber.value);
}

if (badCall.status === 'rejected') {
  console.log('Call failed:', badCall.reason.message);
}

Batch-Level Errors

Network failures reject all pending requests:
try {
  const results = await Promise.all([
    provider.request({ method: 'eth_blockNumber' }),
    provider.request({ method: 'eth_chainId' })
  ]);
} catch (error) {
  // Both requests failed due to network error
  console.error('Batch failed:', error.message);
}

Error Types

import {
  RpcError,
  BatchTimeoutError,
  HttpError,
  MissingResponseError
} from '@voltaire/batched-provider';

try {
  await provider.request({ method: 'eth_call', params: [...] });
} catch (error) {
  if (error instanceof RpcError) {
    console.log('RPC error code:', error.code);
    console.log('RPC error data:', error.data);
  }
  if (error instanceof BatchTimeoutError) {
    console.log('Timed out after:', error.timeout, 'ms');
  }
}

Advanced Usage

Force Flush

Send pending requests immediately without waiting for debounce:
provider.request({ method: 'eth_blockNumber' });
provider.request({ method: 'eth_chainId' });

// Don't wait for debounce
await provider.flush();

Check Pending Count

provider.request({ method: 'eth_blockNumber' });
console.log('Pending:', provider.getPendingCount()); // 1

provider.request({ method: 'eth_chainId' });
console.log('Pending:', provider.getPendingCount()); // 2

Cleanup

// Reject all pending requests and prevent new ones
provider.destroy();

Implementation Details

Request Flow

  1. request() adds the request to the queue and returns a Promise
  2. A debounce timer starts (or resets on each new request)
  3. After wait ms, or as soon as maxBatchSize is reached, the batch is sent
  4. Responses are routed back to callers by id
  5. Each Promise resolves or rejects based on its matching response
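The flow above can be sketched as a small debounce queue (illustrative only; names like `BatchQueue` are hypothetical, not the library's internals):

```typescript
// One pending request: its id, method, and the Promise callbacks to settle later.
type Pending = {
  id: number;
  method: string;
  resolve: (value: unknown) => void;
  reject: (err: Error) => void;
};

class BatchQueue {
  private queue: Pending[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;
  private nextId = 1;

  constructor(
    private send: (batch: Pending[]) => void, // transport callback
    private wait = 10,
    private maxBatchSize = 100,
  ) {}

  request(method: string): Promise<unknown> {
    return new Promise((resolve, reject) => {
      this.queue.push({ id: this.nextId++, method, resolve, reject });
      if (this.queue.length >= this.maxBatchSize) {
        this.flush(); // full batch: skip the debounce window
      } else {
        if (this.timer) clearTimeout(this.timer); // reset the debounce window
        this.timer = setTimeout(() => this.flush(), this.wait);
      }
    });
  }

  flush(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    const batch = this.queue.splice(0); // drain the queue
    if (batch.length > 0) this.send(batch);
  }
}
```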

Response Routing

Responses are matched by id field, not array position. This handles out-of-order responses correctly:
// Requests sent: [id:1, id:2, id:3]
// Responses received: [id:3, id:1, id:2]
// Still routes correctly to original callers
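A sketch of id-based routing (hypothetical helper, not the library's source): resolvers are kept in a Map keyed by request id, so the order of the response array never matters.

```typescript
// A JSON-RPC 2.0 response: exactly one of `result` or `error` is present.
type RpcResponse = { id: number; result?: unknown; error?: { code: number; message: string } };

function routeResponses(
  responses: RpcResponse[],
  pending: Map<number, { resolve: (v: unknown) => void; reject: (e: Error) => void }>,
): void {
  for (const res of responses) {
    const callbacks = pending.get(res.id);
    if (!callbacks) continue; // unknown id: skip (a real impl might surface MissingResponseError)
    pending.delete(res.id);
    if (res.error) callbacks.reject(new Error(res.error.message));
    else callbacks.resolve(res.result);
  }
}
```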

See Also