
Event Listening

This guide covers subscribing to live contract events and querying historical event logs using Voltaire’s Contract and EventStream APIs.

Prerequisites

import { Contract, EventStream } from '@voltaire/contract';
import type { TypedProvider } from '@voltaire/provider';

// ERC20 ABI with Transfer and Approval events
const erc20Abi = [
  {
    type: 'event',
    name: 'Transfer',
    inputs: [
      { type: 'address', name: 'from', indexed: true },
      { type: 'address', name: 'to', indexed: true },
      { type: 'uint256', name: 'value', indexed: false },
    ],
  },
  {
    type: 'event',
    name: 'Approval',
    inputs: [
      { type: 'address', name: 'owner', indexed: true },
      { type: 'address', name: 'spender', indexed: true },
      { type: 'uint256', name: 'value', indexed: false },
    ],
  },
] as const;

Creating a Contract Instance

const usdc = Contract({
  address: '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48',
  abi: erc20Abi,
  provider
});

Subscribing to Live Events

Use watch() to poll for new events as they occur:
const stream = usdc.events.Transfer({});

for await (const { log, metadata } of stream.watch({
  pollingInterval: 2000  // Poll every 2 seconds (default)
})) {
  console.log('New transfer detected');
  console.log('From:', log.args.from);
  console.log('To:', log.args.to);
  console.log('Value:', log.args.value);
  console.log('Block:', metadata.currentBlock);
}
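Conceptually, watch() is a polling loop: it tracks the last processed block and repeatedly queries for logs in the gap up to the chain head. A minimal sketch of that pattern, using hypothetical getHead and fetchLogs functions in place of the real RPC calls (the library's internals may differ):

```typescript
// Sketch of a poll-based event watcher. `getHead` and `fetchLogs`
// are hypothetical stand-ins for the underlying RPC calls.
async function* pollEvents<T>(
  getHead: () => Promise<bigint>,
  fetchLogs: (from: bigint, to: bigint) => Promise<T[]>,
  opts: { fromBlock: bigint; pollingInterval?: number; signal?: AbortSignal }
): AsyncGenerator<T> {
  const interval = opts.pollingInterval ?? 2000;
  let nextBlock = opts.fromBlock;
  while (!opts.signal?.aborted) {
    const head = await getHead();
    if (head >= nextBlock) {
      // Query only the blocks we have not seen yet, then advance the cursor.
      for (const log of await fetchLogs(nextBlock, head)) yield log;
      nextBlock = head + 1n;
    }
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

Breaking out of the consuming for await loop (or aborting via the signal) ends the generator, which is why no explicit unsubscribe call is needed.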

Start Watching from a Specific Block

for await (const { log } of stream.watch({
  fromBlock: 19000000n
})) {
  processTransfer(log);
}

Stop Watching with AbortSignal

import { EventStreamAbortedError } from '@voltaire/contract';

const controller = new AbortController();

// Stop after 30 seconds
setTimeout(() => controller.abort(), 30000);

try {
  for await (const { log } of stream.watch({
    signal: controller.signal
  })) {
    console.log('Transfer:', log.args.value);
  }
} catch (error) {
  if (error instanceof EventStreamAbortedError) {
    console.log('Stopped watching');
  }
}

Querying Historical Events

Use backfill() to fetch events from a specific block range:
const stream = usdc.events.Transfer({});

for await (const { log, metadata } of stream.backfill({
  fromBlock: 18000000n,
  toBlock: 18001000n
})) {
  console.log(`Block ${log.blockNumber}: ${log.args.value} tokens`);
}
// Loop completes after processing all historical events

Dynamic Chunking for Large Ranges

EventStream automatically handles large block ranges by chunking requests:
  • Starts with 500 blocks per request (configurable)
  • Reduces chunk size by 50% on “block range too large” errors
  • Increases chunk size by 25% after 5 consecutive successes
for await (const { log } of stream.backfill({
  fromBlock: 0n,
  toBlock: 19000000n,
  chunkSize: 1000,      // Initial chunk size (default: 500)
  minChunkSize: 50      // Minimum after reduction (default: 10)
})) {
  processEvent(log);
}

Backfill Then Watch Pattern

Process all historical events, then continue with live events:
const stream = usdc.events.Transfer({});
const currentBlock = 19500000n; // in practice, fetch the current chain head first

// First: get all historical events
for await (const { log } of stream.backfill({
  fromBlock: 0n,
  toBlock: currentBlock
})) {
  processEvent(log);
}

// Then: watch for new events, starting just past the backfilled range
// (avoids re-emitting events from the boundary block)
for await (const { log } of stream.watch({
  fromBlock: currentBlock + 1n
})) {
  processEvent(log);
}

Filtering by Topic

Filter events by indexed parameters when creating the stream:

Filter by Sender

const stream = usdc.events.Transfer({
  from: '0x742d35Cc6634C0532925a3b844Bc454e4438f44e'
});

for await (const { log } of stream.backfill({
  fromBlock: 18000000n,
  toBlock: 18100000n
})) {
  // Only transfers FROM the specified address
  console.log('Outgoing transfer:', log.args.value);
}

Filter by Recipient

const stream = usdc.events.Transfer({
  to: '0x742d35Cc6634C0532925a3b844Bc454e4438f44e'
});

Filter by Both Sender and Recipient

const stream = usdc.events.Transfer({
  from: '0xSenderAddress...',
  to: '0xReceiverAddress...'
});
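Filters on indexed address parameters ultimately become 32-byte log topics in the underlying eth_getLogs call. A minimal sketch of that encoding, assuming plain hex strings (the library handles this for you):

```typescript
// Convert a 20-byte address into the 32-byte topic form used by
// eth_getLogs for indexed address parameters (left-padded with zeros).
function addressToTopic(address: string): string {
  const hex = address.toLowerCase().replace(/^0x/, '');
  if (hex.length !== 40) throw new Error('expected a 20-byte address');
  return '0x' + hex.padStart(64, '0');
}
```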

Decoding Event Data

Each yielded result contains the decoded log and metadata:
for await (const { log, metadata } of stream.backfill({
  fromBlock: 18000000n,
  toBlock: 18001000n
})) {
  // Decoded event log
  console.log('Event name:', log.eventName);        // 'Transfer'
  console.log('From:', log.args.from);              // AddressType
  console.log('To:', log.args.to);                  // AddressType
  console.log('Value:', log.args.value);            // bigint
  console.log('Block number:', log.blockNumber);    // BlockNumberType
  console.log('Block hash:', log.blockHash);        // HashType
  console.log('Tx hash:', log.transactionHash);     // TransactionHashType
  console.log('Log index:', log.logIndex);          // number

  // Block context metadata
  console.log('Current block height:', metadata.currentBlock);
  console.log('Query range:', metadata.fromBlock, '-', metadata.toBlock);
}

Standalone EventStream

Create an EventStream without a Contract for more control:
import { EventStream } from '@voltaire/contract';

const transferEvent = {
  type: 'event' as const,
  name: 'Transfer' as const,
  inputs: [
    { type: 'address' as const, name: 'from' as const, indexed: true },
    { type: 'address' as const, name: 'to' as const, indexed: true },
    { type: 'uint256' as const, name: 'value' as const, indexed: false },
  ],
};

const stream = EventStream({
  provider,
  address: '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48',
  event: transferEvent,
  filter: { from: '0x742d35Cc6634C0532925a3b844Bc454e4438f44e' }
});

for await (const { log } of stream.backfill({
  fromBlock: 18000000n,
  toBlock: 19000000n
})) {
  console.log(log.args);
}

Retry Configuration

Configure retry behavior for transient RPC errors:
for await (const { log } of stream.backfill({
  fromBlock: 0n,
  toBlock: 1000000n,
  retry: {
    maxRetries: 5,         // Max retry attempts (default: 3)
    initialDelay: 1000,    // Initial delay ms (default: 1000)
    maxDelay: 30000        // Max delay ms with exponential backoff (default: 30000)
  }
})) {
  processEvent(log);
}
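With these defaults, retry delays grow exponentially until they hit the cap. A sketch of the delay schedule, assuming simple doubling without jitter (the library's actual policy may add jitter or differ in detail):

```typescript
// Delay before retry attempt `attempt` (0-based): doubles each time,
// capped at `maxDelay`. Real implementations often add random jitter.
function retryDelay(attempt: number, initialDelay = 1000, maxDelay = 30000): number {
  return Math.min(initialDelay * 2 ** attempt, maxDelay);
}
```

Under these assumptions the schedule is 1 s, 2 s, 4 s, 8 s, 16 s, then 30 s for every later attempt.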

Error Handling

import {
  BlockRangeTooLargeError,
  EventStreamAbortedError
} from '@voltaire/contract';

try {
  for await (const { log } of stream.backfill({
    fromBlock: 0n,
    toBlock: 1000000n
  })) {
    console.log(log);
  }
} catch (error) {
  if (error instanceof EventStreamAbortedError) {
    console.log('Stream was cancelled via AbortSignal');
  } else if (error instanceof BlockRangeTooLargeError) {
    console.log('Block range exceeded RPC limit');
  } else {
    console.error('Unexpected error:', error);
  }
}

Low-Level: Using eth_getLogs Directly

For maximum control, use the provider’s eth_getLogs method directly:
import * as Hex from '@voltaire/primitives/Hex';
import { Keccak256 } from '@voltaire/crypto/Keccak256';

// Compute event signature hash
const transferSig = 'Transfer(address,address,uint256)';
const topicHash = Keccak256.hashString(transferSig);

const logs = await provider.request({
  method: 'eth_getLogs',
  params: [{
    address: '0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48',
    topics: [Hex.fromBytes(topicHash)],
    fromBlock: '0x1234',
    toBlock: 'latest'
  }]
});

for (const log of logs) {
  // Raw log data - decode manually or use Abi.decodeLog
  console.log('Topics:', log.topics);
  console.log('Data:', log.data);
}
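Raw Transfer logs pack the indexed from/to addresses into topics[1] and topics[2] (each left-padded to 32 bytes) and the non-indexed value into the data field. A minimal manual decoder, assuming well-formed hex strings (for production use, prefer the library's decoding such as Abi.decodeLog):

```typescript
// Manually decode a raw Transfer(address,address,uint256) log.
// topics[0] is the event signature hash; topics[1]/[2] hold the
// indexed `from`/`to` padded to 32 bytes; `data` holds `value`.
function decodeTransferLog(log: { topics: string[]; data: string }) {
  // An address is the last 20 bytes (40 hex chars) of its 32-byte topic.
  const topicToAddress = (topic: string) => '0x' + topic.slice(-40);
  return {
    from: topicToAddress(log.topics[1]),
    to: topicToAddress(log.topics[2]),
    value: BigInt(log.data),
  };
}
```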

Collecting Events into an Array

Break out of the stream after collecting a set number of events:
const events: Array<{
  from: string;
  to: string;
  value: bigint;
  block: bigint;
}> = [];

for await (const { log } of stream.backfill({
  fromBlock: 18000000n,
  toBlock: 19000000n
})) {
  events.push({
    from: String(log.args.from),
    to: String(log.args.to),
    value: log.args.value as bigint,
    block: BigInt(log.blockNumber)
  });

  if (events.length >= 100) {
    break; // Stop after 100 events
  }
}

console.log(`Collected ${events.length} transfers`);

Break on Condition

Stop streaming when a specific condition is met:
const threshold = 1000000n * 10n ** 6n; // 1M USDC (6 decimals)

for await (const { log } of stream.watch()) {
  if (log.args.value > threshold) {
    console.log('Large transfer detected!', log.args.value);
    break; // Stream cleanup happens automatically
  }
}