The SDK auto-detects provider type from the client object. No configuration needed.

Supported providers

| Provider | Client | Method intercepted |
| --- | --- | --- |
| OpenAI | `new OpenAI()` | `chat.completions.create` |
| Anthropic | `new Anthropic()` | `messages.create` |
| Google | `new GoogleGenerativeAI()` | `generateContent` |
| Vercel AI | `openai` / `anthropic` / `google` | `generateText` / `streamText` |
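Detection can be done by inspecting the client object's shape, i.e. which method path exists on it. The sketch below illustrates that idea; the function name `detectProvider` and the exact checks are assumptions for illustration, not the SDK's actual internals:

```typescript
// Hypothetical shape-based detection: infer the provider from which
// well-known method path the client object exposes.
type Provider = "openai" | "anthropic" | "google" | "unknown";

function detectProvider(client: any): Provider {
  if (typeof client?.chat?.completions?.create === "function") return "openai";
  if (typeof client?.messages?.create === "function") return "anthropic";
  if (typeof client?.getGenerativeModel === "function") return "google";
  return "unknown";
}

// Mock objects standing in for the real SDK clients:
const openaiLike = { chat: { completions: { create: async () => ({}) } } };
const anthropicLike = { messages: { create: async () => ({}) } };

console.log(detectProvider(openaiLike));    // "openai"
console.log(detectProvider(anthropicLike)); // "anthropic"
```

Because detection keys off the object's shape rather than its class, it works regardless of how the client was constructed or imported.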

How it works

The SDK uses a JavaScript `Proxy` to intercept LLM client methods. When you call `client.chat.completions.create(...)`, the proxy:
  1. Runs all pre-hooks on the request
  2. If no hook blocks → calls the original LLM method
  3. Runs all post-hooks on the response
  4. Returns the (potentially modified) response
The original client API is fully preserved — method signatures, return types, streaming, everything works as expected.
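The four steps above can be sketched with a `Proxy` `get` trap. This is a minimal illustration under assumed hook shapes (`PreHook` returning a `blocked` flag, `PostHook` returning a possibly modified response); the real SDK's hook API and its handling of nested paths and streaming will differ:

```typescript
// Hypothetical hook shapes, assumed for this sketch.
type PreHook = (req: unknown) => { blocked: boolean };
type PostHook = (res: unknown) => unknown;

function intercept<T extends object>(
  target: T,
  preHooks: PreHook[],
  postHooks: PostHook[]
): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      // Pass non-function properties through untouched.
      if (typeof value !== "function") return value;
      return async (...args: unknown[]) => {
        // 1. Run all pre-hooks on the request.
        for (const hook of preHooks) {
          if (hook(args[0]).blocked) {
            throw new Error("Request blocked by pre-hook");
          }
        }
        // 2. No hook blocked: call the original method.
        let response = await value.apply(obj, args);
        // 3. Run all post-hooks on the response.
        for (const hook of postHooks) response = hook(response);
        // 4. Return the (potentially modified) response.
        return response;
      };
    },
  });
}

// Demo with a mock client in place of a real LLM SDK:
const mockClient = { create: async (req: any) => ({ echo: req }) };
const wrapped = intercept(
  mockClient,
  [() => ({ blocked: false })],
  [(res: any) => ({ ...res, tagged: true })]
);
(async () => {
  console.log(await wrapped.create("hi"));
})();
```

Since the trap only wraps function-valued properties and forwards everything else via `Reflect.get`, the wrapped object keeps the original's surface: property access, method signatures, and return types are unchanged.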