Required options
Identifies the AI assistant in tracked changes and comments.
The LLM backend used for completions and streaming. Can be a provider configuration object (OpenAI, Anthropic, HTTP) or a custom provider instance. See Provider Configuration below.
Optional options
Overrides the default SuperDoc-centric system message. Use this to customize how the AI interprets document context and user instructions.
Emits parsing and traversal warnings to the console for debugging purposes.
Lifecycle callback fired when the AI is initialized and ready. See Hooks for details.
Lifecycle callback fired for each streaming chunk. See Hooks for details.
Provider configuration
provider accepts either a config object (OpenAI, Anthropic, HTTP) or a custom implementation that exposes getCompletion and streamCompletion.
Browser vs Server: For browser applications, use the HTTP gateway pattern to keep API keys secure on your backend. OpenAI and Anthropic providers are for server-side use only (Next.js API routes, Node.js scripts, etc.).
HTTP gateway (Browser-safe)
Recommended for browser applications. Your backend handles API keys securely.
Separate URL for streaming requests. If not provided, falls back to url for all requests.
HTTP method for requests.
Custom function to build the request body. Receives { messages, stream, options } context.
Custom function to parse non-streaming responses. Receives the response payload and should return a string.
Custom function to parse each streaming chunk. Receives the chunk payload and should return a string or undefined.
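Assembled from the fields above, an HTTP gateway configuration might look like the following sketch. Only `url` is named in the docs; `type`, `streamUrl`, `method`, `buildRequestBody`, `parseResponse`, and `parseStreamChunk` are assumed names for the fields described here, so check the reference for the real property names:

```typescript
// Hypothetical HTTP gateway provider config. Only `url` is named in the
// docs; the other property names are assumptions for illustration.
const httpProvider = {
  type: "http",                // assumed discriminator for the HTTP provider
  url: "/api/ai",              // your backend endpoint; the API key stays server-side
  streamUrl: "/api/ai/stream", // optional; falls back to url if omitted
  method: "POST",
  // Build the request body from the { messages, stream, options } context.
  buildRequestBody: ({ messages, stream }: { messages: unknown[]; stream: boolean; options?: unknown }) =>
    JSON.stringify({ messages, stream }),
  // Parse a non-streaming response payload into a string.
  parseResponse: (payload: { text?: string }) => payload.text ?? "",
  // Parse each streaming chunk; return undefined to skip it.
  parseStreamChunk: (chunk: { delta?: string }) => chunk.delta,
};
```

Your backend route then forwards the body to the LLM vendor with the real API key, so the key never reaches the browser.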
OpenAI (Server-side only)
For server-side environments (Next.js API routes, Node.js, backend scripts):
Custom completion endpoint path (useful for Azure OpenAI or custom deployments)
OpenAI organization ID for API requests
Additional OpenAI-specific request options passed directly to the API
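A server-side OpenAI configuration built from the options above could look like this sketch. The property names (`type`, `apiKey`, `model`, `endpoint`, `organization`, `requestOptions`) are assumptions, not confirmed identifiers:

```typescript
// Hypothetical server-side OpenAI provider config; property names are
// illustrative assumptions. Keep the API key out of browser bundles.
const openAIProvider = {
  type: "openai",
  apiKey: process.env.OPENAI_API_KEY,  // server-side only
  model: "gpt-4o-mini",
  // Custom completion endpoint path, e.g. for an Azure OpenAI deployment.
  endpoint: "/v1/chat/completions",
  // OpenAI organization ID sent with API requests.
  organization: "org-...",
  // Extra OpenAI-specific options passed directly to the API.
  requestOptions: { seed: 7 },
};
```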
Anthropic (Server-side only)
For server-side environments (Next.js API routes, Node.js, backend scripts):
Anthropic API version to use
Additional Anthropic-specific request options passed directly to the API
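An equivalent server-side Anthropic configuration might look like the sketch below; again, the property names (`type`, `apiKey`, `model`, `apiVersion`, `requestOptions`) are assumptions:

```typescript
// Hypothetical server-side Anthropic provider config; property names are
// illustrative assumptions.
const anthropicProvider = {
  type: "anthropic",
  apiKey: process.env.ANTHROPIC_API_KEY, // server-side only
  model: "claude-3-5-sonnet-latest",
  // Anthropic API version to use for requests.
  apiVersion: "2023-06-01",
  // Extra Anthropic-specific options passed directly to the API.
  requestOptions: { top_k: 40 },
};
```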
Custom provider instance
Bring your own provider that implements the AIProvider interface:
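A minimal custom provider can be sketched as below. The docs state that the interface exposes `getCompletion` and `streamCompletion`; the signatures used here (messages in, `Promise<string>` and `AsyncGenerator<string>` out) are assumptions for illustration, not the library's exact types:

```typescript
// Minimal custom provider sketch. getCompletion and streamCompletion are the
// methods the AIProvider interface requires; the signatures are assumed.
type Message = { role: string; content: string };

class EchoProvider {
  // Non-streaming completion: resolve the full response as one string.
  async getCompletion(messages: Message[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `echo: ${last?.content ?? ""}`;
  }

  // Streaming completion: yield the response in word-sized chunks.
  async *streamCompletion(messages: Message[]): AsyncGenerator<string> {
    const full = await this.getCompletion(messages);
    for (const word of full.split(" ")) {
      yield word + " ";
    }
  }
}
```

An instance can then be passed as `provider` directly, and it is used as-is instead of being built from a config object.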
Common provider options
All provider configurations support these common options:
Controls randomness (0-2). Lower values make output more focused and deterministic.
Maximum tokens to generate in responses
Stop sequences to end generation early
When true, actions like insertContent and summarize will stream results back. Provider must support streaming.
Custom HTTP headers to include in requests
Custom fetch implementation (useful for Node.js environments or custom HTTP logic)
Base URL for the API endpoint (OpenAI and Anthropic only)
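Taken together, the common options above might map onto a config like the following sketch; the identifiers (`temperature`, `maxTokens`, `stop`, `stream`, `headers`, `fetch`, `baseUrl`) are a plausible naming of the documented options, not confirmed names:

```typescript
// Common options shared by all provider configs; property names are
// illustrative assumptions mapped from the option descriptions above.
const commonOptions = {
  temperature: 0.2,        // 0-2; lower values are more deterministic
  maxTokens: 512,          // cap on tokens generated per response
  stop: ["\n\n"],          // sequences that end generation early
  stream: true,            // stream insertContent/summarize results
  headers: { "X-Request-Source": "superdoc" }, // extra HTTP headers
  fetch: globalThis.fetch, // custom fetch, e.g. for Node.js environments
  baseUrl: "https://api.openai.com", // OpenAI and Anthropic only
};
```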
Helper functions
createAIProvider
Factory function for creating providers from configuration objects. AIActions automatically calls createAIProvider() internally, so you can pass configuration objects directly. This helper is useful for creating providers outside of initialization.
isAIProvider
Type guard to check if a value implements the AIProvider interface.
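Conceptually, such a guard checks for the two required methods. The sketch below is an illustrative re-implementation of that idea, not the library's actual isAIProvider:

```typescript
// Illustrative re-implementation of an AIProvider type guard: the docs say
// the interface exposes getCompletion and streamCompletion, so the check
// verifies both are functions. Not the library's own isAIProvider.
interface AIProvider {
  getCompletion(...args: unknown[]): unknown;
  streamCompletion(...args: unknown[]): unknown;
}

function looksLikeAIProvider(value: unknown): value is AIProvider {
  const v = value as Record<string, unknown> | null;
  return (
    v !== null &&
    typeof v === "object" &&
    typeof v.getCompletion === "function" &&
    typeof v.streamCompletion === "function"
  );
}
```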

