AI Actions requires two pieces of configuration: a user identity for AI-generated changes and a provider for LLM completions.

Required options

user (AIUser, required)
Identifies the AI assistant in tracked changes and comments.
user: {
  displayName: 'RedlineBot',
  userId: 'ai-assistant',      // optional
  profileUrl: 'https://...'    // optional
}
provider (AIProviderInput, required)
The LLM backend used for completions and streaming. Accepts a provider configuration object (OpenAI, Anthropic, HTTP) or a custom provider instance. See Provider configuration below.

Optional options

systemPrompt (string)
Overrides the default SuperDoc-centric system message. Use this to customize how the AI interprets document context and user instructions (see the combined example after this list).

enableLogging (boolean, default: false)
Emits parsing and traversal warnings to the console for debugging.

onReady (function)
Lifecycle callback fired when the AI is initialized and ready. See Hooks for details.

onStreamingStart (function)
Lifecycle callback fired when streaming begins. See Hooks for details.

onStreamingPartialResult (function)
Lifecycle callback fired for each streaming chunk. See Hooks for details.

onStreamingEnd (function)
Lifecycle callback fired when streaming completes. See Hooks for details.

onError (function)
Lifecycle callback fired when an error occurs. See Hooks for details.
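Putting the optional options together, here is a minimal sketch. The hook signatures are defined in Hooks, so the callback parameters shown here (a string chunk, an Error) are assumptions to verify there:
const ai = new AIActions(superdoc, {
  user,
  provider,
  systemPrompt: 'You are a careful legal editor. Propose tracked changes only.',
  enableLogging: true,
  onReady: () => console.log('AI Actions ready'),
  onStreamingStart: () => console.log('streaming started'),
  onStreamingPartialResult: (chunk) => console.log('chunk:', chunk),
  onStreamingEnd: () => console.log('streaming finished'),
  onError: (error) => console.error('AI Actions error:', error),
});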

Provider configuration

provider accepts either a config object (OpenAI, Anthropic, HTTP) or a custom implementation that exposes getCompletion and streamCompletion.
Browser vs Server: For browser applications, use the HTTP gateway pattern to keep API keys secure on your backend. OpenAI and Anthropic providers are for server-side use only (Next.js API routes, Node.js scripts, etc.).

HTTP gateway (Browser-safe)

Recommended for browser applications. Your backend handles API keys securely:
const ai = new AIActions(superdoc, {
  user,
  provider: {
    type: 'http',
    url: '/api/ai/complete', // Your backend endpoint
    headers: {
      'Authorization': `Bearer ${userAuthToken}`,
    },
  },
});
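On the server, the endpoint forwards the request using the key it holds. A minimal sketch as a Next.js App Router handler, assuming the provider's default request body includes { messages, stream } (match this to whatever your buildRequestBody sends):
// app/api/ai/complete/route.ts
export async function POST(req: Request) {
  const { messages, stream } = await req.json();

  // Forward to the LLM with the server-held key; it never reaches the browser
  const upstream = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: 'gpt-4o-mini', messages, stream }),
  });

  // Relay the upstream response (JSON or SSE stream) back to the client
  return new Response(upstream.body, {
    status: upstream.status,
    headers: {
      'Content-Type': upstream.headers.get('Content-Type') ?? 'application/json',
    },
  });
}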
For custom AI gateways or internal endpoints with advanced configuration:
provider: {
  type: 'http',
  url: 'https://your-ai-gateway/complete',

  // Optional configuration
  streamUrl: 'https://your-ai-gateway/stream',
  method: 'POST',
  streamResults: true,
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${userToken}`,
  },
  buildRequestBody: ({ messages, stream, options }) => ({
    stream,
    messages,
    model: options?.model ?? 'gpt-4o-mini',
    temperature: options?.temperature ?? 0.7,
    metadata: options?.metadata,
  }),
  parseCompletion: payload => payload.choices?.[0]?.message?.content ?? '',
  parseStreamChunk: payload => payload.choices?.[0]?.delta?.content ?? '',
}
HTTP-specific options:
streamUrl (string)
Separate URL for streaming requests. Falls back to url for all requests if not provided.

method (string, default: "POST")
HTTP method for requests.

buildRequestBody (function)
Custom function to build the request body. Receives a { messages, stream, options } context.

parseCompletion (function)
Custom function to parse non-streaming responses. Receives the response payload and should return a string.

parseStreamChunk (function)
Custom function to parse each streaming chunk. Receives the chunk payload and should return a string or undefined (see the sketch after this list).
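For example, if your gateway wraps completions as { result: { text } } and streams { token } chunks (hypothetical shapes), the parsers might look like:
provider: {
  type: 'http',
  url: 'https://your-ai-gateway/complete',
  // Hypothetical payload shapes; adjust to your gateway's actual contract
  parseCompletion: (payload) => payload?.result?.text ?? '',
  // Returning undefined skips chunks that carry no text
  parseStreamChunk: (payload) => payload?.token,
},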

OpenAI (Server-side only)

Security: Never use this provider in browser code. API keys will be exposed. Use the HTTP gateway pattern instead.
For server-side environments (Next.js API routes, Node.js, backend scripts):
const ai = new AIActions(superdoc, {
  user,
  provider: {
    type: 'openai',
    apiKey: process.env.OPENAI_API_KEY!,
    model: 'gpt-4o',

    // Optional configuration
    baseURL: 'https://api.openai.com/v1',
    organizationId: 'org_123',
    completionPath: '/chat/completions',
    temperature: 0.7,
    maxTokens: 2000,
    streamResults: true,
    headers: { 'OpenAI-Beta': 'assistants=v2' },
    requestOptions: { /* additional OpenAI options */ },
  },
});
OpenAI-specific options:
completionPath (string, default: "/chat/completions")
Custom completion endpoint path, useful for Azure OpenAI or custom deployments (see the sketch after this list).

organizationId (string)
OpenAI organization ID for API requests.

requestOptions (Record<string, any>)
Additional OpenAI-specific request options passed directly to the API.
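As a sketch, an Azure OpenAI deployment might combine baseURL, completionPath, and headers like this. The resource name, deployment name, api-version, and api-key header are placeholders for your own deployment's values; verify them (and whether a query string in completionPath suits your setup) against your Azure configuration:
provider: {
  type: 'openai',
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  model: 'gpt-4o',
  // Placeholder resource and deployment names; substitute your own
  baseURL: 'https://my-resource.openai.azure.com/openai/deployments/my-gpt-4o',
  completionPath: '/chat/completions?api-version=2024-06-01',
  headers: { 'api-key': process.env.AZURE_OPENAI_API_KEY! },
},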

Anthropic (Server-side only)

Security: Never use this provider in browser code. API keys will be exposed. Use the HTTP gateway pattern instead.
For server-side environments (Next.js API routes, Node.js, backend scripts):
provider: {
  type: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-5-sonnet-20241022',

  // Optional configuration
  baseURL: 'https://api.anthropic.com',
  apiVersion: '2023-06-01',
  temperature: 0.7,
  maxTokens: 2000,
  streamResults: true,
  headers: { /* custom headers */ },
  requestOptions: { /* additional Anthropic options */ },
}
Anthropic-specific options:
apiVersion (string, default: "2023-06-01")
Anthropic API version to use.

requestOptions (Record<string, any>)
Additional Anthropic-specific request options passed directly to the API.

Custom provider instance

Bring your own provider that implements the AIProvider interface:
const provider = {
  streamResults: true,  // optional

  async *streamCompletion(messages, options) {
    // Yield tokens incrementally
    yield 'Hello ';
    yield 'world';
  },

  async getCompletion(messages, options) {
    // Return complete response
    return 'response';
  },
};

const ai = new AIActions(superdoc, { user, provider });

Common provider options

All provider configurations support these common options:
temperature (number)
Controls randomness (0-2). Lower values make output more focused and deterministic.

maxTokens (number)
Maximum number of tokens to generate in responses.

stop (string[])
Stop sequences to end generation early.

streamResults (boolean)
When true, actions like insertContent and summarize stream results back. The provider must support streaming.

headers (Record<string, string>)
Custom HTTP headers to include in requests.

fetch (FetchLike)
Custom fetch implementation, useful for Node.js environments or custom HTTP logic (see the sketch after this list).

baseURL (string)
Base URL for the API endpoint (OpenAI and Anthropic only).
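Here is a sketch that combines several of these options. The logging fetch wrapper is illustrative and assumes FetchLike is compatible with the standard fetch signature:
// A custom fetch that logs each outgoing request
const loggingFetch: typeof fetch = async (input, init) => {
  console.log('AI request:', input);
  return fetch(input, init);
};

provider: {
  type: 'openai',
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4o',
  temperature: 0.2,   // lower values give more deterministic output
  maxTokens: 1500,
  stop: ['###'],      // end generation at a custom delimiter
  streamResults: true,
  fetch: loggingFetch,
},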

Helper functions

createAIProvider

Factory function for creating providers from configuration objects.
import { createAIProvider } from '@superdoc-dev/ai';

const provider = createAIProvider({
  type: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o',
});

// Use with AIActions
const ai = new AIActions(superdoc, { user, provider });
AIActions automatically calls createAIProvider() internally, so you can pass configuration objects directly. This helper is useful for creating providers outside of initialization.

isAIProvider

Type guard to check if a value implements the AIProvider interface.
import { isAIProvider } from '@superdoc-dev/ai';

if (isAIProvider(value)) {
  // TypeScript knows value is an AIProvider
  await value.getCompletion([...]);
}