SuperDoc tools work with any LLM provider or agent framework that supports tool use. The SDK provides tool definitions, schema formatting, and a dispatch function — you write the conversation loop.
LLM tools are in alpha. Tool names and schemas may change between releases.

AWS Bedrock

npm install @superdoc-dev/sdk @aws-sdk/client-bedrock-runtime
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime';
import { createSuperDocClient, chooseTools, dispatchSuperDocTool } from '@superdoc-dev/sdk';

const client = createSuperDocClient();
await client.connect();
await client.doc.open({ doc: './contract.docx' });

// Get tools in Anthropic format, convert to Bedrock toolSpec shape
const { tools } = await chooseTools({ provider: 'anthropic' });
const toolConfig = {
  tools: tools.map((t) => ({
    toolSpec: {
      name: t.name,
      description: t.description,
      inputSchema: { json: t.input_schema },
    },
  })),
};

const bedrock = new BedrockRuntimeClient({ region: 'us-east-1' });
const messages = [
  { role: 'user', content: [{ text: 'Review this contract.' }] },
];

while (true) {
  const res = await bedrock.send(new ConverseCommand({
    modelId: 'us.anthropic.claude-sonnet-4-6',
    messages,
    system: [{ text: 'You edit .docx files using SuperDoc tools. Use tracked changes for all edits.' }],
    toolConfig,
  }));

  const output = res.output?.message;
  if (!output) break;
  messages.push(output);

  const toolUses = output.content?.filter((b) => b.toolUse) ?? [];
  if (!toolUses.length) break;

  const results = [];
  for (const block of toolUses) {
    const { name, input, toolUseId } = block.toolUse;
    const result = await dispatchSuperDocTool(client, name, input ?? {});
    const json = typeof result === 'object' && result !== null ? result : { result };
    results.push({ toolResult: { toolUseId, content: [{ json }] } });
  }
  messages.push({ role: 'user', content: results });
}

await client.doc.save();
await client.dispose();
Auth: the Bedrock client picks up AWS credentials from aws configure, environment variables, or an IAM role. No separate API key is needed.

Best practices

  • Feed errors back. When a tool call fails, return the error as a tool result. Most models self-correct on the next turn.
  • Pin your model version. Use a specific model ID (e.g. us.anthropic.claude-sonnet-4-6) rather than an alias to avoid behavior changes between releases.
  • Use tracked changes for review workflows. Add “Use tracked changes for all edits” to the system prompt so a human can accept or reject each change.
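The first bullet can be sketched as a small wrapper around the dispatch call. This is a sketch, not part of the SDK: dispatchWithErrorFeedback is a hypothetical helper name, and the dispatch argument stands in for dispatchSuperDocTool(client, name, input) from the example above. A caught error is returned as a Converse toolResult with status 'error' so the model sees the failure and can retry with corrected input.

```javascript
// Hedged sketch: wrap tool dispatch so a failed call is fed back to the
// model as an error tool result instead of crashing the conversation loop.
// `dispatch` is any async (name, input) => result function, e.g. a closure
// over dispatchSuperDocTool(client, name, input).
async function dispatchWithErrorFeedback(dispatch, name, input, toolUseId) {
  try {
    const result = await dispatch(name, input ?? {});
    // Same JSON-wrapping rule as the main example above.
    const json = typeof result === 'object' && result !== null ? result : { result };
    return { toolResult: { toolUseId, content: [{ json }] } };
  } catch (err) {
    // Return the error text with status 'error'; most models self-correct
    // on the next turn when they can read what went wrong.
    return {
      toolResult: {
        toolUseId,
        content: [{ text: String(err?.message ?? err) }],
        status: 'error',
      },
    };
  }
}
```

In the Bedrock loop above, this replaces the body of the for-of over toolUses: push whatever the wrapper returns into results, and the user-role message carries both successes and failures back to the model.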

Example repository

Complete, runnable examples are available at examples/ai/.
  • LLM Tools — tool selection, dispatch, and the full API
  • Skills — reusable prompt templates
  • MCP Server — Model Context Protocol integration
  • SDKs — typed Node.js and Python wrappers