SDK Configuration

Last updated April 17, 2026

Base URL, custom headers, edge runtime compatibility, and web telemetry.

TypeScript SDK

The TypeScript SDK currently supports three client configuration fields:

import { Cencori } from 'cencori';
 
const cencori = new Cencori({
  apiKey: process.env.CENCORI_API_KEY,
  baseUrl: 'https://cencori.com',
  headers: {
    'X-Trace-ID': 'req_123',
  },
});

What Each Option Does

  • apiKey: your project secret key (csk_...)
  • baseUrl: override the default Cencori API origin
  • headers: attach custom headers to every request
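As an illustration of how the three options compose (this helper is hypothetical, not part of the cencori package or its internals), resolving them against sensible defaults might look like:

```typescript
// Hypothetical helper, for illustration only: shows how apiKey, baseUrl,
// and headers compose with defaults. Not the SDK's actual internal logic.
interface CencoriOptions {
  apiKey: string;
  baseUrl?: string;
  headers?: Record<string, string>;
}

function resolveOptions(opts: CencoriOptions) {
  return {
    apiKey: opts.apiKey,
    baseUrl: opts.baseUrl ?? 'https://cencori.com', // default API origin
    headers: { ...opts.headers },                   // attached to every request
  };
}

const resolved = resolveOptions({
  apiKey: 'csk_test',
  headers: { 'X-Trace-ID': 'req_123' },
});
```

Omitting baseUrl falls back to the default origin, and custom headers pass through unchanged.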

Vercel AI SDK Provider

Use createCencori() when you need explicit configuration for the Vercel AI SDK provider:

import { createCencori } from 'cencori/vercel';
 
export const cencori = createCencori({
  apiKey: process.env.CENCORI_API_KEY!,
  baseUrl: 'https://cencori.com',
  headers: {
    'X-Trace-ID': 'req_123',
  },
});

TanStack AI Adapter

The TanStack adapter exposes the same configuration surface:

import { createCencori } from 'cencori/tanstack';
 
const provider = createCencori({
  apiKey: process.env.CENCORI_API_KEY!,
  baseUrl: 'https://cencori.com',
});
 
const adapter = provider('gpt-4o');

Gateway Reliability vs SDK Configuration

Retries, failover, and circuit breaking are gateway behaviors handled inside Cencori. They are not currently configurable through additional TypeScript SDK flags.

Edge Compatibility

The TypeScript SDK and framework adapters are built on the standard fetch API, so they work in Edge runtimes such as Vercel Edge Functions, Cloudflare Workers, and Deno.

export const config = {
  runtime: 'edge',
};
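For context, a minimal Edge-style handler shape built only on Web standard APIs (Request, Response, URL) is sketched below; the route path and response shape are assumptions, and the actual Cencori SDK call is elided:

```typescript
// Sketch of an Edge-style handler using only Web standard APIs.
// The Cencori call itself is elided; a real handler would invoke the SDK
// where noted (it works in Edge runtimes because it uses fetch).
export default async function handler(req: Request): Promise<Response> {
  const { pathname } = new URL(req.url);

  // ... call the Cencori SDK here ...

  return new Response(JSON.stringify({ ok: true, path: pathname }), {
    status: 200,
    headers: { 'Content-Type': 'application/json' },
  });
}
```

The same handler shape runs unchanged on Vercel Edge Functions, Cloudflare Workers, and Deno, since all three expose the Request/Response primitives used here.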

Web Telemetry

You can report web traffic from your application to the Cencori dashboard:

// startTime should be captured (e.g. Date.now()) before handling the request;
// response is the Response your handler produced.
await cencori.telemetry.reportWebRequest({
  host: req.headers.get('host') || 'unknown',
  method: req.method,
  path: new URL(req.url).pathname,
  statusCode: response.status,
  latencyMs: Date.now() - startTime,
});

This is best-effort telemetry. It does not block your request path.
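One way to wire this up (the helper below is illustrative, not a Cencori API) is to build the payload from the Request/Response pair and a timestamp captured before handling began:

```typescript
// Illustrative helper, not part of the Cencori SDK: derives the telemetry
// payload fields from a Request/Response pair and a start timestamp.
interface WebRequestReport {
  host: string;
  method: string;
  path: string;
  statusCode: number;
  latencyMs: number;
}

function buildWebRequestReport(
  req: Request,
  res: Response,
  startTime: number,
): WebRequestReport {
  return {
    host: req.headers.get('host') ?? 'unknown',
    method: req.method,
    path: new URL(req.url).pathname,
    statusCode: res.status,
    latencyMs: Date.now() - startTime,
  };
}
```

Because reporting is best-effort, the resulting payload can be handed to cencori.telemetry.reportWebRequest without awaiting it on the hot path (fire-and-forget with a .catch for errors).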