# Cencori - Integration Contract for Code Agents
This file is a strict integration guide for code agents and automation tools.
Use it as the source of truth for:
- package names
- import paths
- environment variables
- base URLs
- request/response shapes
- stable public APIs
Do not treat this file as a product roadmap.
Only use the APIs and patterns documented here.
## What Cencori Is
Cencori is the runtime control layer for production AI.
Your application sends AI traffic to Cencori, and Cencori handles routing, security enforcement, observability, and cost tracking.
## Stable Public Surfaces
Use one of these integration paths:
1. Official TypeScript SDK
- Package: `cencori`
- Best for: server routes, backend services, direct SDK usage
2. Vercel AI SDK provider
- Import: `cencori/vercel`
- Best for: `streamText()`, `generateText()`, `useChat()`
3. TanStack AI adapter
- Import: `cencori/tanstack`
- Best for: `@tanstack/ai`
4. OpenAI-compatible endpoint
- Base URL: `https://api.cencori.com/v1`
- Best for: OpenAI-compatible SDKs, agent frameworks, desktop tools
5. Native Cencori HTTP endpoints
- Base URL: `https://cencori.com`
- Best for: direct calls to `/api/ai/*` or `/api/v1/telemetry/web`
## Do Not Use These As Public Contract Yet
Avoid generating code against these surfaces unless the target project already implements and verifies them:
- `cencori.compute.*`
- `cencori.workflow.*`
- `cencori.workflows.*`
- `cencori.steps.*`
- `cencori.storage.*`
- speculative workflow APIs like `waitFor()`, `define()`, `sendEvent()`, `transferToHuman()`
- unsupported TypeScript SDK config fields like `timeout`, `retries`, `maxRetries`, `failover`, `fallbackModels`, `circuitBreaker`
- undocumented SDK telemetry flags like `CENCORI_TELEMETRY=0`
- public memory-search/filter/hybrid-search assumptions unless verified in the target project
## Security Rules
- Use `CENCORI_API_KEY` for server-side secrets.
- Project secret keys use the `csk_...` prefix.
- Never expose `csk_...` keys in client-side code.
- Do not use `NEXT_PUBLIC_*` env vars for secret Cencori keys.
- Use `https://api.cencori.com/v1` only for OpenAI-compatible clients.
- Use the SDK default base URL unless you intentionally need to override it.
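A minimal sketch of the intended key handling, assuming a server-side module in a Next.js-style project (the file path is illustrative):
```typescript
// lib/server-only-cencori.ts (illustrative path; server-side only)
import { Cencori } from 'cencori';

// The csk_... project key stays on the server, read from CENCORI_API_KEY.
const cencori = new Cencori({
  apiKey: process.env.CENCORI_API_KEY!,
});

// Anti-pattern: NEXT_PUBLIC_* values are bundled into client code,
// so never place a csk_... key there.
// const leaked = process.env.NEXT_PUBLIC_CENCORI_KEY; // do not do this

export { cencori };
```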
## Recommended Next.js Setup
Assumption: Next.js App Router with Vercel AI SDK.
### Install
```bash
npm install cencori ai
```
### Environment
```bash
# .env.local
CENCORI_API_KEY=csk_live_...
```
### Shared Cencori Setup
```typescript
// lib/cencori.ts
import { Cencori } from 'cencori';
import { cencori } from 'cencori/vercel';
export const cencoriClient = new Cencori({
apiKey: process.env.CENCORI_API_KEY!,
});
export { cencori };
```
### Streaming Chat Route
```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { cencori } from '@/lib/cencori';
export async function POST(req: Request) {
const { messages, model = 'gpt-4o' } = await req.json();
const result = streamText({
model: cencori(model),
messages,
});
return result.toDataStreamResponse();
}
```
### Client Chat UI
```tsx
// app/page.tsx
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
    body: { model: 'gpt-4o' },
  });
  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>{message.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}
```
### Optional Web Telemetry
```typescript
// middleware.ts
import type { NextRequest } from 'next/server';
import { NextResponse } from 'next/server';
import { cencoriClient } from '@/lib/cencori';
export async function middleware(request: NextRequest) {
const startedAt = Date.now();
const response = NextResponse.next();
void cencoriClient.telemetry.reportWebRequest({
host: request.headers.get('host') || 'unknown',
method: request.method,
path: request.nextUrl.pathname,
queryString: request.nextUrl.search ? request.nextUrl.search.slice(1) : undefined,
statusCode: response.status,
userAgent: request.headers.get('user-agent') || undefined,
referer: request.headers.get('referer') || undefined,
latencyMs: Date.now() - startedAt,
});
return response;
}
```
## Official TypeScript SDK
### Install
```bash
npm install cencori
```
### Initialize
```typescript
import { Cencori } from 'cencori';
const cencori = new Cencori({
apiKey: process.env.CENCORI_API_KEY,
});
```
### Supported Client Configuration
The TypeScript SDK currently supports only:
- `apiKey`
- `baseUrl`
- `headers`
Example:
```typescript
const cencori = new Cencori({
apiKey: process.env.CENCORI_API_KEY,
baseUrl: 'https://cencori.com',
headers: {
'X-Trace-ID': 'req_123',
},
});
```
Do not generate SDK code using extra config fields that are not listed above.
## Core SDK Methods
### Chat
```typescript
const response = await cencori.ai.chat({
model: 'gpt-4o',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'Hello!' },
],
temperature: 0.2,
maxTokens: 300,
});
console.log(response.content);
console.log(response.toolCalls);
console.log(response.usage.totalTokens);
```
### Chat Streaming
```typescript
const stream = cencori.ai.chatStream({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Tell me a story.' }],
});
for await (const chunk of stream) {
process.stdout.write(chunk.delta);
}
```
### Structured Output
```typescript
const response = await cencori.ai.generateObject({
model: 'gpt-4o',
prompt: 'Generate a fictional user profile.',
schema: {
type: 'object',
properties: {
name: { type: 'string' },
age: { type: 'number' },
},
required: ['name', 'age'],
},
});
console.log(response.object);
```
### Embeddings
```typescript
const response = await cencori.ai.embeddings({
model: 'text-embedding-3-small',
input: 'Hello world',
});
console.log(response.embeddings[0]);
```
### Image Generation
```typescript
const response = await cencori.ai.generateImage({
prompt: 'A futuristic city at sunset',
model: 'gpt-image-1.5',
size: '1024x1024',
});
console.log(response.images[0].url);
```
### RAG
```typescript
const response = await cencori.ai.rag({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'What is our refund policy?' }],
namespace: 'docs',
limit: 5,
});
console.log(response.message.content);
console.log(response.sources);
```
### Web Telemetry
```typescript
await cencori.telemetry.reportWebRequest({
host: 'app.example.com',
method: 'GET',
path: '/api/chat',
statusCode: 200,
latencyMs: 42,
});
```
## SDK Chat Response Shape
`cencori.ai.chat()` returns a TypeScript SDK response with camelCase usage fields:
```json
{
"id": "chatcmpl_123",
"model": "gpt-4o",
"content": "Hello! How can I help?",
"toolCalls": null,
"finishReason": "stop",
"usage": {
"promptTokens": 13,
"completionTokens": 7,
"totalTokens": 20
}
}
```
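For reference, a hedged sketch of reading these fields from the SDK (field names follow the shape above; values are illustrative):
```typescript
const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// camelCase usage fields, matching the JSON shape above
console.log(response.finishReason);       // "stop"
console.log(response.usage.promptTokens); // e.g. 13
console.log(response.usage.totalTokens);  // e.g. 20
```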
## Vercel AI SDK
### Install
```bash
npm install cencori ai
```
### Default Provider
```typescript
import { cencori } from 'cencori/vercel';
import { generateText } from 'ai';
const result = await generateText({
model: cencori('gpt-4o'),
prompt: 'Write a haiku about AI infrastructure.',
});
console.log(result.text);
```
### Custom Provider
```typescript
import { createCencori } from 'cencori/vercel';
export const cencori = createCencori({
apiKey: process.env.CENCORI_API_KEY!,
});
```
Prefer the `cencori/vercel` import path for the Vercel AI SDK provider; do not use the root-package re-export in generated examples.
## TanStack AI
### Install
```bash
npm install cencori @tanstack/ai
```
### Default Adapter
```typescript
import { chat } from '@tanstack/ai';
import { cencori } from 'cencori/tanstack';
for await (const chunk of chat({
adapter: cencori('gpt-4o'),
messages: [{ role: 'user', content: 'Hello world' }],
})) {
if (chunk.type === 'content') {
console.log(chunk.delta);
}
}
```
### Custom Adapter Factory
```typescript
import { createCencori } from 'cencori/tanstack';
const provider = createCencori({
apiKey: process.env.CENCORI_API_KEY!,
});
const adapter = provider('gpt-4o');
```
## OpenAI-Compatible Clients
Use this mode when a tool already expects an OpenAI-compatible client.
### Required Settings
- `api_key`: your Cencori project key (`csk_...`)
- `base_url`: `https://api.cencori.com/v1`
### Python
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CENCORI_API_KEY"],  # csk_... project key
    base_url="https://api.cencori.com/v1"
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
### Node.js
```typescript
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: process.env.CENCORI_API_KEY,
baseURL: 'https://api.cencori.com/v1',
});
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
});
```
### Agent Frameworks And Desktop Tools
For OpenAI-compatible frameworks and tools, set:
```bash
OPENAI_BASE_URL=https://api.cencori.com/v1
OPENAI_API_BASE=https://api.cencori.com/v1
OPENAI_API_KEY=$CENCORI_API_KEY
```
This applies to tools such as:
- Continue
- CrewAI
- LangChain `ChatOpenAI`
- AutoGen
- other OpenAI-compatible agent runtimes
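As one concrete illustration, a hedged TypeScript sketch for LangChain's `ChatOpenAI` (the `@langchain/openai` package and its `configuration.baseURL` option are assumptions about the target project's LangChain version; verify before generating code):
```typescript
import { ChatOpenAI } from '@langchain/openai';

// Route an OpenAI-compatible LangChain model through Cencori.
const model = new ChatOpenAI({
  model: 'gpt-4o',
  apiKey: process.env.CENCORI_API_KEY, // csk_... project key
  configuration: {
    baseURL: 'https://api.cencori.com/v1',
  },
});

const result = await model.invoke('Hello!');
console.log(result.content);
```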
## Native Cencori HTTP Endpoints
Base origin:
- `https://cencori.com`
Common endpoints:
- `POST /api/ai/chat`
- `POST /api/ai/embeddings`
- `POST /api/ai/images/generate`
- `POST /api/ai/rag`
- `POST /api/memory/store`
- `GET /api/memory/namespaces`
- `POST /api/memory/namespaces`
- `GET /api/memory/{id}`
- `DELETE /api/memory/{id}`
- `POST /api/v1/telemetry/web`
### Native Chat Example
```bash
curl https://cencori.com/api/ai/chat \
-H "CENCORI_API_KEY: csk_live_..." \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}],
"stream": false
}'
```
### Native Chat Response
The native chat endpoint includes an OpenAI-compatible `choices[0].message` shape and Cencori convenience fields such as `content`, `toolCalls`, `cost_usd`, and `finish_reason`.
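A hedged sketch of that response, limited to the fields named above (values are illustrative and additional fields may be present):
```json
{
  "content": "Hello! How can I help?",
  "toolCalls": null,
  "finish_reason": "stop",
  "cost_usd": 0.0002,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help?"
      },
      "finish_reason": "stop"
    }
  ]
}
```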
## OpenAI-Compatible HTTP Endpoint
Base origin:
- `https://api.cencori.com/v1`
### Chat Completions Example
```bash
curl https://api.cencori.com/v1/chat/completions \
-H "Authorization: Bearer csk_live_..." \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
```
### OpenAI-Compatible Response Shape
```json
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1677652288,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 13,
"completion_tokens": 7,
"total_tokens": 20
}
}
```
## Authentication Rules
### Use `CENCORI_API_KEY` Header For
- `https://cencori.com/api/ai/*`
- `https://cencori.com/api/memory/*`
- `https://cencori.com/api/v1/telemetry/web`
### Use `Authorization: Bearer ...` For
- `https://api.cencori.com/v1/*`
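A minimal sketch of the two header styles with `fetch`, using the endpoints and payloads documented in this file:
```typescript
// Native Cencori endpoints: CENCORI_API_KEY header
await fetch('https://cencori.com/api/ai/chat', {
  method: 'POST',
  headers: {
    CENCORI_API_KEY: process.env.CENCORI_API_KEY!,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

// OpenAI-compatible endpoint: Authorization: Bearer header
await fetch('https://api.cencori.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.CENCORI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});
```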
## Model Selection
One Cencori project key can be used across many models.
Choose the model per request:
```typescript
await cencori.ai.chat({ model: 'gpt-4o', messages });
await cencori.ai.chat({ model: 'claude-sonnet-4.5', messages });
await cencori.ai.chat({ model: 'gemini-2.5-flash', messages });
```
## Memory Guidance
Public memory flows that are safe to generate today:
- create namespace
- list namespaces
- store memory
- get/delete memory by ID
- use `cencori.ai.rag()` or `POST /api/ai/rag` for retrieval-backed generation
Do not assume advanced public memory features like hybrid search, BM25, metadata-operator filtering, import/export CLI, or TTL-specific helper fields unless the target project already verifies them.
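For the namespace flows above, the native endpoints can be called directly; a hedged curl sketch (the request body field is an assumption, not part of this contract):
```bash
# Create a namespace (body shape is illustrative; verify in the target project)
curl -X POST https://cencori.com/api/memory/namespaces \
  -H "CENCORI_API_KEY: csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"name": "docs"}'

# List namespaces
curl https://cencori.com/api/memory/namespaces \
  -H "CENCORI_API_KEY: csk_live_..."
```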
If you store memories directly, use `expiresAt` when you need expiry:
```typescript
await cencori.memory.store({
namespace: 'session-123',
content: 'User is interested in pricing',
expiresAt: '2026-12-31T23:59:59Z',
});
```
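To fetch or remove a stored memory by ID, use the native endpoints listed earlier; a hedged sketch with a placeholder ID:
```bash
# Get a memory by ID (mem_123 is a placeholder)
curl https://cencori.com/api/memory/mem_123 \
  -H "CENCORI_API_KEY: csk_live_..."

# Delete a memory by ID
curl -X DELETE https://cencori.com/api/memory/mem_123 \
  -H "CENCORI_API_KEY: csk_live_..."
```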
## Decision Rules For Code Agents
When integrating Cencori into a project:
1. Prefer the smallest working integration.
2. Reuse existing routes, auth, and env patterns.
3. Keep `CENCORI_API_KEY` on the server.
4. Prefer `cencori/vercel` when the project already uses Vercel AI SDK.
5. Prefer the OpenAI-compatible base URL only when a framework expects it.
6. Preserve the app's existing response contract when replacing another provider.
7. Do not generate code against `compute`, `workflow`, or speculative deployment APIs.
8. Do not assume undocumented routing, failover, or evaluation settings are user-configurable.
## Documentation Links
- Docs: https://cencori.com/docs
- Vercel AI SDK: https://cencori.com/docs/integrations/vercel-ai-sdk
- TanStack AI: https://cencori.com/docs/integrations/tanstack
- Authentication: https://cencori.com/docs/api/authentication
- Chat API: https://cencori.com/docs/api/chat
- Continue: https://cencori.com/docs/agentic-engineering/desktop/continue