API Reference
Chat API
Last updated April 17, 2026
Reference for Cencori chat completions across the official SDK and OpenAI-compatible HTTP endpoints.
Overview
Cencori exposes chat completions through two main surfaces:
- the official SDKs (`cencori`, `cencori/vercel`, `cencori/tanstack`)
- the OpenAI-compatible endpoint at `https://api.cencori.com/v1/chat/completions`
Both routes give you Cencori's routing, security enforcement, logging, and cost tracking.
Official TypeScript SDK
```typescript
import { Cencori } from 'cencori';

const cencori = new Cencori({
  apiKey: process.env.CENCORI_API_KEY,
});

const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
  temperature: 0.2,
  maxTokens: 300,
});

console.log(response.content);
console.log(response.toolCalls);
console.log(response.usage.totalTokens);
```

SDK Response Shape
```json
{
  "id": "chatcmpl_123",
  "model": "gpt-4o",
  "content": "The capital of France is Paris.",
  "toolCalls": null,
  "finishReason": "stop",
  "usage": {
    "promptTokens": 13,
    "completionTokens": 7,
    "totalTokens": 20
  }
}
```

Native Cencori HTTP Endpoint
Use the native endpoint when you want direct access to `/api/ai/chat`:

```bash
curl https://cencori.com/api/ai/chat \
  -H "CENCORI_API_KEY: csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```

This endpoint returns the OpenAI-compatible `choices[0].message` shape and also includes Cencori convenience fields such as `content`, `toolCalls`, and `cost_usd`.
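As a rough TypeScript equivalent of the curl call above, a small fetch wrapper might look like the sketch below. `buildChatRequest` is a hypothetical helper for illustration, not part of the Cencori SDK:

```typescript
// Hypothetical helper: builds fetch options for the native /api/ai/chat
// endpoint, mirroring the headers used in the curl example.
function buildChatRequest(apiKey: string, body: object) {
  return {
    method: 'POST',
    headers: {
      CENCORI_API_KEY: apiKey,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(body),
  };
}

// Usage sketch (network call commented out):
// const res = await fetch(
//   'https://cencori.com/api/ai/chat',
//   buildChatRequest(process.env.CENCORI_API_KEY!, {
//     model: 'gpt-4o',
//     messages: [{ role: 'user', content: 'Hello!' }],
//     stream: false,
//   }),
// );
// const data = await res.json();
```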
OpenAI-Compatible Endpoint
Use this when a client or framework already expects the OpenAI Chat Completions API:
```bash
curl https://api.cencori.com/v1/chat/completions \
  -H "Authorization: Bearer csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Request Parameters
| Field | Type | Required | Notes |
|---|---|---|---|
| `model` | string | Yes | Any model routed through Cencori |
| `messages` | array | Yes | Conversation history |
| `temperature` | number | No | Sampling temperature |
| `maxTokens` | number | No | Max output tokens |
| `stream` | boolean | No | Stream the response |
| `tools` | array | No | Function/tool definitions |
| `toolChoice` | string or object | No | Tool selection mode |
| `userId` | string | No | End-user identifier for attribution |
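For illustration, a request body that exercises the optional fields above. The values and the `lookup_ticket` tool are hypothetical, and the field names use the camelCase spellings from the table:

```typescript
// Illustrative request body combining the optional parameters from the table.
// userId ties the call to an end user for attribution and cost tracking.
const request = {
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Summarize ticket T-1001.' }],
  temperature: 0.3,
  maxTokens: 500,
  stream: false,
  tools: [
    {
      type: 'function',
      function: {
        name: 'lookup_ticket', // hypothetical tool for this example
        description: 'Fetch a support ticket by id',
        parameters: {
          type: 'object',
          properties: { id: { type: 'string' } },
          required: ['id'],
        },
      },
    },
  ],
  toolChoice: 'auto',
  userId: 'user_42',
};
```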
OpenAI-Compatible Response Shape
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}
```

Streaming
SDK Streaming
```typescript
const stream = cencori.ai.chatStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story.' }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
```

HTTP Streaming
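With `stream: true`, the HTTP endpoint responds with server-sent events in the OpenAI chunk format: `data: {json}` lines carrying `choices[0].delta`, terminated by `data: [DONE]`. A minimal extractor under that assumption (the chunk shape is inferred from the OpenAI-compatible surface, not confirmed field by field):

```typescript
// Minimal parser for OpenAI-format SSE chunks: collects the streamed text
// from each "data: <json>" line until the "[DONE]" sentinel.
function extractDeltas(sseBody: string): string {
  let text = '';
  for (const line of sseBody.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue; // skip blank/comment lines
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break;
    const chunk = JSON.parse(payload);
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return text;
}
```

The curl request below passes `-N` to disable output buffering so chunks print as they arrive.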
```bash
curl -N https://api.cencori.com/v1/chat/completions \
  -H "Authorization: Bearer csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Tell me a story."}],
    "stream": true
  }'
```

Tool Calling
```typescript
const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get weather for a location',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' },
          },
          required: ['location'],
        },
      },
    },
  ],
});

console.log(response.toolCalls);
```

When the model decides to call a tool, `toolCalls` is non-null; execute the named functions in your application, then send the results back to the model in a follow-up request.

Error Handling
Handle transient errors, security blocks, and rate limits in your application:
```typescript
try {
  await cencori.ai.chat({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  console.error(error);
}
```
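For transient failures such as timeouts or rate limits, a retry wrapper with exponential backoff is a common pattern. The helper below is a sketch, not part of the SDK; deciding which errors are retryable (for example, not security blocks) is left to your application:

```typescript
// Illustrative retry helper; not part of the Cencori SDK.
// backoffMs doubles the wait on each attempt: 250ms, 500ms, 1000ms, ...
function backoffMs(attempt: number, baseMs = 250): number {
  return baseMs * 2 ** attempt;
}

async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error; // naive: treats every error as retryable
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
    }
  }
  throw lastError;
}

// Usage sketch:
// const response = await withRetries(() =>
//   cencori.ai.chat({ model: 'gpt-4o', messages: [{ role: 'user', content: 'Hello!' }] }),
// );
```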