API Reference

Chat API

Last updated April 17, 2026

Reference for Cencori chat completions across the official SDK and OpenAI-compatible HTTP endpoints.

Overview

Cencori exposes chat completions through two main surfaces:

  1. the official SDKs (cencori, cencori/vercel, cencori/tanstack)
  2. the OpenAI-compatible endpoint at https://api.cencori.com/v1/chat/completions

Both routes give you Cencori's routing, security enforcement, logging, and cost tracking.

Official TypeScript SDK

import { Cencori } from 'cencori';
 
const cencori = new Cencori({
  apiKey: process.env.CENCORI_API_KEY,
});
 
const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
  temperature: 0.2,
  maxTokens: 300,
});
 
console.log(response.content);
console.log(response.toolCalls);
console.log(response.usage.totalTokens);

SDK Response Shape

{
  "id": "chatcmpl_123",
  "model": "gpt-4o",
  "content": "The capital of France is Paris.",
  "toolCalls": null,
  "finishReason": "stop",
  "usage": {
    "promptTokens": 13,
    "completionTokens": 7,
    "totalTokens": 20
  }
}

Native Cencori HTTP Endpoint

Use the native endpoint when you want to call Cencori directly over HTTP and receive its native response shape:

curl https://cencori.com/api/ai/chat \
  -H "CENCORI_API_KEY: csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'

This endpoint returns the OpenAI-compatible choices[0].message shape and also includes Cencori convenience fields such as content, toolCalls, and cost_usd.
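Because the native response carries both shapes, client code can prefer the convenience field and fall back to the OpenAI-compatible array. The helper below is an illustrative sketch, not part of the SDK; it assumes only the fields documented above.

```typescript
// Read the assistant text from a native /api/ai/chat response,
// preferring the Cencori convenience field `content` and falling
// back to the OpenAI-compatible `choices` array.
interface NativeChatResponse {
  content?: string | null;
  choices?: { message: { role: string; content: string | null } }[];
}

function assistantText(res: NativeChatResponse): string | null {
  if (typeof res.content === "string") return res.content;
  return res.choices?.[0]?.message.content ?? null;
}
```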

OpenAI-Compatible Endpoint

Use this when a client or framework already expects the OpenAI Chat Completions API:

curl https://api.cencori.com/v1/chat/completions \
  -H "Authorization: Bearer csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Request Parameters

Field        Type              Required  Notes
model        string            Yes       Any model routed through Cencori
messages     array             Yes       Conversation history
temperature  number            No        Sampling temperature
maxTokens    number            No        Maximum output tokens
stream       boolean           No        Stream the response
tools        array             No        Function/tool definitions
toolChoice   string or object  No        Tool selection mode
userId       string            No        End-user identifier for attribution
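A request body combining the optional parameters might look like the sketch below. It uses the camelCase names from the table (as accepted by the SDK and native endpoint); when targeting the OpenAI-compatible endpoint, use the standard snake_case equivalents (max_tokens, tool_choice) instead.

```json
{
  "model": "gpt-4o",
  "messages": [{ "role": "user", "content": "Summarize this release note." }],
  "temperature": 0.2,
  "maxTokens": 300,
  "stream": false,
  "toolChoice": "auto",
  "userId": "user_123"
}
```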

OpenAI-Compatible Response Shape

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}
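The two response shapes in this document map onto each other field by field. The sketch below shows one possible client-side mapping from the OpenAI-compatible shape to the flatter SDK shape; it is illustrative only and uses just the fields shown in the two examples above.

```typescript
// Map an OpenAI-compatible chat completion onto the flatter
// SDK-style shape documented earlier (content, finishReason,
// camelCase usage fields).
interface OpenAICompatibleResponse {
  id: string;
  model: string;
  choices: {
    message: { role: string; content: string | null };
    finish_reason: string;
  }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

function toSdkShape(r: OpenAICompatibleResponse) {
  const choice = r.choices[0];
  return {
    id: r.id,
    model: r.model,
    content: choice.message.content,
    finishReason: choice.finish_reason,
    usage: {
      promptTokens: r.usage.prompt_tokens,
      completionTokens: r.usage.completion_tokens,
      totalTokens: r.usage.total_tokens,
    },
  };
}
```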

Streaming

SDK Streaming

const stream = cencori.ai.chatStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story.' }],
});
 
for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}

HTTP Streaming

curl -N https://api.cencori.com/v1/chat/completions \
  -H "Authorization: Bearer csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Tell me a story."}],
    "stream": true
  }'
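With stream: true, the HTTP endpoint returns server-sent events. The parser below is a minimal sketch that assumes OpenAI-style SSE framing (lines of the form data: {...} with a final data: [DONE] sentinel, each chunk carrying choices[0].delta.content); verify the exact framing against your responses before relying on it.

```typescript
// Extract delta text from one SSE chunk, assuming OpenAI-style
// "data: {...}" lines terminated by a "data: [DONE]" sentinel.
function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // ignore blank/comment lines
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const json = JSON.parse(payload);
    const delta = json.choices?.[0]?.delta?.content;
    if (typeof delta === "string") deltas.push(delta);
  }
  return deltas;
}
```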

Tool Calling

const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get weather for a location',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string' },
          },
          required: ['location'],
        },
      },
    },
  ],
});
 
console.log(response.toolCalls);
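When the model decides to call a tool, your application executes it and (typically) sends the result back in a follow-up message. The dispatcher below is a sketch: the toolCalls element shape shown here (an OpenAI-style function name plus JSON-encoded arguments string) is an assumption, so check it against the actual SDK response for your version.

```typescript
// Dispatch tool calls to local handlers. The ToolCall shape is an
// assumed OpenAI-style structure, not confirmed by the SDK docs.
type ToolCall = { id: string; function: { name: string; arguments: string } };

// Hypothetical handler registry keyed by tool name.
const handlers: Record<string, (args: { location: string }) => string> = {
  get_weather: (args) => `Weather for ${args.location}: sunny`,
};

function runToolCalls(toolCalls: ToolCall[]): { id: string; result: string }[] {
  return toolCalls.map((call) => {
    const handler = handlers[call.function.name];
    const args = JSON.parse(call.function.arguments);
    return { id: call.id, result: handler ? handler(args) : "unknown tool" };
  });
}
```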

Error Handling

Handle transient errors, security blocks, and rate limits in your application:

try {
  await cencori.ai.chat({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hello!' }],
  });
} catch (error) {
  console.error(error);
}
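For transient failures such as timeouts or rate limits, a retry with exponential backoff is a common pattern. The wrapper below is a generic sketch, not part of the SDK; which errors are safe to retry depends on the error your SDK version throws (security blocks, for example, should generally not be retried).

```typescript
// Retry an async operation with exponential backoff. Generic
// helper, independent of any particular SDK.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Wait baseMs, 2*baseMs, 4*baseMs, ... between attempts.
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage: wrap the chat call, e.g. `await withRetry(() => cencori.ai.chat({ model: 'gpt-4o', messages }))`.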