Chat Endpoint
Last updated March 3, 2026
Conversational AI with streaming, tool calling, and structured output.
The chat endpoint provides conversational AI capabilities with support for streaming, tool calling, and multiple providers.
Basic Request
```javascript
const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' }
  ]
});

console.log(response.content);
```

Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model identifier |
| messages | array | Yes | Conversation messages |
| temperature | number | No | Sampling randomness (0-2); higher values are more random |
| maxTokens | number | No | Maximum number of output tokens |
| stream | boolean | No | Stream the response as chunks |
| tools | array | No | Functions the model may call |
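The constraints above can be checked client-side before a request is sent; a minimal sketch (the rules simply mirror the table and are not an SDK feature):

```javascript
// Validate a chat request against the parameter table above.
// Illustrative only -- mirrors the documented constraints, not SDK behavior.
function validateChatRequest(params) {
  const errors = [];
  if (typeof params.model !== 'string' || params.model.length === 0) {
    errors.push('model is required and must be a string');
  }
  if (!Array.isArray(params.messages) || params.messages.length === 0) {
    errors.push('messages is required and must be a non-empty array');
  }
  if (params.temperature !== undefined &&
      (params.temperature < 0 || params.temperature > 2)) {
    errors.push('temperature must be between 0 and 2');
  }
  return errors;
}
```

Catching an out-of-range temperature locally can avoid a round trip for a request that would be rejected anyway.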
Response
```javascript
{
  id: 'chatcmpl-...',
  content: 'Hello! How can I help you today?',
  model: 'gpt-4o',
  finishReason: 'stop',
  usage: {
    promptTokens: 15,
    completionTokens: 10,
    totalTokens: 25
  },
  toolCalls: null
}
```

Streaming
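Each streamed chunk carries a delta string with the next slice of text, as in the example below. When the full reply is also needed (for logging, say), the deltas can be collected; a minimal sketch under that chunk-shape assumption:

```javascript
// Accumulate streamed chunks into the complete assistant message.
// Illustrative only -- assumes each chunk exposes a `delta` string,
// as in the streaming example on this page.
async function collectStream(stream) {
  let text = '';
  for await (const chunk of stream) {
    text += chunk.delta;
  }
  return text;
}
```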
```javascript
const stream = cencori.ai.chatStream({
  model: 'claude-3-5-sonnet-latest',
  messages: [{ role: 'user', content: 'Tell me a story' }]
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
```

HTTP API
```bash
curl -X POST https://cencori.com/api/ai/chat \
  -H "CENCORI_API_KEY: csk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Supported Models
| Provider | Models |
|---|---|
| OpenAI | gpt-5.2-pro, gpt-5.2, gpt-5.1, gpt-5-pro, gpt-5, gpt-5-mini, gpt-5-nano, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, gpt-4-turbo, o3-pro, o3, o3-mini, o4-mini, o1 |
| Anthropic | claude-opus-4.6, claude-opus-4.5, claude-sonnet-4.5, claude-opus-4, claude-sonnet-4, claude-haiku-4.5, claude-3-7-sonnet, claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 |
| Google | gemini-3-pro, gemini-3-flash, gemini-3-deep-think, gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.0-flash |
| xAI | grok-4.1, grok-4.1-fast, grok-4, grok-4-heavy, grok-3, grok-3-mini, grok-code-fast-1 |
| DeepSeek | deepseek-v3.2, deepseek-v3.2-speciale, deepseek-v3.1, deepseek-chat, deepseek-reasoner, deepseek-coder-v2 |
| Mistral | mistral-large-latest, mistral-medium-latest, mistral-small-latest, codestral-latest, devstral-latest, magistral-medium |
| Groq | llama-4-maverick, llama-4-scout, llama-3.3-70b-versatile, llama-3.1-8b-instant, mixtral-8x7b-32768 |
| Cohere | command-a-03-2025, command-r-plus-08-2024, command-r, command-light |
| Together | meta-llama/Llama-4-Maverick, meta-llama/Llama-3.3-70B-Instruct-Turbo, Qwen/Qwen2.5-72B-Instruct-Turbo |
| Perplexity | sonar-pro, sonar, sonar-reasoning-pro |
| OpenRouter | openai/gpt-5, anthropic/claude-opus-4.5, google/gemini-3-pro, x-ai/grok-4 |
| Qwen | qwen2.5-72b-instruct, qwen2.5-32b-instruct, qwen2.5-coder-32b, qwq-32b-preview |
| Meta | llama-4-maverick, llama-4-scout, llama-3.3-70b, llama-3.2-90b-vision, llama-3.1-405b |
| Hugging Face | meta-llama/Llama-4-Maverick, meta-llama/Llama-3.3-70B-Instruct, Qwen/Qwen2.5-72B-Instruct |
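Since routing is driven entirely by the model string, switching providers is a one-field change in the request. As a convenience, a client can guess the provider from the identifier; an illustrative helper covering a few of the prefixes above (the rules are inferred from this table, not an official mapping):

```javascript
// Guess the provider for a model identifier, based on the table above.
// Illustrative only -- prefix rules are inferred from the table,
// not an official mapping.
function inferProvider(model) {
  if (/^(gpt-|o\d)/.test(model)) return 'OpenAI';
  if (model.startsWith('claude-')) return 'Anthropic';
  if (model.startsWith('gemini-')) return 'Google';
  if (model.startsWith('grok-')) return 'xAI';
  if (model.startsWith('deepseek-')) return 'DeepSeek';
  return 'unknown';
}
```

Note that some families (llama-*, for instance) appear under several providers above, so a prefix alone cannot disambiguate them.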