
AI Overview

Last updated March 3, 2026

Build production-ready AI applications with a unified API, multi-provider routing, built-in security, and complete observability.

Cencori acts as a transparent proxy layer between your application and AI providers. Instead of calling OpenAI, Anthropic, or Google directly, you route requests through Cencori.

How It Works

Your Application → Cencori → AI Models

By routing through Cencori, you get:

  • Multi-Provider Routing - Switch between OpenAI, Anthropic, and Google with a single parameter
  • Automatic Security - PII detection, prompt injection protection, content filtering
  • Complete Observability - Every request logged with full prompts, responses, and costs
  • Failover & Reliability - Automatic retries and provider fallback
  • Cost Tracking - Real-time usage and spend per project

Documentation

  • AI Gateway - The secure proxy layer for all AI requests
  • Cencori SDK - Official SDK for Node.js and TypeScript
  • Vercel AI SDK - Integration with the Vercel AI SDK
  • TanStack AI - Framework-agnostic adapter
  • Providers - Supported providers and models
  • Failover - Automatic retries and provider fallback
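At its core, provider fallback means trying providers in order and returning the first successful response. A minimal sketch of that loop, independent of the SDK (the `Provider` type and `withFallback` helper are illustrative, not Cencori APIs):

```typescript
// Try each provider in order; return the first success, throw the last error.
type Provider = (prompt: string) => Promise<string>;

async function withFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // this provider failed; try the next one
    }
  }
  throw lastError;
}
```

When you route through Cencori, this ordering and retry logic happens inside the gateway, so your application makes a single call.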

Endpoints

  • Chat - Conversational AI with streaming (OpenAI, Anthropic, Google, xAI, Mistral, DeepSeek)
  • Images - Image generation from text (OpenAI GPT Image and DALL-E; Google Imagen)
  • Embeddings - Vector embeddings for RAG (OpenAI, Google, Cohere)
  • Audio - Speech-to-text and text-to-speech (OpenAI Whisper and TTS)
  • Moderation - Content safety filtering (OpenAI)
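The Embeddings endpoint returns vectors that you typically compare with cosine similarity when building RAG or semantic search. A self-contained sketch of that comparison step, assuming embeddings as plain `number[]` arrays:

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means unrelated. The core ranking operation in semantic search.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```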

AI Memory

Vector storage for RAG, conversation history, and semantic search. Store content with automatic embedding generation and retrieve it with natural language queries.
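The underlying flow is: embed content on write, embed the query on read, and rank stored entries by vector similarity. A toy in-memory version of that flow (the `MemoryStore` class and the `embed` callback are illustrations, not the Cencori API; the real service generates embeddings for you):

```typescript
// Toy in-memory store-then-retrieve flow: embed on write, rank by
// cosine similarity on read. Embeddings here come from a stand-in function.
type Entry = { text: string; vector: number[] };

class MemoryStore {
  private entries: Entry[] = [];
  constructor(private embed: (text: string) => number[]) {}

  store(text: string): void {
    this.entries.push({ text, vector: this.embed(text) });
  }

  // Return stored texts ranked by similarity to the query, best first.
  search(query: string, topK = 3): string[] {
    const q = this.embed(query);
    return this.entries
      .map((e) => ({ text: e.text, score: cosine(q, e.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK)
      .map((r) => r.text);
  }
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}
```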

Quick Start

import { Cencori } from 'cencori';
 
const cencori = new Cencori({ apiKey: 'csk_...' });
 
// Chat with any model
const response = await cencori.ai.chat({
  model: 'gpt-4o',  // or 'claude-opus-4', 'gemini-2.5-flash'
  messages: [{ role: 'user', content: 'Hello!' }]
});
 
console.log(response.content);