Failover

Last updated April 17, 2026

How Cencori handles retries, fallback routing, and circuit breaking at the gateway layer.

Cencori handles failover in the gateway, not through per-request SDK flags. When a provider or model path is degraded, Cencori can retry, route to a fallback path, and open a circuit breaker to protect future traffic.

What Happens Automatically

When a supported request fails, Cencori can:

  1. retry transient upstream failures
  2. fall back to another provider or equivalent route
  3. stop sending traffic to a failing path until it recovers
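The three behaviors above compose into a simple sequence, sketched here in TypeScript. The class, thresholds, and retry counts are illustrative assumptions, not Cencori's gateway code:

```typescript
// Hypothetical sketch of retry -> fallback -> circuit breaking.
type Attempt = () => Promise<string>;

class CircuitBreaker {
  private failures = 0;
  constructor(private readonly threshold: number) {}
  get open(): boolean {
    return this.failures >= this.threshold;
  }
  recordFailure(): void {
    this.failures += 1;
  }
  recordSuccess(): void {
    this.failures = 0;
  }
}

async function withFailover(
  primary: Attempt,
  fallback: Attempt,
  breaker: CircuitBreaker,
  retries = 2,
): Promise<string> {
  // Step 3: once the breaker is open, stop sending traffic to the failing path.
  if (!breaker.open) {
    // Step 1: retry transient failures on the primary path.
    for (let attempt = 0; attempt <= retries; attempt++) {
      try {
        const result = await primary();
        breaker.recordSuccess();
        return result;
      } catch {
        breaker.recordFailure();
      }
    }
  }
  // Step 2: fall back to another provider or equivalent route.
  return fallback();
}
```

Because all of this runs in the gateway, the calling application sees only the final result: either a successful response (possibly from a fallback route) or a terminal error once every path is exhausted.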

SDK Usage

You do not need special failover options in the TypeScript SDK:

```typescript
import { Cencori } from 'cencori';

const cencori = new Cencori({
  apiKey: process.env.CENCORI_API_KEY,
});

const response = await cencori.ai.chat({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// After a fallback, this may differ from the model you requested.
console.log(response.model);
```

OpenAI-Compatible Usage

The same applies if you use the OpenAI-compatible endpoint:

```shell
curl https://api.cencori.com/v1/chat/completions \
  -H "Authorization: Bearer csk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
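The same call can be issued from Node without any SDK using the built-in `fetch`. Only the endpoint and headers shown in the curl example are assumed; nothing failover-specific appears in the client:

```typescript
// Build the same request body as the curl example.
function buildChatRequest(content: string): string {
  return JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content }],
  });
}

// Failover still happens gateway-side; the client just sends one request.
async function chatCompletion(apiKey: string, content: string): Promise<unknown> {
  const res = await fetch('https://api.cencori.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: buildChatRequest(content),
  });
  if (!res.ok) throw new Error(`gateway returned ${res.status}`);
  return res.json();
}
```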

What Cencori Retries

| Error type | Retry | Fallback |
| --- | --- | --- |
| 429 / rate limit | Yes | Yes |
| 5xx / upstream server error | Yes | Yes |
| Upstream timeout | Yes | Yes |
| 401 / invalid provider credentials | No | No |
| 400 / invalid request | No | No |
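The table can be read as a single classification rule: transient upstream conditions are retried and rerouted, while client-side errors fail fast. A minimal sketch of that rule, assuming a hypothetical helper rather than anything in the SDK:

```typescript
// Illustrative policy mirroring the table above; not Cencori's code.
interface FailoverPolicy {
  retry: boolean;
  fallback: boolean;
}

function classify(status: number, timedOut = false): FailoverPolicy {
  // Rate limits, upstream 5xx errors, and timeouts are transient:
  // retrying or rerouting can succeed.
  if (status === 429 || status >= 500 || timedOut) {
    return { retry: true, fallback: true };
  }
  // Bad credentials or a malformed request will fail on every path,
  // so retrying or falling back would only waste quota.
  return { retry: false, fallback: false };
}
```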

Monitoring Failover

Use the Cencori dashboard and request logs to inspect:

  • fallback events
  • provider reliability
  • latency spikes
  • circuit breaker activity

Failover is most effective when your application routes every model call through Cencori, making the gateway the single control point for resilience.