Multi-Provider Support
Use OpenAI, Anthropic, and Google Gemini through one unified API. Switch providers with a single parameter - no code changes required.
Overview
One of Cencori's most powerful features is multi-provider support. Instead of being locked into one AI provider, you can use multiple providers through a single, unified API.
This gives you:
- No vendor lock-in: Switch providers anytime
- Cost optimization: Use the cheapest model for each task
- Reliability: Fall back to another provider if one is down
- Quality comparison: A/B test different models
Supported Providers
| Provider | Models | Streaming | Status |
|---|---|---|---|
| OpenAI | GPT-4, GPT-4 Turbo, GPT-4o, GPT-3.5 | YES | Production |
| Anthropic | Claude 3 Opus, Sonnet, Haiku | YES | Production |
| Google | Gemini 2.5 Flash, 2.0 Flash | YES | Production |
| Custom | Any OpenAI/Anthropic compatible | YES | Production |
How to Switch Providers
Switching providers is as simple as changing the model parameter:
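Below is a minimal sketch in Python. It assumes an OpenAI-style REST endpoint and response shape; the URL, auth header, and field names here are illustrative placeholders, so check the Cencori API reference for the exact values:

```python
import os
import requests

# Hypothetical endpoint and payload shape for illustration only --
# consult the Cencori API reference for the real URL and field names.
CENCORI_URL = "https://api.cencori.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['CENCORI_API_KEY']}"}

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to any provider by changing only `model`."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = requests.post(CENCORI_URL, headers=HEADERS, json=payload, timeout=30)
    response.raise_for_status()
    # The response shape is the same regardless of which provider served the request.
    return response.json()["choices"][0]["message"]["content"]

print(ask("gpt-4o", "Explain vector databases in one sentence."))            # OpenAI
print(ask("claude-3-sonnet", "Explain vector databases in one sentence."))   # Anthropic
print(ask("gemini-2.5-flash", "Explain vector databases in one sentence."))  # Google
```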
Note: The response format is identical regardless of provider, making it easy to switch without changing your application code.
Model Selection Strategy
Different models excel at different tasks. Here's how to choose:
For Maximum Quality
Use gpt-4o or claude-3-opus for complex reasoning, code generation, and critical tasks.
For Speed and Cost
Use gemini-2.5-flash or gpt-3.5-turbo for simple tasks, high-volume applications, and real-time chat.
For Long Context
Use claude-3-opus (200K tokens) or gpt-4-turbo (128K tokens) for document analysis and long conversations.
For Balanced Performance
Use claude-3-sonnet for a good balance of quality, speed, and cost.
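The guidance above can be condensed into a small lookup table. The model IDs match the ones used elsewhere on this page; treat the mapping itself as a starting point to tune for your own workload:

```python
# Illustrative mapping from task profile to model ID, based on the
# guidance above; adjust to your own quality, latency, and cost needs.
MODEL_FOR_TASK = {
    "max_quality":   "gpt-4o",            # or "claude-3-opus"
    "speed_or_cost": "gemini-2.5-flash",   # or "gpt-3.5-turbo"
    "long_context":  "claude-3-opus",      # 200K tokens; "gpt-4-turbo" for 128K
    "balanced":      "claude-3-sonnet",
}

model = MODEL_FOR_TASK["balanced"]
```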
Cost Comparison
Different providers have different pricing. See the full breakdown in the Models Reference.
| Model | Provider | Cost/1M tokens | Best For |
|---|---|---|---|
| gemini-2.5-flash | Google | ~$0.50 | Speed & Cost |
| gpt-3.5-turbo | OpenAI | ~$1.50 | Speed |
| claude-3-haiku | Anthropic | ~$2.50 | Balance |
| gpt-4o | OpenAI | ~$15.00 | Quality |
| claude-3-opus | Anthropic | ~$75.00 | Max Quality |
Tip: Cencori shows you the exact cost for each request in your dashboard, making it easy to optimize spending.
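For a rough back-of-the-envelope estimate, multiply the tokens a request uses by the approximate per-token rate from the table above (the dashboard remains the source of truth for exact costs):

```python
# Approximate blended prices from the table above, in USD per 1M tokens.
PRICE_PER_M_TOKENS = {
    "gemini-2.5-flash": 0.50,
    "gpt-3.5-turbo": 1.50,
    "claude-3-haiku": 2.50,
    "gpt-4o": 15.00,
    "claude-3-opus": 75.00,
}

def estimate_cost(model: str, tokens: int) -> float:
    """Rough USD cost for a request that uses `tokens` tokens in total."""
    return tokens / 1_000_000 * PRICE_PER_M_TOKENS[model]

# A 3,000-token request on gpt-4o is roughly $0.045.
print(f"${estimate_cost('gpt-4o', 3_000):.4f}")
```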
Dynamic Provider Selection
You can programmatically choose providers based on user tier, task complexity, or cost budget:
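Here is one illustrative routing policy, reusing the hypothetical `ask()` helper from the switching example above; the tiers and rules are examples, not a built-in Cencori feature:

```python
def pick_model(user_tier: str, task: str) -> str:
    """Choose a model by user tier and task type (illustrative policy only)."""
    if task == "long_document":
        return "claude-3-opus"        # largest context window
    if user_tier == "premium":
        return "gpt-4o"               # maximum quality for paying users
    return "gemini-2.5-flash"         # cheapest, fastest default

# The routing logic lives in your application; Cencori simply honors the model ID.
answer = ask(pick_model(user_tier="free", task="chat"), "Hello!")
```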
Provider-Specific Features
While Cencori normalizes the API, some providers have unique capabilities:
OpenAI
- Function calling
- JSON mode
- Vision support (GPT-4 Vision)
Anthropic
- 200K+ token context window
- System message separation
- Strong instruction following
Google Gemini
- Extremely fast responses
- Low cost
- Multimodal support
Fallback Strategy (Coming Soon)
Cencori will soon support automatic fallback: if one provider is down or rate-limited, your request automatically routes to a backup provider.
Roadmap: Automatic fallback, model routing based on latency, and cost-optimized selection coming in Q2 2026.
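Until automatic fallback ships, a manual version is easy to sketch on top of the unified API. This reuses the hypothetical `ask()` helper and the `requests` import from the switching example above:

```python
def ask_with_fallback(prompt, models=("gpt-4o", "claude-3-sonnet", "gemini-2.5-flash")):
    """Try each model in order, moving on if a provider errors out (illustrative)."""
    last_error = None
    for model in models:
        try:
            return ask(model, prompt)
        except requests.RequestException as err:  # provider down or rate-limited
            last_error = err
    raise RuntimeError("All providers failed") from last_error
```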

