AI Models
Last updated March 3, 2026
Browse all supported AI models available through Cencori, including their providers and types.
Overview
Cencori provides unified access to 100+ state-of-the-art AI models from leading providers like OpenAI, Anthropic, Google, Mistral, xAI, DeepSeek, and more.
You can use any of these models with the same API format simply by changing the model parameter.
Model Catalog
Anthropic
- Claude 3.5 Haiku (claude-3-5-haiku-20241022): Fast and efficient
- Claude 3.5 Sonnet (claude-3-5-sonnet-20241022): Balance of speed and capability
- Claude 3.7 Sonnet (claude-3-7-sonnet): Hybrid reasoning model
- Claude Haiku 4.5 (claude-haiku-4.5): Fastest Claude model
- Claude Opus 4 (claude-opus-4): Most capable Claude 4 model
- Claude Opus 4.5 (claude-opus-4.5): Highly intelligent Opus model
- Claude Opus 4.6 (claude-opus-4.6): Latest flagship, record-setting agentic coding
- Claude Sonnet 4 (claude-sonnet-4): Balanced speed & capability
- Claude Sonnet 4.5 (claude-sonnet-4.5): Enhanced coding & agents
- Claude Sonnet 4.6 (claude-sonnet-4.6): Latest Sonnet, enhanced reasoning & coding
Cohere
- Command A (command-a-03-2025): Most performant, built for agentic tasks
- Command Light (command-light): Fast and efficient
- Command R (command-r): Balanced performance
- Command R+ (command-r-plus-08-2024): Complex RAG and multi-step tasks
DeepSeek
- DeepSeek Coder V2 (deepseek-coder-v2): 338 programming languages, GPT-4-level coding
- DeepSeek R1 (deepseek-reasoner): Reasoning model
- DeepSeek V3 (deepseek-chat): 128K context, MIT license
- DeepSeek V3.1 (deepseek-v3.1): Hybrid thinking modes
- DeepSeek V3.2 (deepseek-v3.2): GPT-5-level daily driver
- DeepSeek V3.2 Speciale (deepseek-v3.2-speciale): Maxed-out reasoning, competition-gold performance
Google
- Gemini 2.0 Flash (gemini-2.0-flash): Fast model
- Gemini 2.0 Flash Thinking (gemini-2.0-flash-thinking): Reasoning variant
- Gemini 2.5 Flash (gemini-2.5-flash): Thinking capabilities
- Gemini 2.5 Flash Lite (gemini-2.5-flash-lite): Speed optimized
- Gemini 2.5 Pro (gemini-2.5-pro): Enhanced reasoning & coding
- Gemini 3 Deep Think (gemini-3-deep-think): Deep iterative reasoning
- Gemini 3 Flash (gemini-3-flash): Frontier speed & intelligence
- Gemini 3 Pro (gemini-3-pro): Powerful Gemini model
- Gemini 3 Pro Image (gemini-3-pro-image): Fast photorealism
- Gemini 3.1 Flash Image (Nano Banana 2) (gemini-3.1-flash-image): Reasoning-guided image synthesis, up to 4K
- Gemini 3.1 Pro (Custom Tools) (gemini-3.1-pro-preview-customtools): Optimized for custom tools and bash
- Gemini 3.1 Pro Preview (gemini-3.1-pro-preview): Latest flagship preview, 1M context, enhanced reasoning
- Imagen 3 (imagen-3): High-quality images
Groq
- Llama 3.1 8B Instant (llama-3.1-8b-instant): Ultra-fast inference
- Llama 3.3 70B Versatile (llama-3.3-70b-versatile): Groq-hosted versatile Llama 3.3 model
- Llama 4 Maverick (llama-4-maverick): Latest multimodal Llama
- Llama 4 Scout (llama-4-scout): Advanced Llama 4 model
- Mixtral 8x7B (mixtral-8x7b-32768): MoE architecture
Hugging Face
- Llama 3.3 70B (meta-llama/Llama-3.3-70B-Instruct): Via HF Inference
- Llama 4 Maverick (meta-llama/Llama-4-Maverick): Via HF Inference
- Mistral Large 3 (mistralai/Mistral-Large-3): Via HF Inference
- Qwen 2.5 72B (Qwen/Qwen2.5-72B-Instruct): Via HF Inference
Meta (Llama)
- Llama 3.1 405B (llama-3.1-405b): Largest open model
- Llama 3.1 70B (llama-3.1-70b): Balanced performance
- Llama 3.2 90B Vision (llama-3.2-90b-vision): Multimodal understanding
- Llama 3.3 70B (llama-3.3-70b): Latest Llama 3 model
- Llama 4 Maverick (llama-4-maverick): Latest multimodal flagship
- Llama 4 Scout (llama-4-scout): Advanced reasoning
Mistral
- Codestral 25.01 (codestral-latest): 2.5x faster code generation
- Devstral 2 (devstral-latest): Frontier code agents
- Magistral Medium (magistral-medium): Multimodal reasoning
- Ministral 3B (ministral-3b): Compact edge model
- Ministral 8B (ministral-8b): Small, efficient model
- Mistral Large 3 (mistral-large-latest): 675B params, best open-weight multimodal
- Mistral Medium 3.1 (mistral-medium-latest): Frontier-class multimodal
- Mistral Small 3 (mistral-small-latest): 24B params, fast
OpenAI
- DALL-E 2 (dall-e-2): Fast image generation
- DALL-E 3 (dall-e-3): High-quality images
- GPT Image 1 (gpt-image-1): ChatGPT image generation model
- GPT Image 1.5 (gpt-image-1.5): Best text rendering
- GPT-4 Turbo (gpt-4-turbo): Legacy GPT-4 model
- GPT-4.1 (gpt-4.1): Long-context GPT-4.1
- GPT-4.1 Mini (gpt-4.1-mini): Balanced GPT-4.1 model
- GPT-4.1 Nano (gpt-4.1-nano): Fastest GPT-4.1 model
- GPT-4o (gpt-4o): Omni-modal model
- GPT-4o Mini (gpt-4o-mini): Fast and cost-effective
- GPT-5 (gpt-5): Flagship model
- GPT-5 Mini (gpt-5-mini): Fast and efficient
- GPT-5 Nano (gpt-5-nano): Lowest-latency GPT-5 model
- GPT-5 Pro (gpt-5-pro): High-quality GPT-5 variant
- GPT-5.1 (gpt-5.1): Improved GPT-5 generation
- GPT-5.2 (gpt-5.2): Latest flagship
- GPT-5.2 Pro (gpt-5.2-pro): Most capable GPT-5.2 variant
- o1 (o1): Legacy reasoning model
- o3 (o3): Advanced reasoning model
- o3 Mini (o3-mini): Fast reasoning model
- o3 Pro (o3-pro): Most advanced reasoning model
- o4 Mini (o4-mini): Successor to o3-mini
OpenRouter
- Claude Opus 4.5 (anthropic/claude-opus-4.5): Unified billing
- Gemini 3 Pro (google/gemini-3-pro): Meta-provider access
- GPT-5 (openai/gpt-5): Access any model
- Grok 4 (x-ai/grok-4): Access xAI models
Perplexity
- Sonar (sonar): Default web-connected model
- Sonar Large Online (llama-3.1-sonar-large-128k-online): Web-connected search
- Sonar Pro (sonar-pro): Enhanced search, richer context
- Sonar Reasoning Pro (sonar-reasoning-pro): Deep inference & research
Qwen
- Qwen 2.5 32B (qwen2.5-32b-instruct): Balanced performance
- Qwen 2.5 72B (qwen2.5-72b-instruct): Flagship model
- Qwen 2.5 Coder 32B (qwen2.5-coder-32b): Code specialized
- QwQ 32B (qwq-32b-preview): Reasoning model
Together AI
- DeepSeek V3.1 (deepseek-ai/DeepSeek-V3.1): Hybrid reasoning
- Llama 3.3 70B Turbo (meta-llama/Llama-3.3-70B-Instruct-Turbo): Fast Llama inference
- Llama 4 Maverick (meta-llama/Llama-4-Maverick): Latest Llama
- Qwen 2.5 72B (Qwen/Qwen2.5-72B-Instruct-Turbo): Alibaba flagship
xAI
- Grok 3 (grok-3): DeepSearch, Big Brain Mode
- Grok 3 Mini (grok-3-mini): Fast responses
- Grok 4 (grok-4): Enhanced reasoning, real-time search
- Grok 4 Heavy (grok-4-heavy): Maximum capability
- Grok 4.1 (grok-4.1): Improved multimodal & reasoning
- Grok 4.1 Fast (grok-4.1-fast): Best agentic tool calling
- Grok Code Fast (grok-code-fast-1): Fast agentic coding
| Model | ID | Type |
|---|---|---|
| Claude 3.5 Haiku | claude-3-5-haiku-20241022 | chat |
| Claude 3.5 Sonnet | claude-3-5-sonnet-20241022 | chat |
| Claude 3.7 Sonnet | claude-3-7-sonnet | reasoning |
| Claude Haiku 4.5 | claude-haiku-4.5 | chat |
| Claude Opus 4 | claude-opus-4 | chat |
| Claude Opus 4.5 | claude-opus-4.5 | chat |
| Claude Opus 4.6 | claude-opus-4.6 | chat |
| Claude Sonnet 4 | claude-sonnet-4 | chat |
| Claude Sonnet 4.5 | claude-sonnet-4.5 | chat |
| Claude Sonnet 4.6 | claude-sonnet-4.6 | chat |
| Command A | command-a-03-2025 | chat |
| Command Light | command-light | chat |
| Command R | command-r | chat |
| Command R+ | command-r-plus-08-2024 | chat |
| DeepSeek Coder V2 | deepseek-coder-v2 | code |
| DeepSeek R1 | deepseek-reasoner | reasoning |
| DeepSeek V3 | deepseek-chat | chat |
| DeepSeek V3.1 | deepseek-v3.1 | chat |
| DeepSeek V3.2 | deepseek-v3.2 | chat |
| DeepSeek V3.2 Speciale | deepseek-v3.2-speciale | reasoning |
| Gemini 2.0 Flash | gemini-2.0-flash | chat |
| Gemini 2.0 Flash Thinking | gemini-2.0-flash-thinking | reasoning |
| Gemini 2.5 Flash | gemini-2.5-flash | chat |
| Gemini 2.5 Flash Lite | gemini-2.5-flash-lite | chat |
| Gemini 2.5 Pro | gemini-2.5-pro | chat |
| Gemini 3 Deep Think | gemini-3-deep-think | reasoning |
| Gemini 3 Flash | gemini-3-flash | chat |
| Gemini 3 Pro | gemini-3-pro | chat |
| Gemini 3 Pro Image | gemini-3-pro-image | image |
| Gemini 3.1 Flash Image (Nano Banana 2) | gemini-3.1-flash-image | image |
| Gemini 3.1 Pro (Custom Tools) | gemini-3.1-pro-preview-customtools | chat |
| Gemini 3.1 Pro Preview | gemini-3.1-pro-preview | chat |
| Imagen 3 | imagen-3 | image |
| Llama 3.1 8B Instant | llama-3.1-8b-instant | chat |
| Llama 3.3 70B Versatile | llama-3.3-70b-versatile | chat |
| Llama 4 Maverick | llama-4-maverick | chat |
| Llama 4 Scout | llama-4-scout | chat |
| Mixtral 8x7B | mixtral-8x7b-32768 | chat |
| Llama 3.3 70B | meta-llama/Llama-3.3-70B-Instruct | chat |
| Llama 4 Maverick | meta-llama/Llama-4-Maverick | chat |
| Mistral Large 3 | mistralai/Mistral-Large-3 | chat |
| Qwen 2.5 72B | Qwen/Qwen2.5-72B-Instruct | chat |
| Llama 3.1 405B | llama-3.1-405b | chat |
| Llama 3.1 70B | llama-3.1-70b | chat |
| Llama 3.2 90B Vision | llama-3.2-90b-vision | chat |
| Llama 3.3 70B | llama-3.3-70b | chat |
| Llama 4 Maverick | llama-4-maverick | chat |
| Llama 4 Scout | llama-4-scout | chat |
| Codestral 25.01 | codestral-latest | code |
| Devstral 2 | devstral-latest | code |
| Magistral Medium | magistral-medium | reasoning |
| Ministral 3B | ministral-3b | chat |
| Ministral 8B | ministral-8b | chat |
| Mistral Large 3 | mistral-large-latest | chat |
| Mistral Medium 3.1 | mistral-medium-latest | chat |
| Mistral Small 3 | mistral-small-latest | chat |
| DALL-E 2 | dall-e-2 | image |
| DALL-E 3 | dall-e-3 | image |
| GPT Image 1 | gpt-image-1 | image |
| GPT Image 1.5 | gpt-image-1.5 | image |
| GPT-4 Turbo | gpt-4-turbo | chat |
| GPT-4.1 | gpt-4.1 | code |
| GPT-4.1 Mini | gpt-4.1-mini | chat |
| GPT-4.1 Nano | gpt-4.1-nano | chat |
| GPT-4o | gpt-4o | chat |
| GPT-4o Mini | gpt-4o-mini | chat |
| GPT-5 | gpt-5 | chat |
| GPT-5 Mini | gpt-5-mini | chat |
| GPT-5 Nano | gpt-5-nano | chat |
| GPT-5 Pro | gpt-5-pro | chat |
| GPT-5.1 | gpt-5.1 | chat |
| GPT-5.2 | gpt-5.2 | chat |
| GPT-5.2 Pro | gpt-5.2-pro | chat |
| o1 | o1 | reasoning |
| o3 | o3 | reasoning |
| o3 Mini | o3-mini | reasoning |
| o3 Pro | o3-pro | reasoning |
| o4 Mini | o4-mini | reasoning |
| Claude Opus 4.5 (via OpenRouter) | anthropic/claude-opus-4.5 | chat |
| Gemini 3 Pro (via OpenRouter) | google/gemini-3-pro | chat |
| GPT-5 (via OpenRouter) | openai/gpt-5 | chat |
| Grok 4 (via OpenRouter) | x-ai/grok-4 | chat |
| Sonar | sonar | search |
| Sonar Large Online | llama-3.1-sonar-large-128k-online | search |
| Sonar Pro | sonar-pro | search |
| Sonar Reasoning Pro | sonar-reasoning-pro | reasoning |
| Qwen 2.5 32B | qwen2.5-32b-instruct | chat |
| Qwen 2.5 72B | qwen2.5-72b-instruct | chat |
| Qwen 2.5 Coder 32B | qwen2.5-coder-32b | code |
| QwQ 32B | qwq-32b-preview | reasoning |
| DeepSeek V3.1 | deepseek-ai/DeepSeek-V3.1 | chat |
| Llama 3.3 70B Turbo | meta-llama/Llama-3.3-70B-Instruct-Turbo | chat |
| Llama 4 Maverick | meta-llama/Llama-4-Maverick | chat |
| Qwen 2.5 72B | Qwen/Qwen2.5-72B-Instruct-Turbo | chat |
| Grok 3 | grok-3 | chat |
| Grok 3 Mini | grok-3-mini | chat |
| Grok 4 | grok-4 | chat |
| Grok 4 Heavy | grok-4-heavy | chat |
| Grok 4.1 | grok-4.1 | chat |
| Grok 4.1 Fast | grok-4.1-fast | chat |
| Grok Code Fast | grok-code-fast-1 | code |
Usage
To use a model, pass its ID to any of the AI SDK methods:
```typescript
import { cencori } from '@/lib/cencori'

const result = await cencori.ai.chat({
  model: 'gpt-4o', // Use the ID from the catalog above
  messages: [
    { role: 'user', content: 'Hello!' }
  ]
})
```
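Because the request shape stays the same across providers, switching models is only a string change. One way to take advantage of this is to centralize model selection in a small helper. The sketch below is illustrative only: the `defaultModels` map, its task names, and the `modelFor` function are not part of the Cencori SDK, just one possible convention built on the catalog above.

```typescript
// Hypothetical task-to-model mapping drawn from the catalog above.
// These IDs are example choices, not SDK constants.
const defaultModels: Record<string, string> = {
  chat: 'gpt-4o',
  reasoning: 'o3',
  code: 'codestral-latest',
  image: 'dall-e-3',
  search: 'sonar',
}

// Resolve a model ID for a task, throwing on unknown tasks so typos fail fast.
function modelFor(task: string): string {
  const id = defaultModels[task]
  if (!id) {
    throw new Error(`No default model registered for task "${task}"`)
  }
  return id
}

// The resolved ID drops into the same call shown above, e.g.:
// await cencori.ai.chat({ model: modelFor('reasoning'), messages: [...] })
```

Swapping an entire application from one provider to another then becomes a one-line change in the map rather than an edit at every call site.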