# Agent Frameworks
Cencori works with any AI agent framework out of the box. Just point your base URL to us and get full observability, failover, and security—automatically.
## Why Use Cencori for Agents
AI agents make many LLM calls autonomously. Without proper infrastructure, you lose visibility into what they're doing, can't control costs, and risk outages when providers fail.
Cencori solves this by sitting between your agent and the LLM:
- Full observability: See every request, token count, and cost
- Automatic failover: If OpenAI is down, route to Anthropic
- Rate limiting: Prevent runaway agents from burning your budget
- Security scanning: Block prompt injection and unsafe outputs
- Multi-provider: Use any model from any provider
## How It Works
Cencori exposes an OpenAI-compatible API. Agent frameworks like CrewAI, AutoGen, and LangChain already speak this protocol—they just need a different base_url.
**Prerequisites:** Before using Cencori with agent frameworks, you must add your provider API keys (OpenAI, Anthropic, etc.) in your Cencori project settings. Cencori routes requests to providers using your keys—we don't have our own models.
**The Pattern:** Set base_url to https://api.cencori.com/v1 and use your Cencori API key. Cencori handles auth, logging, and security, then forwards the request to your configured provider.
## Supported Frameworks
| Framework | Language | Config Method | Status |
|---|---|---|---|
| CrewAI | Python | OPENAI_API_BASE env var | Works |
| AutoGen | Python | base_url in config | Works |
| LangChain | Python/JS | openai_api_base | Works |
| OmniCoreAgent | Python | base_url in model_config | Works |
| OpenAI SDK | Any | base_url parameter | Works |
## CrewAI
CrewAI is a popular framework for building multi-agent systems. Configure it to use Cencori by setting environment variables:
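A minimal sketch, assuming CrewAI picks up the standard OpenAI environment variables (per the table above); the CENCORI_API_KEY variable name and the agent fields in the comments are illustrative:

```python
import os

# Route CrewAI's OpenAI-compatible calls through Cencori.
# Use your Cencori API key here, not your OpenAI key.
os.environ["OPENAI_API_BASE"] = "https://api.cencori.com/v1"
os.environ["OPENAI_API_KEY"] = os.environ.get("CENCORI_API_KEY", "<your-cencori-api-key>")

# With the environment set, agents need no Cencori-specific code:
# from crewai import Agent, Crew, Task
# researcher = Agent(role="Researcher", goal="Summarize AI news", backstory="...")
# crew = Crew(agents=[researcher], tasks=[Task(description="...", agent=researcher)])
# crew.kickoff()
```

Because the override happens at the environment level, every agent in the crew inherits it without further configuration.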
## AutoGen
Microsoft's AutoGen framework supports custom endpoints through the config:
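A sketch of a config entry, assuming AutoGen's OpenAI-style config_list convention; the model name is illustrative:

```python
# An OpenAI-style config entry for AutoGen's config_list; point base_url
# at Cencori and authenticate with your Cencori key.
config_list = [
    {
        "model": "gpt-4o",  # illustrative -- use any model your provider keys allow
        "base_url": "https://api.cencori.com/v1",
        "api_key": "<your-cencori-api-key>",  # Cencori key, not a provider key
    }
]

# from autogen import AssistantAgent
# assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
```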
## LangChain
LangChain supports custom base URLs in the ChatOpenAI class:
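A sketch, assuming the langchain-openai package's ChatOpenAI constructor (recent versions accept base_url / api_key; older releases used openai_api_base / openai_api_key); the model name is illustrative:

```python
# Constructor arguments for LangChain's ChatOpenAI, collected in a dict
# so you can adapt the parameter names to your installed version.
llm_kwargs = {
    "model": "gpt-4o",  # illustrative model name
    "base_url": "https://api.cencori.com/v1",
    "api_key": "<your-cencori-api-key>",  # Cencori key, not your OpenAI key
}

# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(**llm_kwargs)
# print(llm.invoke("Hello").content)
```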
For JavaScript/TypeScript:
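A sketch for LangChain.js, assuming @langchain/openai's ChatOpenAI, whose nested `configuration` object is forwarded to the underlying OpenAI client (model name illustrative):

```javascript
// Options for LangChain.js's ChatOpenAI; the endpoint override lives
// in configuration.baseURL rather than at the top level.
const cencoriOptions = {
  model: "gpt-4o", // illustrative model name
  apiKey: process.env.CENCORI_API_KEY ?? "<your-cencori-api-key>", // Cencori key
  configuration: { baseURL: "https://api.cencori.com/v1" },
};

// import { ChatOpenAI } from "@langchain/openai";
// const llm = new ChatOpenAI(cencoriOptions);
// const res = await llm.invoke("Hello");
```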
## OpenAI SDK (Direct)
Any code using the OpenAI SDK can be pointed to Cencori:
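For example (the model name is illustrative; everything else stays exactly as it would against api.openai.com):

```python
# The OpenAI Python SDK accepts base_url and api_key at client construction.
client_args = dict(
    base_url="https://api.cencori.com/v1",
    api_key="<your-cencori-api-key>",  # Cencori key, not your OpenAI key
)

# from openai import OpenAI
# client = OpenAI(**client_args)
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": "Hello"}],
# )
```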
## What You Get
Once your agent is routing through Cencori, you automatically get:
### Full Observability
Every LLM call is logged with tokens, cost, latency, and the full request/response.
### Cost Tracking
Real-time spend tracking per agent, per task, per user.
### Automatic Failover
If OpenAI is down, requests are automatically rerouted to Anthropic or Gemini.
### Security Scanning
Prompt injection, PII leakage, and unsafe outputs are blocked.
## Troubleshooting
### 401 Unauthorized
Make sure you're using your Cencori API key, not your OpenAI key. Get your key from the dashboard.
### Model Not Found
Ensure the provider key for the model you requested is configured in your project settings.
### Streaming Issues
Cencori fully supports streaming. Make sure your framework is configured to request streaming (stream=True in Python, stream: true in JavaScript).
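For instance, with the OpenAI Python SDK the flag looks like this (a sketch; client construction follows the OpenAI SDK section above, and the model name is illustrative):

```python
# Request parameters for a streamed completion through Cencori.
request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,  # the flag your framework must set for streaming
}

# for chunk in client.chat.completions.create(**request):
#     print(chunk.choices[0].delta.content or "", end="")
```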

