# TanStack AI
Last updated April 17, 2026
Use Cencori's TanStack adapter for streaming and model routing in TanStack AI.
Cencori ships a first-party adapter for @tanstack/ai. Use it when you want TanStack's framework-agnostic AI primitives with Cencori's routing, security, and observability.
## Installation
```bash
npm install cencori @tanstack/ai
```

## Default Adapter
```typescript
import { cencori } from 'cencori/tanstack';

const adapter = cencori('gpt-4o');
```

The default adapter reads `CENCORI_API_KEY` from the environment.
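If you would rather fail at startup than on the first request, you can check for the variable up front. A minimal sketch; `requireApiKey` is a hypothetical helper, not part of the adapter, which reads the variable itself when it makes a request:

```typescript
// Hypothetical startup guard (not part of the adapter): fail fast when
// CENCORI_API_KEY is missing instead of erroring on the first request.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.CENCORI_API_KEY;
  if (!key) {
    throw new Error('CENCORI_API_KEY is not set');
  }
  return key;
}

console.log(requireApiKey({ CENCORI_API_KEY: 'sk-example' }));
// prints "sk-example"
```

At boot you would call `requireApiKey(process.env)` once, before constructing any adapters.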
## Custom Provider
```typescript
import { createCencori } from 'cencori/tanstack';

const provider = createCencori({
  apiKey: process.env.CENCORI_API_KEY!,
});

const adapter = provider('gpt-4o');
```

## Basic Streaming
```typescript
import { chat } from '@tanstack/ai';
import { cencori } from 'cencori/tanstack';

for await (const chunk of chat({
  adapter: cencori('gpt-4o'),
  messages: [{ role: 'user', content: 'Write a haiku about coding.' }],
})) {
  if (chunk.type === 'content') {
    process.stdout.write(chunk.delta);
  }
}
```

## Switching Models
Every model goes through the same adapter call, so switching providers is a one-line change:

```typescript
await chat({
  adapter: cencori('gpt-4o'),
  messages,
});

await chat({
  adapter: cencori('claude-sonnet-4.5'),
  messages,
});

await chat({
  adapter: cencori('gemini-2.5-flash'),
  messages,
});
```

## Structured Output and Tools
The adapter forwards tool definitions and structured-output requests through Cencori, so TanStack AI workflows still benefit from gateway policies and logging.
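As a rough illustration of consuming a stream that interleaves text and tool calls, here is a self-contained sketch. The chunk shapes (`content`, `tool-call`) and their field names are assumptions modeled on the streaming example above, not the adapter's actual wire protocol:

```typescript
// Stub types standing in for the real stream. The real chat() comes from
// @tanstack/ai; these shapes are assumptions for illustration only.
type ToolCall = { name: string; args: Record<string, unknown> };
type Chunk =
  | { type: 'content'; delta: string }
  | { type: 'tool-call'; call: ToolCall };

// A fake stream yielding one text chunk and one tool-call chunk.
async function* stubStream(): AsyncGenerator<Chunk> {
  yield { type: 'content', delta: 'Checking the weather... ' };
  yield { type: 'tool-call', call: { name: 'getWeather', args: { city: 'Oslo' } } };
}

// Walk the stream and record each chunk, branching on its type.
async function run(stream: AsyncIterable<Chunk>): Promise<string[]> {
  const log: string[] = [];
  for await (const chunk of stream) {
    if (chunk.type === 'content') {
      log.push(`text: ${chunk.delta}`);
    } else {
      log.push(`tool: ${chunk.call.name}(${JSON.stringify(chunk.call.args)})`);
    }
  }
  return log;
}

run(stubStream()).then((log) => console.log(log.join('\n')));
```

With the real adapter you would replace `stubStream()` with the `chat({ adapter, messages, ... })` call and keep the same branching loop.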