# Getting Started: Making Your First Request
Last updated March 3, 2026
Learn how to send your first AI request through Cencori.
## Prerequisites

- A Cencori account with an API key
- Node.js 18+ installed
- The SDK installed (`npm install cencori`)
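Hardcoding the key (as in the example below) is fine for a first test, but in real projects you will usually read it from the environment. A minimal sketch, assuming an environment variable named `CENCORI_API_KEY` (the name is our choice, not an SDK convention):

```typescript
// Sketch: load the API key from an environment variable instead of
// hardcoding it. CENCORI_API_KEY is an assumed name, not an SDK convention.
function getApiKey(): string {
  const key = process.env.CENCORI_API_KEY;
  if (!key) {
    throw new Error('CENCORI_API_KEY is not set');
  }
  return key;
}

// Then pass it to the client instead of a literal:
// const cencori = new Cencori({ apiKey: getApiKey() });
```

Failing fast when the variable is missing gives a clearer error than an authentication failure deep inside a request.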
## Basic Chat Request

```typescript
import { Cencori } from 'cencori';

const cencori = new Cencori({
  apiKey: 'csk_live_...'
});

async function main() {
  const response = await cencori.ai.chat({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is the capital of France?' }
    ]
  });

  console.log(response.content);
  // "The capital of France is Paris."

  console.log(response.usage);
  // { promptTokens: 20, completionTokens: 10, totalTokens: 30 }
}

main();
```

## Streaming Response
```typescript
const stream = cencori.ai.chatStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a joke' }]
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
```

## Using Different Providers
Switch providers by changing the model:

```typescript
// OpenAI
await cencori.ai.chat({ model: 'gpt-4o', messages: [...] });

// Anthropic
await cencori.ai.chat({ model: 'claude-opus-4', messages: [...] });

// Google
await cencori.ai.chat({ model: 'gemini-2.5-flash', messages: [...] });
```

## View in Dashboard
After making requests, view them in your dashboard:
- Navigate to the Logs tab
- See full request/response details
- Check token usage and costs
- View security scan results
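The dashboard aggregates token usage for you, but you can also tally it client-side from the `usage` object each response carries (the shape shown in the basic example above). A minimal sketch; the accumulator is our own helper, not part of the SDK:

```typescript
// Sketch: sum token usage across several responses client-side.
// The Usage shape matches the usage object from the basic chat example.
interface Usage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

function addUsage(total: Usage, next: Usage): Usage {
  return {
    promptTokens: total.promptTokens + next.promptTokens,
    completionTokens: total.completionTokens + next.completionTokens,
    totalTokens: total.totalTokens + next.totalTokens,
  };
}

// Example: fold the usage from two responses into a running total.
const runningTotal = [
  { promptTokens: 20, completionTokens: 10, totalTokens: 30 },
  { promptTokens: 15, completionTokens: 25, totalTokens: 40 },
].reduce(addUsage, { promptTokens: 0, completionTokens: 0, totalTokens: 0 });
// runningTotal.totalTokens === 70
```

A running total like this is handy for enforcing a per-session token budget before the numbers show up in the Logs tab.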
## Next Steps
- AI Gateway - Multi-provider routing
- Streaming - Real-time responses
- Tool Calling - Let AI call functions