# Moderation Endpoint

Last updated March 3, 2026

Content safety filtering and policy compliance: check content for policy violations before or after AI generation.
## Basic Request

```typescript
const response = await cencori.ai.moderation({
  input: 'Text to check for policy violations'
});

// Results are returned as an array, one entry per input.
if (response.results[0].flagged) {
  console.log('Content flagged:', response.results[0].categories);
}
```

## Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `input` | `string \| string[]` | Yes | Text to moderate |
| `model` | `string` | No | Moderation model; defaults to `'text-moderation-latest'` |
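Because `input` accepts either a single string or an array, a wrapper may want to normalize the value before sending it. A minimal sketch of a hypothetical normalizer (the helper is illustrative, not part of the SDK):

```typescript
// Illustrative helper, not part of the cencori SDK: normalize the
// `input` parameter to an array so callers can treat both shapes
// uniformly (each element then maps to one entry in `results`).
function normalizeInput(input: string | string[]): string[] {
  return Array.isArray(input) ? input : [input];
}

normalizeInput('one message'); // ['one message']
normalizeInput(['a', 'b']);    // ['a', 'b']
```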
## Response

```typescript
{
  id: 'modr-...',
  model: 'text-moderation-latest',
  results: [
    {
      flagged: false,
      categories: {
        hate: false,
        'hate/threatening': false,
        harassment: false,
        'self-harm': false,
        'self-harm/intent': false,
        'self-harm/instructions': false,
        sexual: false,
        'sexual/minors': false,
        violence: false,
        'violence/graphic': false
      },
      categoryScores: {
        hate: 0.00001,
        harassment: 0.00002,
        violence: 0.00003
        // ...
      }
    }
  ]
}
```

## Categories
| Category | Description |
|---|---|
| hate | Hateful content |
| harassment | Harassing content |
| self-harm | Self-harm content |
| sexual | Sexual content |
| violence | Violent content |
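The boolean `flagged` decision is made by the API, but the raw `categoryScores` allow stricter client-side policies. A minimal sketch of a hypothetical custom-threshold check (the helper name and threshold value are illustrative, not part of the SDK):

```typescript
// Illustrative helper, not part of the cencori SDK: return the names
// of all categories whose score exceeds a caller-chosen threshold.
function exceedsThreshold(
  categoryScores: Record<string, number>,
  threshold: number
): string[] {
  return Object.entries(categoryScores)
    .filter(([, score]) => score > threshold)
    .map(([category]) => category);
}

exceedsThreshold({ hate: 0.00001, violence: 0.42 }, 0.1);
// → ['violence']
```

A lower threshold trades more false positives for a stricter filter than the API's own `flagged` decision.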
## HTTP API

```bash
curl -X POST https://cencori.com/api/ai/moderation \
  -H "CENCORI_API_KEY: csk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Text to moderate"
  }'
```

## Use Cases
### Pre-Generation Filtering

```typescript
async function safeGenerate(userMessage: string) {
  const modResult = await cencori.ai.moderation({ input: userMessage });

  if (modResult.results[0].flagged) {
    return 'Sorry, I cannot process this request.';
  }

  return cencori.ai.chat({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: userMessage }]
  });
}
```

### Post-Generation Filtering
```typescript
const response = await cencori.ai.chat({ ... });

const modResult = await cencori.ai.moderation({
  input: response.content
});

if (modResult.results[0].flagged) {
  return 'Response filtered for policy compliance.';
}
return response.content;
```
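The two patterns above can be combined into one guarded pipeline: moderate the input, generate, then moderate the output. A minimal sketch, assuming hypothetical `Client` interfaces modeled on the calls shown above (the real SDK's types may differ):

```typescript
// Hypothetical interfaces modeled on the calls in this page;
// the actual cencori SDK types may differ.
interface ModerationResult {
  results: { flagged: boolean }[];
}
interface Client {
  moderation(req: { input: string }): Promise<ModerationResult>;
  chat(req: {
    model: string;
    messages: { role: string; content: string }[];
  }): Promise<{ content: string }>;
}

// Moderate the user message, generate a reply, then moderate the reply.
async function guardedChat(client: Client, userMessage: string): Promise<string> {
  const pre = await client.moderation({ input: userMessage });
  if (pre.results[0].flagged) {
    return 'Sorry, I cannot process this request.';
  }

  const response = await client.chat({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: userMessage }]
  });

  const post = await client.moderation({ input: response.content });
  if (post.results[0].flagged) {
    return 'Response filtered for policy compliance.';
  }
  return response.content;
}
```

Injecting the client keeps the pipeline easy to test with a stubbed moderation/chat pair before wiring in real calls.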