Embeddings Endpoint

Last updated March 3, 2026

Generate vector embeddings from text for semantic search, similarity matching, and RAG applications.

Basic Request

const response = await cencori.ai.embeddings({
  model: 'text-embedding-3-small',
  input: 'Hello world'
});
 
console.log(response.embeddings[0]); // [0.1, 0.2, ...]

Batch Request

const response = await cencori.ai.embeddings({
  model: 'text-embedding-3-small',
  input: [
    'First text to embed',
    'Second text to embed',
    'Third text to embed'
  ]
});
 
// response.embeddings is an array of vectors
console.log(response.embeddings.length); // 3
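With a batch request, each vector in `response.embeddings` corresponds to the input at the same index (this sketch assumes the API preserves input order; the `zipEmbeddings` helper is illustrative, not part of the SDK):

```typescript
// Illustrative helper: pair each input text with its embedding vector,
// assuming the API returns vectors in the same order as the inputs.
function zipEmbeddings(
  texts: string[],
  vectors: number[][]
): Array<{ text: string; vector: number[] }> {
  return texts.map((text, i) => ({ text, vector: vectors[i] }));
}
```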

Request Parameters

Parameter        Type               Required  Description
model            string             Yes       Model identifier
input            string | string[]  Yes       Text to embed
dimensions       number             No        Output dimensions
encodingFormat   string             No        'float' or 'base64'
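Because `input` is a union type, client code often normalizes it to an array before further processing. A minimal sketch (`normalizeInput` is an illustrative helper, not part of the SDK):

```typescript
// Illustrative helper: normalize the `string | string[]` input union
// into an array of texts.
function normalizeInput(input: string | string[]): string[] {
  return Array.isArray(input) ? input : [input];
}
```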

Response

{
  embeddings: [
    [0.1, 0.2, -0.3, ...],
    [0.4, -0.1, 0.2, ...]
  ],
  model: 'text-embedding-3-small',
  usage: {
    promptTokens: 15,
    totalTokens: 15
  }
}
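For semantic search and similarity matching, the returned vectors are typically compared with cosine similarity. A minimal, SDK-independent sketch:

```typescript
// Cosine similarity between two embedding vectors of equal length.
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```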

Model Comparison

Model                   Provider  Dimensions  Best For
text-embedding-3-small  OpenAI    1536        General purpose
text-embedding-3-large  OpenAI    3072        High accuracy
text-embedding-004      Google    768         Multilingual
embed-english-v3.0      Cohere    1024        English text

HTTP API

curl -X POST https://cencori.com/api/ai/embeddings \
  -H "CENCORI_API_KEY: csk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "Hello world"
  }'
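The same request can be issued from TypeScript with fetch. The URL and header name below are taken from the curl example; the response shape and error handling are assumptions, so treat this as a sketch rather than official SDK code:

```typescript
// Build the JSON body for the embeddings request (mirrors the curl example).
function buildEmbeddingsPayload(model: string, input: string | string[]): string {
  return JSON.stringify({ model, input });
}

// Sketch of the HTTP call; endpoint and header name from the curl example above.
async function fetchEmbeddings(
  apiKey: string,
  model: string,
  input: string | string[]
): Promise<unknown> {
  const res = await fetch('https://cencori.com/api/ai/embeddings', {
    method: 'POST',
    headers: {
      'CENCORI_API_KEY': apiKey,
      'Content-Type': 'application/json',
    },
    body: buildEmbeddingsPayload(model, input),
  });
  return res.json();
}
```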

Use with Memory

Embeddings are automatically generated when storing memories:

await cencori.memory.store({
  namespace: 'docs',
  content: 'Refund policy allows returns within 30 days'
  // Embedding generated automatically
});
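Long documents are commonly split into smaller chunks before storing, so each memory embeds a focused span of text. A generic preprocessing sketch (`chunkText` is illustrative and not part of the SDK; the size and overlap values are arbitrary):

```typescript
// Illustrative helper: split text into fixed-size chunks with overlap,
// a common preprocessing step before embedding for RAG.
function chunkText(text: string, chunkSize = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}
```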