AI: Real-time Request/Response Protection
Short: Real-time request/response protection and policy enforcement for AI calls.
What it Does
Cencori's AI feature sits as an inline proxy between your applications and Large Language Models (LLMs). It inspects, redacts, sanitizes, or blocks content in real time, returning a structured verdict and a trace ID for every interaction.
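To make the "structured verdict" concrete, the sketch below models what a protect exchange might look like in TypeScript. The field and function names (`ProtectRequest`, `ProtectVerdict`, `exampleVerdict`) are illustrative assumptions, not the documented API shape.

```typescript
// Hypothetical shapes for a /v1/protect exchange.
// Field names here are illustrative assumptions, not the actual schema.
type ProtectAction = "allow" | "redact" | "block";

interface ProtectRequest {
  tenantId: string;              // per-tenant policies are keyed by tenant
  direction: "request" | "response";
  content: string;               // the prompt or model output to inspect
}

interface ProtectVerdict {
  action: ProtectAction;         // what the proxy decided to do
  content: string;               // possibly redacted/sanitized content
  traceId: string;               // a trace ID is returned for every interaction
}

// Stub that mimics the verdict shape; a real deployment would evaluate
// the configured rule engine instead of this single keyword check.
function exampleVerdict(req: ProtectRequest): ProtectVerdict {
  return {
    action: req.content.includes("secret") ? "block" : "allow",
    content: req.content,
    traceId: `trace_${Date.now().toString(36)}`,
  };
}
```

A caller would branch on `action` and log `traceId` for auditing; the exact wire format may differ.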
Key Features
- Request & response interception (/v1/protect): Transparently intercepts all AI calls to apply security and compliance policies.
- Rule engine (keyword, regex, pattern, threshold): Configurable rules to detect and act on sensitive data, malicious inputs, or policy violations.
- Redaction & sanitization actions (masking, truncation, rewrite): Automatically modifies content to remove sensitive information or undesirable patterns.
- Per-tenant policies and sensitivity profiles: Tailor security and data handling policies to individual organizations or projects.
- Low-latency mode for production use: Optimized for high-performance, real-time protection in demanding production environments.
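The rule engine and redaction actions above can be sketched as a small keyword/regex matcher with masking. The rule names, patterns, and the `[REDACTED]` mask below are hypothetical examples, not Cencori's actual rule schema.

```typescript
// Minimal sketch of a keyword/regex rule engine with mask and block actions.
// Rule names and patterns are illustrative assumptions.
interface Rule {
  name: string;
  pattern: RegExp;               // keyword or regex compiled to a global regex
  action: "mask" | "block";
}

const rules: Rule[] = [
  { name: "email", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g, action: "mask" },
  { name: "api-key", pattern: /sk-[A-Za-z0-9]{8,}/g, action: "block" },
];

function applyRules(text: string): { text: string; blocked: boolean; hits: string[] } {
  let out = text;
  let blocked = false;
  const hits: string[] = [];
  for (const rule of rules) {
    // match() with a /g regex returns all matches without mutating lastIndex
    if (out.match(rule.pattern)) {
      hits.push(rule.name);
      if (rule.action === "block") blocked = true;  // block wins outright
      else out = out.replace(rule.pattern, "[REDACTED]");
    }
  }
  return { text: out, blocked, hits };
}
```

Threshold rules would add a hit-count check before acting; per-tenant sensitivity profiles would select a different `rules` array per tenant.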
Who Uses It
This feature is ideal for developer teams, AI-first startups, and Small to Medium Businesses (SMBs) that need to secure their AI applications and ensure compliance.
Primary Integration
- SDK (TypeScript): Easy integration into your application logic.
- Edge middleware: Deploy protection at the edge for minimal latency.
- Simple proxy swap: Quick integration by replacing your existing LLM proxy.
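The "proxy swap" pattern amounts to wrapping your existing model call with a protect check on the way in and the way out. The sketch below assumes hypothetical `protect` and `callModel` stand-ins; the real SDK surface may differ.

```typescript
// Hedged sketch of the proxy-swap integration: run a protect check on the
// prompt before the model call and on the output after it.
// `Check` and both callbacks are illustrative, not the actual SDK types.
type Check = (text: string) => { allowed: boolean; text: string };

async function protectedCompletion(
  prompt: string,
  callModel: (p: string) => Promise<string>,  // your existing LLM call
  protect: Check,                             // stand-in for the protect API
): Promise<string> {
  const inbound = protect(prompt);            // inspect the request
  if (!inbound.allowed) throw new Error("request blocked by policy");
  const raw = await callModel(inbound.text);  // forward the sanitized prompt
  const outbound = protect(raw);              // inspect the response
  if (!outbound.allowed) throw new Error("response blocked by policy");
  return outbound.text;                       // sanitized response
}
```

With the edge-middleware option, the same two checks run at the edge instead of in application code, keeping the round trip short.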