AI Gateway
The control plane for autonomous AI agents.
OpenAI-compatible LLM routing, real-time guardrails, immutable audit trails, and built-in NIST AI RMF, AIUC-1, and FedRAMP-ready controls. Every agent decision governed, observable, and compliant.
What it is
Every agent call routes through the AI Gateway
STRUCTURA's agents don't call LLM providers directly. They call the AI Gateway, which enforces guardrails, logs every interaction to an immutable audit store, and routes to the underlying provider (Anthropic, OpenAI, AWS Bedrock) based on your policy.
That one architectural decision is what makes autonomous agents safe to run in production. Without a gateway, you have a pile of agents calling external APIs with no governance, no audit trail, and no way to prove what happened when the auditor asks. With it, every agent action is a policy decision you can defend.
How it works
Query → Route → Inspect → Approve
Every agent request follows the same governed path. The AI Gateway receives the query, routes to the best provider, inspects the response against your guardrails, and logs the full exchange to an immutable audit trail.
Agent sends query
An agent submits a request to the AI Gateway. The gateway receives the prompt, validates the API key, resolves tenant-scoped guardrails, and begins the routing decision.
Gateway selects provider
Based on your routing policy, the gateway evaluates model aliases, cost budgets, and provider health to route the request to the optimal LLM — Claude, GPT, Llama, or Gemini.
Response inspected
The selected provider processes the request and returns a response. The gateway inspects the output against content filters, schema compliance, and hallucination detection rules.
Approved & logged
The response passes all guardrails and is delivered to the agent. Every step — request, routing decision, response, and policy check — is logged to the immutable audit trail.
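The four steps above can be sketched as a minimal pipeline. This is an illustration only, under assumptions: the function names, routing policy shape, and the single banned-content rule are invented for the sketch and are not the Gateway's actual API; the provider call is stubbed out.

```python
import json

# Hypothetical routing policy: model alias -> ordered provider preferences.
ROUTING_POLICY = {"reviewer": ["anthropic", "openai", "bedrock"]}
PROVIDER_HEALTHY = {"anthropic": True, "openai": True, "bedrock": True}

def route(alias: str) -> str:
    """Step 2: pick the first healthy provider for the model alias."""
    for provider in ROUTING_POLICY[alias]:
        if PROVIDER_HEALTHY[provider]:
            return provider
    raise RuntimeError("no healthy provider available")

def inspect(response: str, banned=("BEGIN RSA PRIVATE KEY",)) -> bool:
    """Step 3: a toy content filter standing in for real guardrails."""
    return not any(term in response for term in banned)

def handle(alias: str, prompt: str, audit: list) -> str:
    """Steps 1-4: receive, route, call provider (stubbed), inspect, log."""
    provider = route(alias)
    response = f"[{provider}] analysis of: {prompt}"  # stubbed provider call
    approved = inspect(response)
    audit.append({"prompt": prompt, "provider": provider, "approved": approved})
    if not approved:
        raise ValueError("response blocked by guardrail")
    return response

audit_log = []
out = handle("reviewer", "Analyze this Terraform plan", audit_log)
print(json.dumps(audit_log[0]))
```

Note that the audit record is written whether or not the response is approved, mirroring the claim that every policy check, not just every success, lands in the trail.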
Inbound request
"Analyze this Terraform plan for CIS benchmark violations..."
LLM providers
Claude
Anthropic
GPT
OpenAI
Llama
Meta
Gemini
Compliance built in
Designed for regulated AI from day one
Most LLM tools were built for experimentation and retrofitted for compliance. The AI Gateway was designed around the inverse: start with the audit, observability, and control requirements, then build the routing layer on top. If you need to prove your AI system is governed, you start from evidence, not justifications.
NIST AI RMF
Continuous evidence across the Govern, Map, Measure, and Manage functions. Every agent decision is traceable to a specific policy and outcome.
AIUC-1
Full control coverage across A001-E012, with pre-built evidence exports for independent assessors.
FedRAMP-ready
OSCAL-formatted control documentation and immutable audit logging designed for high-impact authorization boundaries.
EU AI Act
Risk categorization, human-oversight logging, and transparency records that map to high-risk AI system requirements.
Evidence you can export
OSCAL-formatted control documentation. Immutable audit logs with tamper-evident signatures. Pre-built evidence exports for NIST AI RMF, AIUC-1, and FedRAMP authorization packages. Your auditor's job becomes reading evidence, not chasing it.
Core capabilities
Everything a production AI control plane needs
OpenAI-compatible LLM routing
Drop-in replacement for the OpenAI API. Route across Anthropic, OpenAI, AWS Bedrock, and Vercel AI Gateway with model aliasing, fallback, and provider-failure isolation.
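Because the Gateway is a drop-in replacement for the OpenAI API, an agent only swaps the base URL; the request body is the standard chat-completions payload. A sketch under assumptions: the gateway URL, API key, and the "claude-sonnet" model alias below are placeholders, and the request is built but deliberately not sent.

```python
import json
from urllib import request

# Hypothetical gateway endpoint and tenant-scoped key -- swap in your own.
GATEWAY_URL = "https://gateway.example.internal/v1/chat/completions"
API_KEY = "tenant-scoped-key"

# Standard OpenAI chat-completions payload; the model field holds a
# gateway alias that the routing policy resolves to a real provider model.
payload = {
    "model": "claude-sonnet",
    "messages": [
        {"role": "user",
         "content": "Analyze this Terraform plan for CIS benchmark violations."}
    ],
}

req = request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
)
# request.urlopen(req) would send it; omitted, since the endpoint is fictional.
print(sorted(payload))
```

Any existing OpenAI SDK or HTTP client works the same way, which is what makes the migration a base-URL change rather than a rewrite.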
Real-time guardrail enforcement
In-flight inspection for content filtering, token budgets, rate limits, tool restrictions, hallucination detection, schema compliance, and LLM-evaluated custom policies.
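Two of those checks, token budgets and schema compliance, can be sketched in a few lines. These are toy stand-ins under assumptions: word count approximates token count, and the required keys are invented for the example.

```python
import json

def check_token_budget(prompt: str, budget: int = 64) -> bool:
    """Crude stand-in for token counting: whitespace word count."""
    return len(prompt.split()) <= budget

def check_schema(response_text: str,
                 required=frozenset({"finding", "severity"})) -> bool:
    """Schema compliance: response must be JSON carrying the required keys."""
    try:
        doc = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return required <= set(doc)

ok = check_schema('{"finding": "open security group", "severity": "high"}')
bad = check_schema('not json at all')
print(ok, bad)
```

In-flight enforcement means these checks run on the response before the agent ever sees it, so a malformed or policy-violating answer is blocked rather than merely flagged afterward.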
Immutable audit trail
Every request, response, tool call, and decision logged to NATS JetStream and PostgreSQL. Designed to survive audit scrutiny for FedRAMP, AIUC-1, and NIST AI RMF.
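One common way to make an audit trail tamper-evident is hash chaining: each entry's hash covers the previous entry, so editing any record breaks every hash after it. A minimal sketch of that idea (the entry fields and the "genesis" seed are assumptions, not the Gateway's actual log format):

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> dict:
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    body = json.dumps(event, sort_keys=True)
    entry = {"event": event, "prev": prev,
             "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Walk the chain; any edited entry invalidates its own hash."""
    prev = "genesis"
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"agent": "terraform", "decision": "approved"})
append_entry(log, {"agent": "security", "decision": "flagged"})
print(verify(log))                        # chain intact
log[0]["event"]["decision"] = "denied"    # simulate tampering
print(verify(log))                        # verification now fails
```

This is why "immutable" is a verifiable property rather than a promise: an auditor can re-derive the chain and confirm no entry was altered after the fact.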
Tenant-scoped governance
Per-tenant API keys, guardrail rules, rate limits, spend caps, and audit isolation. Multi-tenant safety at the database and query level.
Cost tracking and spend control
Per-token billing, per-team daily caps, and quota alerts. Langfuse integration for extended analytics on every agent run.
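The per-token billing and daily-cap mechanics reduce to simple arithmetic. A sketch under assumptions: the prices, cap, and team name below are invented for illustration; real rates vary by provider and model.

```python
# Hypothetical per-1K-token prices and daily cap -- illustrative values only.
PRICE_PER_1K = {"claude-sonnet": {"in": 0.003, "out": 0.015}}
DAILY_CAP_USD = {"platform-team": 25.00}

def charge(team: str, model: str, tokens_in: int, tokens_out: int,
           spent: dict) -> float:
    """Bill a request per token and enforce the team's daily spend cap."""
    rates = PRICE_PER_1K[model]
    cost = tokens_in / 1000 * rates["in"] + tokens_out / 1000 * rates["out"]
    if spent.get(team, 0.0) + cost > DAILY_CAP_USD[team]:
        raise RuntimeError(f"{team} exceeded daily spend cap")
    spent[team] = spent.get(team, 0.0) + cost
    return round(cost, 6)

spent = {}
print(charge("platform-team", "claude-sonnet", 2000, 1000, spent))  # 0.021
```

Because the cap check happens before the provider call is made, a runaway agent hits a hard stop rather than a surprise invoice.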
Model Context Protocol (MCP)
Every service exposes an MCP server. Agents can inspect rules, models, and usage as first-class context, not opaque calls.
Agent-to-agent (A2A) delegation
Agents delegate work to other agents with async task management and Redis-backed state, fully audited end-to-end.
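The delegation pattern, a parent agent hands off work, gets a task id, and polls async state, can be sketched with asyncio. A toy model under assumptions: an in-memory dict stands in for the Redis-backed store the text describes, and the sub-agent's work is a stub.

```python
import asyncio
import uuid

# In production the state store is Redis-backed; a dict stands in here.
TASKS: dict = {}

async def delegate(work: str) -> str:
    """Parent agent hands work to a sub-agent and gets a task id back."""
    task_id = str(uuid.uuid4())
    TASKS[task_id] = {"status": "pending", "result": None}
    asyncio.create_task(run_subagent(task_id, work))
    return task_id

async def run_subagent(task_id: str, work: str) -> None:
    """Sub-agent records progress in shared task state as it runs."""
    TASKS[task_id]["status"] = "running"
    await asyncio.sleep(0)          # stand-in for the sub-agent's LLM calls
    TASKS[task_id] = {"status": "done", "result": f"completed: {work}"}

async def main():
    task_id = await delegate("validate network ACLs")
    while TASKS[task_id]["status"] != "done":
        await asyncio.sleep(0.01)   # parent polls the task state
    print(TASKS[task_id]["result"])

asyncio.run(main())
```

Each state transition in the real system is an auditable event, which is what "fully audited end-to-end" means for a chain of delegated calls.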
OpenTelemetry + Langfuse observability
OTEL traces, Langfuse LLM analytics, and ClickHouse trace warehousing give you a single pane for every agent decision.
Relationship to the six agents
The nervous system of autonomous cloud operations
The AI Gateway isn't a separate product. It's the control plane that makes every agent's decisions safe, observable, and compliant. When the Terraform Agent reviews a plan, the Gateway logs the evidence. When the Security Agent flags a CIS violation, the Gateway records the policy check. When the Orchestrator sequences a multi-cloud deploy, every agent-to-agent call passes through the Gateway's A2A layer.
Meet the six agents
Terraform, Security, Network Validation, Network Digital Map, Orchestrator, and Architecture Reviewer. Each one governed by the AI Gateway.
See the agents
Explore real use cases
20+ concrete use cases the agents handle end-to-end, from Terraform drift to cross-cloud deploy orchestration.
See use cases
Built on open standards and proven infrastructure
The AI Gateway uses the Model Context Protocol, OpenTelemetry, and OSCAL, plus battle-tested open-source infrastructure for every layer of the stack.