navil vs Portkey: Security Gateway Comparison
Portkey and navil both sit in the data path of AI agent infrastructure, but they solve different problems. Portkey is an AI API gateway — routing, caching, retry logic, and observability for LLM calls. navil is a security proxy for MCP servers — policy enforcement, threat detection, and runtime governance for the tools your agents call.
This comparison covers what each does, where they overlap, and what you need when.
What Portkey Does
Portkey intercepts requests between your application and LLM providers (OpenAI, Anthropic, Google, etc.). Its feature set includes:
- Multi-provider routing — failover and load balancing across OpenAI, Anthropic, Bedrock
- Caching — deduplicate identical prompts to reduce costs
- Usage analytics — token spend dashboards per model, per project, per user
- Guardrails — input/output validation for PII, toxicity, JSON schema
- Logging — audit trail of every LLM request and response
Portkey lives between your backend and the LLM API. It does not see what happens after the LLM generates a tool call.
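The caching behavior is worth making concrete: a gateway can deduplicate identical prompts by hashing the request content and serving repeats from a store instead of calling the provider again. The sketch below illustrates the general pattern only; it is not Portkey's actual implementation, and `PromptCache` and `call_provider` are names invented for this example.

```python
import hashlib
import json

class PromptCache:
    """Toy gateway cache: deduplicate identical LLM requests by content hash."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model: str, messages: list) -> str:
        # Key on model + canonicalized messages so identical prompts collide.
        payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def complete(self, model, messages, call_provider):
        key = self._key(model, messages)
        if key in self._store:
            self.hits += 1          # repeat prompt: no provider call, no token spend
            return self._store[key]
        self.misses += 1
        response = call_provider(model, messages)  # real LLM call happens here
        self._store[key] = response
        return response

cache = PromptCache()
fake_llm = lambda model, messages: f"echo:{messages[-1]['content']}"
msgs = [{"role": "user", "content": "hello"}]
cache.complete("gpt-4o", msgs, fake_llm)
cache.complete("gpt-4o", msgs, fake_llm)  # identical request, served from cache
```

The same key scheme is why caching only helps workloads with repeated prompts: any change to the messages produces a different hash and a fresh provider call.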
What navil Does
navil lives between your AI agent and its MCP servers. It intercepts every tool invocation — file reads, code writes, database queries, API calls, git operations — and enforces policy before the tool executes.
- Policy enforcement — YAML-defined scoping. Each agent sees only the tools its policy permits
- Threat detection — 568+ patterns across 11 attack classes, including prompt injection, data exfiltration, privilege escalation, chain-of-thought leakage, and autonomous drift
- Anomaly detection — 12 statistical detectors that flag deviations from each agent's behavioral baseline
- CI/CD integration — SARIF reports, security scoring, build gates
- Fleet governance — cloud dashboard for multi-agent policy management, audit logs, webhook alerts
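To make the scoping model concrete, a per-agent policy might look like the sketch below. The field names and structure here are illustrative only, not navil's actual schema — the policy language reference is the authority on real syntax.

```yaml
# Hypothetical per-agent scoping policy (field names are illustrative)
agent: ci-reviewer
tools:
  allow:
    - read_file
    - git_diff
  deny:
    - write_file
    - shell_exec
paths:
  deny:
    - "~/.ssh/**"
    - "**/.env"
alerts:
  webhook: https://example.com/navil-alerts
```

The key idea is that scoping is declared per agent: a code-review agent gets read-only tools, while a deployment agent would carry a different, equally narrow policy.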
navil does not route LLM requests. It monitors and controls what agents do with the tools they're given access to.
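In MCP, tool invocations travel as JSON-RPC `tools/call` requests, which is what makes this interception point possible: a proxy can read the tool name and arguments before anything executes. A minimal sketch of that check, assuming a simplified allowlist and error shape rather than navil's actual policy engine:

```python
import json

ALLOWED_TOOLS = {"read_file", "list_directory"}  # illustrative per-agent scope

def enforce(raw_request: str) -> dict:
    """Inspect an MCP tools/call request; block tools outside the allowlist."""
    req = json.loads(raw_request)
    if req.get("method") != "tools/call":
        return {"action": "forward", "request": req}  # non-tool traffic passes through
    tool = req["params"]["name"]
    if tool not in ALLOWED_TOOLS:
        # A JSON-RPC error goes back to the agent; the tool never runs.
        return {"action": "block", "response": {
            "jsonrpc": "2.0", "id": req["id"],
            "error": {"code": -32000, "message": f"tool '{tool}' denied by policy"},
        }}
    return {"action": "forward", "request": req}

call = json.dumps({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
                   "params": {"name": "shell_exec",
                              "arguments": {"cmd": "rm -rf /"}}})
print(enforce(call)["action"])  # a denied tool is blocked, not forwarded
```

Because the decision is made before the request reaches the MCP server, a denied call is rejected at the proxy rather than rolled back after the fact.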
Where They Differ
| Dimension | Portkey | navil |
|---|---|---|
| Data plane | LLM API requests (prompt + completion) | MCP tool calls (invocations + arguments) |
| Primary job | Routing, caching, analytics | Security enforcement, threat detection |
| Policy model | Input/output guardrails | Per-agent tool scoping, allow/deny lists |
| Threat detection | PII, toxicity, JSON schema validation | 11 attack classes, behavioral anomaly detection |
| Protocol | OpenAI/Anthropic REST, SSE | MCP (JSON-RPC over stdio/SSE) |
| Open source | No | Apache 2.0 core |
| Pricing | Free tier, then $89+/mo | Free, then $59/seat |
When You Need Both
If you use Portkey as your LLM gateway and your agents call MCP tools, you have two exposure vectors:
- Prompt engineering attacks — injection in the prompt that causes the LLM to generate tool calls with malicious arguments. Portkey's guardrails can catch PII in prompts but not crafted injection payloads designed to trigger specific tool behaviors.
- Post-generation attacks — the LLM generates a valid-looking tool call, but the tool itself accesses a sensitive file, makes an unauthorized API call, or exfiltrates data. Portkey cannot see this — the tool runs after the LLM response is already delivered.
navil catches both vectors: injection payloads show up as anomalous tool calls, and unauthorized tool accesses are blocked by policy. Portkey and navil complement each other — Portkey protects the prompt layer, navil protects the action layer.
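Both checks can be sketched in miniature: a pattern scan over tool-call arguments covers the injection vector, and a rule on the target path covers the post-generation vector. The two patterns and the path rules below are toy stand-ins, not navil's actual detection set.

```python
import re

# Toy stand-ins for injection-detection patterns (a real set is far larger).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"curl\s+https?://\S+\s*\|\s*(ba)?sh", re.I),
]
SENSITIVE_PATHS = [re.compile(r"\.ssh/"), re.compile(r"\.env$")]

def inspect_tool_call(tool: str, arguments: dict) -> list:
    """Return findings for a single tool call; an empty list means clean."""
    findings = []
    blob = " ".join(str(v) for v in arguments.values())
    for pat in INJECTION_PATTERNS:      # vector 1: crafted payloads in arguments
        if pat.search(blob):
            findings.append(f"injection pattern: {pat.pattern}")
    path = str(arguments.get("path", ""))
    for pat in SENSITIVE_PATHS:         # vector 2: tool touching sensitive data
        if pat.search(path):
            findings.append(f"sensitive path: {path}")
    return findings

print(inspect_tool_call("read_file", {"path": "/home/dev/.ssh/id_rsa"}))
```

Note that both findings come from the tool call itself, not the prompt — which is why a prompt-layer gateway never sees them.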
When You Only Need One
Portkey alone works if your application only calls LLMs and never uses MCP tools, function calling to local servers, or agent toolchains. Simple chatbots, single-turn Q&A, and content-generation pipelines all fit this profile.
navil alone works if your agents already have an LLM routing solution (or hit providers directly) and your risk is at the tool layer. Most teams using Claude Code, Cursor AI agents, or autonomous agent frameworks fall here.
Architecture Summary
┌─────────┐      ┌───────────┐      ┌────────┐      ┌───────────┐      ┌─────────┐
│  Agent  │─────>│  Portkey  │─────>│  LLM   │─────>│   navil   │─────>│   MCP   │
│  (IDE)  │      │  Gateway  │      │  API   │      │  Security │      │  Server │
│         │      │ (prompts) │      │        │      │   Proxy   │      │ (tools) │
└─────────┘      └───────────┘      └────────┘      └───────────┘      └─────────┘
                       ↑                 ↑                ↑
                 Prompt caching      Response         Tool calls
                 & routing,          & completion     validated
                 analytics,          generated        & enforced
                 input guardrails
Both gateways solve real problems. Portkey optimizes your LLM spend. navil secures your agent actions. The MCP ecosystem is maturing fast — teams that only guard the prompt layer are missing the attack surface where the real data lives.
Want to go further?
- MCP Security Checklist — Free 15-question readiness assessment
- Features — Full policy language reference
- Quickstart — Get set up in under 5 minutes
- Pricing — Free tier included
Enforce policy on every tool call
navil wraps your MCP servers in under 60 seconds — no changes to agent code. 568 detection patterns, 2.7 µs overhead.