NjiraAI
Safety and reliability infrastructure for tool-using AI agents.
The Problem
Agents fail at the action layer. They call the wrong API, pass malformed arguments, loop indefinitely, or take destructive actions with no human in the loop. There’s no inspection point between what an agent decides and what it actually does.
The Solution
NjiraAI is a governance proxy that sits between your agent and its tools. Every tool call passes through a real-time policy engine that can ALLOW, BLOCK, or MODIFY it before execution. You get control over agent behavior without rewriting your agent.
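In code, the core contract is small: a tool call goes in, a verdict and a (possibly rewritten) call come out. A minimal sketch in Python, assuming hypothetical names (`ToolCall`, `Decision`, `evaluate`) that are illustrative, not NjiraAI's actual API:

```python
# Hypothetical sketch of the proxy's decision step; not NjiraAI's real API.
from dataclasses import dataclass
from enum import Enum
from typing import Any

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    MODIFY = "modify"

@dataclass
class ToolCall:
    tool: str              # e.g. "http_request"
    method: str            # e.g. "DELETE"
    path: str              # e.g. "/api/v1/users/all"
    args: dict[str, Any]

@dataclass
class Decision:
    verdict: Verdict
    call: ToolCall         # possibly rewritten when verdict is MODIFY
    reason: str

def evaluate(call: ToolCall) -> Decision:
    """Run one tool call through the policy layer before execution."""
    # Block destructive verbs against bulk endpoints outright.
    if call.method == "DELETE" and call.path.endswith("/all"):
        return Decision(Verdict.BLOCK, call, "bulk delete requires human approval")
    # Rewrite risky writes instead of blocking them, e.g. redirect to staging.
    if call.method in {"POST", "PUT"} and "prod" in call.args.get("host", ""):
        safer = ToolCall(call.tool, call.method, call.path,
                         {**call.args, "host": "staging.internal"})
        return Decision(Verdict.MODIFY, safer, "writes redirected to staging")
    return Decision(Verdict.ALLOW, call, "no policy matched")
```

Because the decision happens in the proxy, the agent itself never changes: it keeps emitting tool calls as before, and policy sits in the request path.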
What we're building
- Policy-gated proxy for agent tool calls: ALLOW / BLOCK / MODIFY in real time
- Audit-grade traces with full request/response capture
- Replay and simulation: test new policies against recorded sessions
- Regression testing: detect behavioral drift across model or prompt changes
- Loop detection and circuit-breaker protection (see the sketch after this list)
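Loop detection can be as simple as fingerprinting each call and tripping a breaker on repeats. A rough sketch of the idea, where `CircuitBreaker` and its thresholds are illustrative assumptions, not the shipped implementation:

```python
# Hypothetical sketch of loop detection; names and thresholds are illustrative.
import hashlib
import json
from collections import Counter

class CircuitBreaker:
    """Trip when an agent repeats the same tool call too often in one session."""

    def __init__(self, max_repeats: int = 3, max_total_calls: int = 50):
        self.max_repeats = max_repeats
        self.max_total_calls = max_total_calls
        self.seen: Counter = Counter()
        self.total = 0

    def _fingerprint(self, tool: str, args: dict) -> str:
        # Identical tool + identical arguments -> identical fingerprint.
        payload = json.dumps({"tool": tool, "args": args}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def check(self, tool: str, args: dict) -> None:
        """Raise before execution if the session looks like a runaway loop."""
        self.total += 1
        if self.total > self.max_total_calls:
            raise RuntimeError("circuit open: session exceeded total call budget")
        fp = self._fingerprint(tool, args)
        self.seen[fp] += 1
        if self.seen[fp] > self.max_repeats:
            raise RuntimeError(f"circuit open: {tool} repeated {self.seen[fp]} times")
```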
Example
A single unguarded tool call can be destructive:

```
DELETE /api/v1/users/all
{"confirm": true}
```
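Fed through the hypothetical `evaluate()` sketch above, this call never reaches the API:

```python
# Continuing the earlier sketch: the example call is blocked before execution.
call = ToolCall(tool="http_request", method="DELETE",
                path="/api/v1/users/all", args={"confirm": True})
decision = evaluate(call)
assert decision.verdict is Verdict.BLOCK
print(decision.reason)  # -> "bulk delete requires human approval"
```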
Who this is for
- Teams deploying tool-using agents in production
- Investors aligned with AI safety infrastructure
- Engineers and researchers who want to work on this problem early
Roadmap & milestones
Interested?
We're looking for 2–3 design partners to run pilots against real agent stacks. If you're deploying agents and want better control over what they do, we'd like to talk.
Become a Design Partner

Investors and prospective team members welcome. We're early, small, and building fast.