[P] AgentGuard – a policy engine + proxy to control what AI agents are allowed to do
I’ve been seeing a trend where AI agents are getting more and more autonomy: running shell commands, calling APIs, even handling sensitive operations.
But most setups I’ve seen have basically no enforcement layer. It’s just “hope the agent behaves.”
So I built a project called AictionGuard.
It sits between the agent and the tools and enforces a policy before anything executes.
Some examples:
- Block commands like `rm -rf *` before they run
- Require approval for things like `sudo` or production API calls
- Log every action with reasoning + decision (audit trail)
- Define everything in a YAML policy file
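To make the idea concrete, here’s a minimal sketch of how rules from a YAML policy file could translate into a decision function. The rule schema (`match`/`action` fields, first-match-wins, fail-closed default) is entirely hypothetical and illustrative, not AictionGuard’s actual format:

```python
import fnmatch

# Hypothetical policy, mirroring what a YAML file might contain --
# field names and actions are illustrative, not the real schema.
POLICY = {
    "rules": [
        {"match": "rm -rf *", "action": "block"},
        {"match": "sudo *",   "action": "require_approval"},
        {"match": "*",        "action": "allow"},  # default: allow (and log)
    ]
}

def decide(command: str) -> str:
    """Return the action of the first rule whose glob pattern matches."""
    for rule in POLICY["rules"]:
        if fnmatch.fnmatchcase(command, rule["match"]):
            return rule["action"]
    return "block"  # fail closed if no rule matches

print(decide("sudo apt-get update"))  # → require_approval
```

First-match-wins with a fail-closed fallback keeps the policy easy to reason about: the most specific (most dangerous) patterns go at the top, and anything unanticipated is denied rather than silently allowed.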
Right now it’s early (alpha), but:
- Core policy engine is working
- HTTP proxy is implemented
- Python + TypeScript SDKs work
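As a rough sketch of the SDK-side pattern (gating a tool call behind a policy check before it executes), here’s a hypothetical guard decorator. The names `guarded`, `PolicyViolation`, and `run_shell` are invented for illustration; the real Python/TypeScript SDK APIs may look quite different:

```python
from functools import wraps

class PolicyViolation(Exception):
    """Raised when a tool call is denied by policy (hypothetical)."""

def guarded(check):
    """Decorator: run `check(args, kwargs)` before the tool executes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if not check(args, kwargs):
                raise PolicyViolation(f"blocked: {fn.__name__}{args}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@guarded(lambda args, kwargs: not str(args[0]).startswith("rm -rf"))
def run_shell(cmd: str) -> str:
    return f"ran: {cmd}"  # stand-in for a real subprocess call

print(run_shell("ls"))  # allowed by the check above
```

The point is that enforcement happens before the side effect, and a denial surfaces as an exception the agent framework can catch, log, or escalate for human approval.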
There are still gaps (no persistent DB, some features not wired up yet), but the foundation is in place and I'm actively closing them (I wrote the README before building the project itself).
I’d really appreciate:
- Feedback on the architecture
- Ideas for policy rules
- Contributors interested in AI safety / infra
Repo:
https://github.com/Caua-ferraz/AictionGuard
Curious: if you’re building or using agents, what’s the #1 thing you’d want to restrict or monitor?