Give agents actions. Keep keys out.
A local runtime and SDK that lets AI agents request named tools safely instead of calling APIs directly.
Tools live in YAML. Policies live in YAML. Execution happens locally. The runtime enforces policy before the call. Secrets are used internally and never reach the agent.
A request becomes execution only after the runtime resolves the tool and applies policy.
The runtime decides before any local execution or outbound API call happens.
Approved requests execute locally, use internal secrets, call the API, and create an audit record.
Denied requests stop at policy. No local execution. No outbound API call. No secret exposure.
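The allow/deny split above can be sketched as a tiny policy gate. This is illustrative only: `PolicyRule`, `evaluate`, and the field names are assumptions for the sketch, not the KeyRunner API.

```typescript
// Illustrative policy gate, not the KeyRunner internals.
// A request names a tool; the gate decides before anything executes.

interface ToolRequest {
  tool: string;
  input: { amount_cents?: number };
  reason?: string;
}

interface PolicyRule {
  tool: string;
  max_amount_cents?: number;
  require_reason?: boolean;
}

type Decision = { allowed: true } | { allowed: false; why: string };

function evaluate(rules: PolicyRule[], req: ToolRequest): Decision {
  const rule = rules.find((r) => r.tool === req.tool);
  if (!rule) return { allowed: false, why: `no rule for ${req.tool}` };
  if (
    rule.max_amount_cents !== undefined &&
    (req.input.amount_cents ?? 0) > rule.max_amount_cents
  ) {
    return { allowed: false, why: "amount over limit" };
  }
  if (rule.require_reason && !req.reason) {
    return { allowed: false, why: "reason required" };
  }
  return { allowed: true };
}

// Mirrors the support_refunds policy: cap at 5000 cents, reason required.
const rules: PolicyRule[] = [
  { tool: "refund_payment", max_amount_cents: 5000, require_reason: true },
];

const ok = evaluate(rules, {
  tool: "refund_payment",
  input: { amount_cents: 2500 },
  reason: "duplicate charge",
});
const tooBig = evaluate(rules, {
  tool: "refund_payment",
  input: { amount_cents: 9000 },
  reason: "goodwill",
});
```

The point of the sketch is ordering: the decision is computed from the request and the policy alone, before any execution path is reached.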
Stop giving the model raw reach.
The key change is architectural, not prompt-level. Move the model away from direct API access and put a runtime boundary in the middle.
Model holds the key
The agent can call the API directly, carry secrets into prompts, and blur the line between reasoning and execution.
Model asks for an outcome
The agent requests a named tool, policy decides first, and the runtime executes locally only when the action is allowed.
API keys turn the model into the control plane.
That is the wrong boundary. If the model holds the key, the risky part has already happened.
Agents are still handed raw API keys or allowed to fetch secrets on demand.
That makes the model a network client with broad reach and weak boundaries.
Once the key is inside the agent loop, prompts are not a real control surface.
From API access to action control.
The agent requests a tool. The runtime decides whether that action is allowed.
A local runtime that sits between the agent and the API.
KeyRunner is the enforcement layer. The model never touches the secret. The runtime uses it only when policy allows.
KeyRunner Runtime sits locally between the agent and the external system.
Tools live in YAML. Policies live in YAML. Secrets stay inside the runtime.
Policy is checked first, execution happens locally, the API call runs internally, and the action is audited.
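That ordering can be sketched as a pipeline: check, execute with an internal secret, audit. Everything below is an assumption for illustration, not the runtime's real internals; the key never appears in the return value.

```typescript
// Illustrative execution order, not the KeyRunner internals:
// 1. policy check, 2. local execution using an internal secret,
// 3. audit record. The secret stays inside this function.

type Result = { ok: boolean; output?: string; denied?: string };

const secrets: Record<string, string> = { STRIPE_API_KEY: "sk_test_xxx" }; // placeholder value
const audit: string[] = [];

function execute(tool: string, allowed: boolean): Result {
  if (!allowed) {
    audit.push(`denied ${tool}`); // denied requests stop here: no call, no secret use
    return { ok: false, denied: "policy" };
  }
  const key = secrets["STRIPE_API_KEY"]; // resolved internally, never returned
  const output = `called API for ${tool} (key length ${key.length})`;
  audit.push(`allowed ${tool}`);
  return { ok: true, output }; // no secret in the result the agent sees
}

const res = execute("refund_payment", true);
```

The agent only ever sees `Result`; the secret lookup and the outbound call stay on the runtime's side of the boundary.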
Small config. Clear control.
Define tools once, define policy once, then let the runtime enforce the boundary.
```yaml
tool: refund_payment
description: Refund a Stripe payment
input:
  payment_id: string
  amount_cents: number
run:
  type: http
  method: POST
  url: https://api.stripe.com/v1/refunds
  auth:
    secret: STRIPE_API_KEY
  body:
    payment_intent: ${payment_id}
    amount: ${amount_cents}
```

```yaml
policy: support_refunds
allow:
  - tool: refund_payment
    when:
      max_amount_cents: 5000
      require_reason: true
```

```typescript
import { KeyRunner } from "@keyrunner/runtime";

const runtime = new KeyRunner({
  tools: "./tools",
  policies: "./policies",
});

const result = await runtime.execute({
  agent: "support-agent",
  tool: "refund_payment",
  input: {
    payment_id: "pi_123",
    amount_cents: 2500,
  },
});
```

```shell
$ keyrunner run \
  --agent support-agent \
  --tool refund_payment \
  --input '{"payment_id":"pi_123","amount_cents":2500}'
```

A narrower, safer execution surface.
The runtime reduces what the agent can touch and makes every allowed action easier to reason about.
No secrets in prompts
No secrets in tool output
Policy check before execution
Local runtime boundary
Audit trail by default
Simple YAML tool definitions
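"Audit trail by default" implies every decision leaves a record. A minimal sketch of what an entry might capture, with field names that are assumptions rather than KeyRunner's actual schema:

```typescript
// Hypothetical audit entry shape; field names are assumptions,
// not KeyRunner's actual audit schema.
interface AuditEntry {
  at: string;                       // ISO timestamp
  agent: string;                    // which agent asked
  tool: string;                     // which named tool
  decision: "allowed" | "denied";
  reason?: string;                  // why denied, if denied
}

function record(log: AuditEntry[], entry: Omit<AuditEntry, "at">): AuditEntry {
  const full: AuditEntry = { at: new Date().toISOString(), ...entry };
  log.push(full);
  return full;
}

const log: AuditEntry[] = [];
const entry = record(log, {
  agent: "support-agent",
  tool: "refund_payment",
  decision: "allowed",
});
```

Note what is absent: no secret, no raw credential, only who asked, what was asked for, and what policy decided.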
Not Vault. Not Okta.
Those solve adjacent identity and secret-management problems. KeyRunner solves agent action control at runtime.
Vault stores and brokers secrets. KeyRunner controls what the agent is allowed to do with them.
Okta handles identity and access. KeyRunner is the runtime guardrail for agent actions.
Real work, real limits.
Useful anywhere agents need to act without receiving raw credentials.
Refund a payment without exposing Stripe keys
Restart a service without exposing cloud credentials
Create CRM records through approved actions only
Trigger incident response workflows with full audit
Run internal ops tasks without raw SaaS access
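The service-restart case could follow the same YAML shape as the refund tool above. The URL, secret name, and input field here are placeholders, not real endpoints or KeyRunner-provided names:

```yaml
# Hypothetical tool in the same shape as refund_payment.
# The URL and secret name are placeholders for illustration.
tool: restart_service
description: Restart an internal service
input:
  service_name: string
run:
  type: http
  method: POST
  url: https://ops.example.com/v1/services/${service_name}/restart
  auth:
    secret: OPS_API_TOKEN
```

The agent names the service; the runtime holds `OPS_API_TOKEN` and makes the call only if policy allows.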
KeyRunner is a local action gateway for AI agents.
Policy first. Execution second. Secrets hidden.
