Deterministic control plane

Simon governs every episode, so AI agents stay factual, budgeted, and verifiable.

Local-first infrastructure that enforces goals, safe command usage, and evidence collection. Drift, fuzziness, and hallucinations are handled by the runtime, not by hopeful prompting.


Journey spotlight

The full experience, not just logs

Plan, guard, verify, archive, and share: the recording mirrors the entire journey, so the site communicates how Simon governs AI sessions.

Governance & Security

Simon treats LLMs as workers and applies deterministic rules: budgets halt execution, commands are scoped, and evidence gates prevent hallucinated deliveries.

Architecture

Built in Go with Cobra + Bubble Tea, storage on SQLite + the local filesystem, and a plugin-ready guard/coach layer.

Deployment

The README outlines GitHub Pages for the marketing site and, optionally, a Homebrew tap for distribution.

Recordings

Every run is captured end-to-end, from the first plan to the final evidence check, so reviewers can replay the exact execution trail.

Governance stack

The README highlights Coach, Guard, and Runtime as the fundamental pillars, with MCP proxy and Memory providing context and safety.

Coach

Defines a goal, definition of done, and evidence list before the agent starts making API calls.

Guard

Applies hard budgets, command scoping, and verification gates so every tool call stays within policy.
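The shape of a hard budget can be pictured as a bounded loop. This is a toy sketch, not Simon's actual guard code: once the iteration budget is spent, execution halts regardless of what the agent reports.

```shell
# Toy sketch of a hard budget (illustrative, not Simon's implementation):
# the loop stops after MAX_ITERATIONS no matter what the agent "feels".
MAX_ITERATIONS=3
i=0
while [ "$i" -lt "$MAX_ITERATIONS" ]; do
  i=$((i + 1))
  echo "iteration $i: tool call checked against policy"
done
echo "budget exhausted after $i iterations; execution halted"
```

The point of the sketch is that the stop condition lives outside the agent: no prompt phrasing can extend the loop.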

Runtime

Episodes advance with rolling summaries, so context never collapses even during long interactions.

MCP Proxy

Digests tool outputs internally to reduce noise and protect secrets before the agent sees them.
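As a rough picture of what such digesting can look like, here is a one-line redaction pass. The `sk-` key pattern and the `sed` approach are illustrative assumptions, not Simon's actual MCP proxy logic:

```shell
# Illustrative redaction pass (not Simon's proxy code): mask OpenAI-style
# secret keys in tool output before the agent ever sees the text.
echo 'token=sk-abc123XYZsecret status=ok' \
  | sed -E 's/sk-[A-Za-z0-9]+/[REDACTED]/g'
# → token=[REDACTED] status=ok
```

A real proxy would apply many such patterns and also summarize oversized outputs, but the principle is the same: filter first, expose second.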

Memory

Completed sessions are vectorized and archived so future runs can recall relevant experience.

CLI first

A local-first Go binary with a Bubble Tea TUI keeps you in control on every host.

Session lifecycle

Every run progresses through planning, iteration, verification, and archiving. There are no loose ends once the definition of done is satisfied and the evidence list is confirmed.

Plan the mission

Write the goal, definition of done, and evidence list in the YAML task file.
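A task file following that description might look like this. The field names (`goal`, `definition_of_done`, `evidence`, `budgets`) are illustrative assumptions based on the concepts above, not Simon's documented schema:

```shell
# Write an illustrative task file. All field names are assumptions,
# not Simon's documented YAML schema.
cat > task.yaml <<'EOF'
goal: "Add input validation to the signup endpoint"
definition_of_done:
  - "go test ./... passes"
  - "Malformed emails are rejected with HTTP 400"
evidence:
  - "test output transcript"
  - "curl session showing the 400 response"
budgets:
  max_iterations: 10
  max_tokens: 50000
EOF
```

With the file in place, the run command from the getting-started section (`simon run task.yaml --provider openai -i`) executes it under guard.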

Run under guard

The CLI enforces budgets, intercepts tool calls, and streams status updates at each iteration.

Verify completion

Every piece of evidence is checked before the task is marked done and memory is archived.

Record the impact

Asciinema captures the terminal with chapters so the run can be replayed and reviewed with full context.
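A cast file is just NDJSON: a header line plus timestamped output events (the asciicast v2 format). The hand-built file below shows the shape; the Simon output lines in it are invented for illustration, and real recordings come from `asciinema rec`:

```shell
# Hand-built minimal asciicast v2 file, to show what a "cast" is:
# one JSON header line, then [time, "o", data] output events.
# The simon output strings are illustrative, not real tool output.
cat > demo.cast <<'EOF'
{"version": 2, "width": 80, "height": 24}
[0.5, "o", "simon: planning episode 1\r\n"]
[1.2, "o", "simon: guard approved tool call\r\n"]
[2.0, "o", "simon: evidence check passed\r\n"]
EOF

# Count the recorded output events (every line after the header).
tail -n +2 demo.cast | wc -l   # prints 3
```

A file like this replays with `asciinema play demo.cast`, and `asciinema play -s 1.1 demo.cast` adjusts playback speed, which is how a recording can be reviewed slightly faster than real time.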

Best-in-class recordings

Recordings that prove the work

Simon doesn't just log output; it captures the journey in a replayable narrative. Reviewers can understand intent, guardrails, and evidence without hunting through terminal history.

Full-fidelity capture

The CLI session is recorded as it runs, preserving every command, guard decision, and runtime status update.

Episode-aware playback

Recordings are anchored to the same episode cadence as the runtime, so reviewers can jump to meaningful checkpoints fast.

Shareable by default

Asciinema cast files play anywhere and keep stakeholders aligned without shipping raw logs or screen recordings.

Recording performance

Designed to keep reviewers in flow while preserving the full execution trail.

Typical runtime

20-60s

Fast iterations with real AI providers, depending on task complexity.

Playback speed

1.1x

Optimized for clarity while preserving real execution timing.

Stakeholder handoff

Cast + notes

Share the recording alongside the outcome.

Journey timeline

Each stage of the session is visible on the marketing site so visitors understand how Simon shepherds work from start to finish.

Plan

Define the goal, definition of done, and evidence list in the YAML task so the session starts with clarity.

Execute

Simon runs the iterations under guard, enforcing budgets and summarizing progress every episode.

Verify

Evidence is collected and checked; only then does the runtime mark the task complete.

Archive

Session memory is vectorized so future runs can learn from the same journey.

Share

The cast recording pairs with the evidence summary so stakeholders replay the full journey, not just logs.

How to get started

Install Simon via Homebrew or build from source, configure your preferred provider, then run a task with the CLI.

# Install via Homebrew (recommended)
brew install felixgeelhaar/tap/simon

# Or build from source
git clone https://github.com/felixgeelhaar/simon.git
cd simon && go build -o simon cmd/simon/main.go

# Configure and run
simon config set openai.api_key your-api-key
simon run task.yaml --provider openai -i

The -i flag starts the interactive TUI for real-time execution visibility.

Session playback

The recording runs against real AI providers (OpenAI, Anthropic, Gemini, or Ollama), demonstrating actual latency, tool calls, and the full guard and verification workflow in production conditions.

Recording checklist

  • Goal + definition of done snapshot
  • Guard decisions and budget checkpoints
  • Evidence validation captured in-line

Built for power developers

Control the execution, not just the prompt

Simon is local-first, configurable, and deterministic. The documentation and recording show exactly what the CLI outputs: no vague marketing fluff.