# Trace: Convert and Enhance Video Skill Extension

Agent: pi | Model: GPT-5.3 Codex | Project: 2026-03-28-hermes-agent

---

## User

@skills/creative/manim-video/ let's load this skill into pi as an extension in ~/.pi/

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "ls -la"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/docs/extensions.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/README.md"
}
```

## Assistant

**Result** (success):
```
total 5656
drwxr-xr-x@ 21 mukund  staff      672 Mar 28 22:08 __pycache__
drwxr-xr-x@ 81 mukund  staff     2592 Apr 11 02:34 .
drwxr-xr-x@ 27 mukund  staff      864 Apr 11 00:40 ..
-rw-r--r--@  1 mukund  staff      112 Apr 11 02:34 .dockerignore
-rw-r--r--@  1 mukund  staff    12292 Apr  6 01:49 .DS_Store
-rw-r--r--@  1 mukund  staff    17082 Apr 11 02:34 .env.example
-rw-r--r--@  1 mukund  staff       10 Mar 28 22:03 .envrc
drwxr-xr-x@ 15 mukund  staff      480 Apr 11 02:34 .git
drwxr-xr-x@  6 mukund  staff      192 Apr  6 01:41 .github
-rw-r--r--@  1 mukund  staff      952 Mar 28 22:03 .gitignore
-rw-r--r--@  1 mukund  staff      106 Mar 28 22:03 .gitmodules
drwxr-xr-x@  4 mukund  staff      128 Mar 28 22:03 .plans
drwxr-xr-x@  6 mukund  staff      192 Mar 28 22:04 .pytest_cache
drwxr-xr-x@ 10 mukund  staff      320 Apr  6 01:41 .venv
drwxr-xr-x@ 12 mukund  staff      384 Apr 11 02:34 acp_adapter
drwxr-xr-x@  4 mukund  staff      128 Mar 28 22:03 acp_registry
drwxr-xr-x@ 31 mukund  staff      992 Apr 11 02:34 agent
-rw-r--r--@  1 mukund  staff    20799 Apr 11 02:34 AGENTS.md
drwxr-xr-x@  3 mukund  staff       96 Mar 28 22:03 assets
-rw-r--r--@  1 mukund  staff    55322 Apr 11 02:34 batch_runner.py
-rw-r--r--@  1 mukund  staff    44879 Apr 11 02:34 cli-config.yaml.example
-rw-r--r--@  1 mukund  staff   414240 Apr 11 02:34 cli.py
-rw-r--r--@  1 mukund  staff      407 Apr 11 02:34 constraints-termux.txt
-rw-r--r--@  1 mukund  staff    26908 Mar 28 22:03 CONTRIBUTING.md
drwxr-xr-x@  6 mukund  staff      192 Apr 11 02:34 cron
drwxr-xr-x@  6 mukund  staff      192 Mar 28 22:03 datagen-config-examples
drwxr-xr-x@  4 mukund  staff      128 Apr 11 02:34 docker
-rw-r--r--@  1 mukund  staff     1035 Apr 11 02:34 Dockerfile
drwxr-xr-x@  9 mukund  staff      288 Apr 11 02:34 docs
drwxr-xr-x@ 16 mukund  staff      512 Apr 11 02:34 environments
-rw-r--r--@  1 mukund  staff     4638 Apr 11 02:34 flake.lock
-rw-r--r--@  1 mukund  staff      924 Apr 11 02:34 flake.nix
drwxr-xr-x@ 20 mukund  staff      640 Apr 11 02:34 gateway
-rwxr-xr-x@  1 mukund  staff      262 Apr 11 02:34 hermes
drwxr-xr-x@  8 mukund  staff      256 Mar 28 22:06 hermes_agent.egg-info
drwxr-xr-x@ 44 mukund  staff     1408 Apr 11 02:34 hermes_cli
-rw-r--r--@  1 mukund  staff     6733 Apr 11 02:34 hermes_constants.py
-rw-r--r--@  1 mukund  staff     8598 Apr 11 02:34 hermes_logging.py
-rw-r--r--@  1 mukund  staff    49536 Apr 11 02:34 hermes_state.py
-rw-r--r--@  1 mukund  staff     3259 Apr 11 02:34 hermes_time.py
drwxr-xr-x@  3 mukund  staff       96 Apr 11 02:34 honcho_integration
drwxr-xr-x@ 13 mukund  staff      416 Mar 28 22:03 landingpage
-rw-r--r--@  1 mukund  staff     1070 Mar 28 22:03 LICENSE
drwxr-xr-x@  2 mukund  staff       64 Mar 28 22:03 logs
-rw-r--r--@  1 mukund  staff       87 Apr 11 02:34 MANIFEST.in
-rw-r--r--@  1 mukund  staff    30687 Apr 11 02:34 mcp_serve.py
-rw-r--r--@  1 mukund  staff    27406 Mar 28 22:03 mini_swe_runner.py
-rw-r--r--@  1 mukund  staff    22968 Apr 11 02:34 model_tools.py
drwxr-xr-x@  8 mukund  staff      256 Apr 11 02:34 nix
drwxr-xr-x@ 17 mukund  staff      544 Apr 11 02:34 optional-skills
-rw-r--r--@  1 mukund  staff   113695 Mar 28 22:03 package-lock.json
-rw-r--r--@  1 mukund  staff      739 Apr 11 02:34 package.json
drwxr-xr-x@  3 mukund  staff       96 Apr 11 02:34 packaging
drwxr-xr-x@  3 mukund  staff       96 Mar 28 22:03 plans
drwxr-xr-x@  5 mukund  staff      160 Apr 11 02:34 plugins
-rw-r--r--@  1 mukund  staff     4543 Apr 11 02:34 pyproject.toml
-rw-r--r--@  1 mukund  staff    10823 Apr 11 02:34 README.md
-rw-r--r--@  1 mukund  staff    32385 Mar 28 22:03 RELEASE_v0.2.0.md
-rw-r--r--@  1 mukund  staff    35724 Mar 28 22:03 RELEASE_v0.3.0.md
-rw-r--r--@  1 mukund  staff    45448 Mar 28 22:03 RELEASE_v0.4.0.md
-rw-r--r--@  1 mukund  staff    32765 Mar 28 22:03 RELEASE_v0.5.0.md
-rw-r--r--@  1 mukund  staff    26137 Apr 11 02:34 RELEASE_v0.6.0.md
-rw-r--r--@  1 mukund  staff    33995 Apr 11 02:34 RELEASE_v0.7.0.md
-rw-r--r--@  1 mukund  staff    36083 Apr 11 02:34 RELEASE_v0.8.0.md
-rw-r--r--@  1 mukund  staff      633 Apr 11 02:34 requirements.txt
-rw-r--r--@  1 mukund  staff    16286 Mar 28 22:03 rl_cli.py
-rw-r--r--@  1 mukund  staff   517852 Apr 11 02:34 run_agent.py
drwxr-xr-x@ 11 mukund  staff      352 Apr 11 02:34 scripts
-rwxr-xr-x@  1 mukund  staff    14753 Apr 11 02:34 setup-hermes.sh
drwxr-xr-x@ 29 mukund  staff      928 Apr 11 02:34 skills
drwxr-xr-x@  2 mukund  staff       64 Mar 28 22:03 temp_vision_images
drwxr-xr-x@ 49 mukund  staff     1568 Apr 11 02:34 tests
drwxr-xr-x@  2 mukund  staff       64 Mar 28 22:03 tinker-atropos
drwxr-xr-x@ 60 mukund  staff     1920 Apr 11 02:34 tools
-rw-r--r--@  1 mukund  staff    12332 Mar 28 22:03 toolset_distributions.py
-rw-r--r--@  1 mukund  staff    21038 Apr 11 02:34 toolsets.py
-rw-r--r--@  1 mukund  staff    62939 Apr 11 02:34 trajectory_compressor.py
-rw-r--r--@  1 mukund  staff     3866 Apr 11 02:34 utils.py
-rw-r--r--@  1 mukund  staff  1007837 Apr 11 02:34 uv.lock
drwxr-xr-x@ 10 mukund  staff      320 Apr  6 01:41 venv
drwxr-xr-x@ 14 mukund  staff      448 Apr 11 02:34 website
```

## Assistant

**Result** (success):
```
> pi can create extensions. Ask it to build one for your use case.

# Extensions

Extensions are TypeScript modules that extend pi's behavior. They can subscribe to lifecycle events, register custom tools callable by the LLM, add commands, and more.

> **Placement for /reload:** Put extensions in `~/.pi/agent/extensions/` (global) or `.pi/extensions/` (project-local) for auto-discovery. Use `pi -e ./path.ts` only for quick tests. Extensions in auto-discovered locations can be hot-reloaded with `/reload`.

**Key capabilities:**
- **Custom tools** - Register tools the LLM can call via `pi.registerTool()`
- **Event interception** - Block or modify tool calls, inject context, customize compaction
- **User interaction** - Prompt users via `ctx.ui` (select, confirm, input, notify)
- **Custom UI components** - Full TUI components with keyboard input via `ctx.ui.custom()` for complex interactions
- **Custom commands** - Register commands like `/mycommand` via `pi.registerCommand()`
- **Session persistence** - Store state that survives restarts via `pi.appendEntry()`
- **Custom rendering** - Control how tool calls/results and messages appear in TUI

**Example use cases:**
- Permission gates (confirm before `rm -rf`, `sudo`, etc.)
- Git checkpointing (stash at each turn, restore on branch)
- Path protection (block writes to `.env`, `node_modules/`)
- Custom compaction (summarize conversation your way)
- Conversation summaries (see `summarize.ts` example)
- Interactive tools (questions, wizards, custom dialogs)
- Stateful tools (todo lists, connection pools)
- External integrations (file watchers, webhooks, CI triggers)
- Games while you wait (see `snake.ts` example)

See [examples/extensions/](../examples/extensions/) for working implementations.

## Table of Contents

- [Quick Start](#quick-start)
- [Extension Locations](#extension-locations)
- [Available Imports](#available-imports)
- [Writing an Extension](#writing-an-extension)
  - [Extension Styles](#extension-styles)
- [Events](#events)
  - [Lifecycle Overview](#lifecycle-overview)
  - [Resource Events](#resource-events)
  - [Session Events](#session-events)
  - [Agent Events](#agent-events)
  - [Tool Events](#tool-events)
- [ExtensionContext](#extensioncontext)
- [ExtensionCommandContext](#extensioncommandcontext)
- [ExtensionAPI Methods](#extensionapi-methods)
- [State Management](#state-management)
- [Custom Tools](#custom-tools)
- [Custom UI](#custom-ui)
- [Error Handling](#error-handling)
- [Mode Behavior](#mode-behavior)
- [Examples Reference](#examples-reference)

## Quick Start

Create `~/.pi/agent/extensions/my-extension.ts`:

```typescript
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Type } from "@sinclair/typebox";

export default function (pi: ExtensionAPI) {
  // React to events
  pi.on("session_start", async (_event, ctx) => {
    ctx.ui.notify("Extension loaded!", "info");
  });

  pi.on("tool_call", async (event, ctx) => {
    if (event.toolName === "bash" && event.input.command?.includes("rm -rf")) {
      const ok = await ctx.ui.confirm("Dangerous!", "Allow rm -rf?");
      if (!ok) return { block: true, reason: "Blocked by user" };
    }
  });

  // Register a custom tool
  pi.registerTool({
    name: "greet",
    label: "Greet",
    description: "Greet someone by name",
    parameters: Type.Object({
      name: Type.String({ description: "Name to greet" }),
    }),
    async execute(toolCallId, params, signal, onUpdate, ctx) {
      return {
        content: [{ type: "text", text: `Hello, ${params.name}!` }],
        details: {},
      };
    },
  });

  // Register a command
  pi.registerCommand("hello", {
    description: "Say hello",
    handler: async (args, ctx) => {
      ctx.ui.notify(`Hello ${args || "world"}!`, "info");
    },
  });
}
```

Test with the `--extension` (or `-e`) flag:

```bash
pi -e ./my-extension.ts
```

## Extension Locations

> **Security:** Extensions run with your full system permissions and can execute arbitrary code. Only install from sources you trust.

Extensions are auto-discovered from:

| Location | Scope |
|----------|-------|
| `~/.pi/agent/extensions/*.ts` | Global (all projects) |
| `~/.pi/agent/extensions/*/index.ts` | Global (subdirectory) |
| `.pi/extensions/*.ts` | Project-local |
| `.pi/extensions/*/index.ts` | Project-local (subdirectory) |

Additional paths via `settings.json`:

```json
{
  "packages": [
    "npm:@foo/bar@1.0.0",
    "git:github.com/user/repo@v1"
  ],
  "extensions": [
    "/path/to/local/extension.ts",
    "/path/to/local/extension/dir"
  ]
}
```

To share extensions via npm or git as pi packages, see [packages.md](packages.md).

## Available Imports

| Package | Purpose |
|---------|---------|
| `@mariozechner/pi-coding-agent` | Extension types (`ExtensionAPI`, `ExtensionContext`, events) |
| `@sinclair/typebox` | Schema definitions for tool parameters |
| `@mariozechner/pi-ai` | AI utilities (`StringEnum` for Google-compatible enums) |
| `@mariozechner/pi-tui` | TUI components for custom rendering |

npm dependencies work too. Add a `package.json` next to your extension (or in a parent directory), run `npm install`, and imports from `node_modules/` are resolved automatically.

Node.js built-ins (`node:fs`, `node:path`, etc.) are also available.

## Writing an Extension

An extension exports a default function that receives `ExtensionAPI`:

```typescript
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";

export default function (pi: ExtensionAPI) {
  // Subscribe to events
  pi.on("event_name", async (event, ctx) => {
    // ctx.ui for user interaction
    const ok = await ctx.ui.confirm("Title", "Are you sure?");
    ctx.ui.notify("Done!", "success");
    ctx.ui.setStatus("my-ext", "Processing...");  // Footer status
    ctx.ui.setWidget("my-ext", ["Line 1", "Line 2"]);  // Widget above editor (default)
  });

  // Register tools, commands, shortcuts, flags
  pi.registerTool({ ... });
  pi.registerCommand("name", { ... });
  pi.registerShortcut("ctrl+x", { ... });
  pi.registerFlag("my-flag", { ... });
}
```

Extensions are loaded via [jiti](https://github.com/unjs/jiti), so TypeScript works without compilation.

### Extension Styles

**Single file** - simplest, for small extensions:

```
~/.pi/agent/extensions/
└── my-extension.ts
```

**Directory with index.ts** - for multi-file extensions:

```
~/.pi/agent/extensions/
└── my-extension/
    ├── index.ts        # Entry point (exports default function)
    ├── tools.ts        # Helper module
    └── utils.ts        # Helper module
```

**Package with dependencies** - for extensions that need npm packages:

```
~/.pi/agent/extensions/
└── my-extension/
    ├── package.json    # Declares dependencies and entry points
    ├── package-lock.json
    ├── node_modules/   # After npm install
    └── src/
        └── index.ts
```

```json
// package.json
{
  "name": "my-extension",
  "dependencies": {
    "zod": "^3.0.0",
    "chalk": "^5.0.0"
  },
  "pi": {
    "extensions": ["./src/index.ts"]
  }
}
```

Run `npm install` in the extension directory, then imports from `node_modules/` work automatically.

## Events

### Lifecycle Overview

```
pi starts
  │
  ├─► session_start { reason: "startup" }
  └─► resources_discover { reason: "startup" }
      │
      ▼
user sends prompt ─────────────────────────────────────────┐
  │                                                        │
  ├─► (extension commands checked first, bypass if found)  │
  ├─► input (can intercept, transform, or handle)          │
  ├─► (skill/template expansion if not handled)            │
  ├─► before_agent_start (can inject message, modify system prompt)
  ├─► agent_start                                          │
  ├─► message_start / message_update / message_end         │
  │                                                        │
  │   ┌─── turn (repeats while LLM calls tools) ───┐       │
  │   │                                            │       │
  │   ├─► turn_start                               │       │
  │   ├─► context (can modify messages)            │       │
  │   ├─► before_provider_request (can inspect or replace payload)
  │   │                                            │       │
  │   │   LLM responds, may call tools:            │       │
  │   │     ├─► tool_execution_start               │       │
  │   │     ├─► tool_call (can block)              │       │
  │   │     ├─► tool_execution_update              │       │
  │   │     ├─► tool_result (can modify)           │       │
  │   │     └─► tool_execution_end                 │       │
  │   │                                            │       │
  │   └─► turn_end                                 │       │
  │                                                        │
  └─► agent_end                                            │
                                                           │
user sends another prompt ◄────────────────────────────────┘

/new (new session) or /resume (switch session)
  ├─► session_before_switch (can cancel)
  ├─► session_shutdown
  ├─► session_start { reason: "new" | "resume", previousSessionFile? }
  └─► resources_discover { reason: "startup" }

/fork
  ├─► session_before_fork (can cancel)
  ├─► session_shutdown
  ├─► session_start { reason: "fork", previousSessionFile }
  └─► resources_discover { reason: "startup" }

/compact or auto-compaction
  ├─► session_before_compact (can cancel or customize)
  └─► session_compact

/tree navigation
  ├─► session_before_tree (can cancel or customize)
  └─► session_tree

/model or Ctrl+P (model selection/cycling)
  └─► model_select

exit (Ctrl+C, Ctrl+D)
  └─► session_shutdown
```

### Resource Events

#### resources_discover

Fired after `session_start` so extensions can contribute additional skill, prompt, and theme paths.
The startup path uses `reason: "startup"`. Reload uses `reason: "reload"`.

```typescript
pi.on("resources_discover", async (event, _ctx) => {
  // event.cwd - current working directory
  // event.reason - "startup" | "reload"
  return {
    skillPaths: ["/path/to/skills"],
    promptPaths: ["/path/to/prompts"],
    themePaths: ["/path/to/themes"],
  };
});
```

### Session Events

See [session.md](session.md) for session storage internals and the SessionManager API.

#### session_start

Fired when a session is started, loaded, or reloaded.

```typescript
pi.on("session_start", async (event, ctx) => {
  // event.reason - "startup" | "reload" | "new" | "resume" | "fork"
  // event.previousSessionFile - present for "new", "resume", and "fork"
  ctx.ui.notify(`Session: ${ctx.sessionManager.getSessionFile() ?? "ephemeral"}`, "info");
});
```

#### session_before_switch

Fired before starting a new session (`/new`) or switching sessions (`/resume`).

```typescript
pi.on("session_before_switch", async (event, ctx) => {
  // event.reason - "new" or "resume"
  // event.targetSessionFile - session we're switching to (only for "resume")

  if (event.reason === "new") {
    const ok = await ctx.ui.confirm("Clear?", "Delete all messages?");
    if (!ok) return { cancel: true };
  }
});
```

After a successful switch or new-session action, pi emits `session_shutdown` for the old extension instance, reloads and rebinds extensions for the new session, then emits `session_start` with `reason: "new" | "resume"` and `previousSessionFile`.
Do cleanup work in `session_shutdown`, then reestablish any in-memory state in `session_start`.

#### session_before_fork

Fired when forking via `/fork`.

```typescript
pi.on("session_before_fork", async (event, ctx) => {
  // event.entryId - ID of the entry being forked from
  return { cancel: true }; // Cancel fork
  // OR
  return { skipConversationRestore: true }; // Fork but don't rewind messages
});
```

After a successful fork, pi emits `session_shutdown` for the old extension instance, reloads and rebinds extensions for the new session, then emits `session_start` with `reason: "fork"` and `previousSessionFile`.
Do cleanup work in `session_shutdown`, then reestablish any in-memory state in `session_start`.

#### session_before_compact / session_compact

Fired on compaction. See [compaction.md](compaction.md) for details.

```typescript
pi.on("session_before_compact", async (event, ctx) => {
  const { preparation, branchEntries, customInstructions, signal } = event;

  // Cancel:
  return { cancel: true };

  // Custom summary:
  return {
    compaction: {
      summary: "...",
      firstKeptEntryId: preparation.firstKeptEntryId,
      tokensBefore: preparation.tokensBefore,
    }
  };
});

pi.on("session_compact", async (event, ctx) => {
  // event.compactionEntry - the saved compaction
  // event.fromExtension - whether extension provided it
});
```

#### session_before_tree / session_tree

Fired on `/tree` navigation. See [tree.md](tree.md) for tree navigation concepts.

```typescript
pi.on("session_before_tree", async (event, ctx) => {
  const { preparation, signal } = event;
  return { cancel: true };
  // OR provide custom summary:
  return { summary: { summary: "...", details: {} } };
});

pi.on("session_tree", async (event, ctx) => {
  // event.newLeafId, oldLeafId, summaryEntry, fromExtension
});
```

#### session_shutdown

Fired on exit (Ctrl+C, Ctrl+D, SIGTERM).

```typescript
pi.on("session_shutdown", async (_event, ctx) => {
  // Cleanup, save state, etc.
});
```

### Agent Events

#### before_agent_start

Fired after user submits prompt, before agent loop. Can inject a message and/or modify the system prompt.

```typescript
pi.on("before_agent_start", async (event, ctx) => {
  // event.prompt - user's prompt text
  // event.images - attached images (if any)
  // event.systemPrompt - current system prompt

  return {
    // Inject a persistent message (stored in session, sent to LLM)
    message: {
      customType: "my-extension",
      content: "Additional context for the LLM",
      display: true,
    },
    // Replace the system prompt for this turn (chained across extensions)
    systemPrompt: event.systemPrompt + "\n\nExtra instructions for this turn...",
  };
});
```

#### agent_start / agent_end

Fired once per user prompt.

```typescript
pi.on("agent_start", async (_event, ctx) => {});

pi.on("agent_end", async (event, ctx) => {
  // event.messages - messages from this prompt
});
```

#### turn_start / turn_end

Fired for each turn (one LLM response + tool calls).

```typescript
pi.on("turn_start", async (event, ctx) => {
  // event.turnIndex, event.timestamp
});

pi.on("turn_end", async (event, ctx) => {
  // event.turnIndex, event.message, event.toolResults
});
```

#### message_start / message_update / message_end

Fired for message lifecycle updates.

- `message_start` and `message_end` fire for user, assistant, and toolResult messages.
- `message_update` fires for assistant streaming updates.

```typescript
pi.on("message_start", async (event, ctx) => {
  // event.message
});

pi.on("message_update", async (event, ctx) => {
  // event.message
  // event.assistantMessageEvent (token-by-token stream event)
});

pi.on("message_end", async (event, ctx) => {
  // event.message
});
```

#### tool_execution_start / tool_execution_update / tool_execution_end

Fired for tool execution lifecycle updates.

In parallel tool mode:
- `tool_execution_start` is emitted in assistant source order during the preflight phase
- `tool_execution_update` events may interleave across tools
- `tool_execution_end` is emitted in assistant source order, matching final tool result message order

```typescript
pi.on("tool_execution_start", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.args
});

pi.on("tool_execution_update", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.args, event.partialResult
});

pi.on("tool_execution_end", async (event, ctx) => {
  // event.toolCallId, event.toolName, event.result, event.isError
});
```

#### context

Fired before each LLM call. Modify messages non-destructively. See [session.md](session.md) for message types.

```typescript
pi.on("context", async (event, ctx) => {
  // event.messages - deep copy, safe to modify
  const filtered = event.messages.filter(m => !shouldPrune(m));
  return { messages: filtered };
});
```

#### before_provider_request

Fired after the provider-specific payload is built, right before the request is sent. Handlers run in extension load order. Returning `undefined` keeps the payload unchanged. Returning any other value replaces the payload for later handlers and for the actual request.

```typescript
pi.on("before_provider_request", (event, ctx) => {
  console.log(JSON.stringify(event.payload, null, 2));

  // Optional: replace payload
  // return { ...event.payload, temperature: 0 };
});
```

This is mainly useful for debugging provider serialization and cache behavior.

### Model Events

#### model_select

Fired when the model changes via `/model` command, model cycling (`Ctrl+P`), or session restore.

```typescript
pi.on("model_select", async (event, ctx) => {
  // event.model - newly selected model
  // event.previousModel - previous model (undefined if first selection)
  // event.source - "set" | "cycle" | "restore"

  const prev = event.previousModel
    ? `${event.previousModel.provider}/${event.previousModel.id}`
    : "none";
  const next = `${event.model.provider}/${event.model.id}`;

  ctx.ui.notify(`Model changed (${event.source}): ${prev} -> ${next}`, "info");
});
```

Use this to update UI elements (status bars, footers) or perform model-specific initialization when the active model changes.

### Tool Events

#### tool_call

Fired after `tool_execution_start`, before the tool executes. **Can block.** Use `isToolCallEventType` to narrow and get typed inputs.

Before `tool_call` runs, pi waits for previously emitted Agent events to finish draining through `AgentSession`. This means `ctx.sessionManager` is up to date through the current assistant tool-calling message.

In the default parallel tool execution mode, sibling tool calls from the same assistant message are preflighted sequentially, then executed concurrently. `tool_call` is not guaranteed to see sibling tool results from that same assistant message in `ctx.sessionManager`.

`event.input` is mutable. Mutate it in place to patch tool arguments before execution.

Behavior guarantees:
- Mutations to `event.input` affect the actual tool execution
- Later `tool_call` handlers see mutations made by earlier handlers
- No re-validation is performed after your mutation
- Return values from `tool_call` only control blocking via `{ block: true, reason?: string }`

```typescript
import { isToolCallEventType } from "@mariozechner/pi-coding-agent";

pi.on("tool_call", async (event, ctx) => {
  // event.toolName - "bash", "read", "write", "edit", etc.
  // event.toolCallId
  // event.input - tool parameters (mutable)

  // Built-in tools: no type params needed
  if (isToolCallEventType("bash", event)) {
    // event.input is { command: string; timeout?: number }
    event.input.command = `source ~/.profile\n${event.input.command}`;

    if (event.input.command.includes("rm -rf")) {
      return { block: true, reason: "Dangerous command" };
    }
  }

  if (isToolCallEventType("read", event)) {
    // event.input is { path: string; offset?: number; limit?: number }
    console.log(`Reading: ${event.input.path}`);
  }
});
```

#### Typing custom tool input

Custom tools should export their input type:

```typescript
// my-extension.ts
import type { Static } from "@sinclair/typebox";

export type MyToolInput = Static<typeof myToolSchema>;
```

Use `isToolCallEventType` with explicit type parameters:

```typescript
import { isToolCallEventType } from "@mariozechner/pi-coding-agent";
import type { MyToolInput } from "my-extension";

pi.on("tool_call", (event) => {
  if (isToolCallEventType<"my_tool", MyToolInput>("my_tool", event)) {
    event.input.action;  // typed
  }
});
```

#### tool_result

Fired after tool execution finishes, before `tool_execution_end` and the final tool result message events are emitted. **Can modify result.**

`tool_result` handlers chain like middleware:
- Handlers run in extension load order
- Each handler sees the latest result after previous handler changes
- Handlers can return partial patches (`content`, `details`, or `isError`); omitted fields keep their current values

Use `ctx.signal` for nested async work inside the handler. This lets Esc cancel model calls, `fetch()`, and other abort-aware operations started by the extension.

```typescript
import { isBashToolResult } from "@mariozechner/pi-coding-agent";

pi.on("tool_result", async (event, ctx) => {
  // event.toolName, event.toolCallId, event.input
  // event.content, event.details, event.isError

  if (isBashToolResult(event)) {
    // event.details is typed as BashToolDetails
  }

  const response = await fetch("https://example.com/summarize", {
    method: "POST",
    body: JSON.stringify({ content: event.content }),
    signal: ctx.signal,
  });

  // Modify result:
  return { content: [...], details: {...}, isError: false };
});
```

### User Bash Events

#### user_bash

Fired when user executes `!` or `!!` commands. **Can intercept.**

```typescript
import { createLocalBashOperations } from "@mariozechner/pi-coding-agent";

pi.on("user_bash", (event, ctx) => {
  // event.command - the bash command
  // event.excludeFromContext - true if !! prefix
  // event.cwd - working directory

  // Option 1: Provide custom operations (e.g., SSH)
  return { operations: remoteBashOps };

  // Option 2: Wrap pi's built-in local bash backend
  const local = createLocalBashOperations();
  return {
    operations: {
      exec(command, cwd, options) {
        return local.exec(`source ~/.profile\n${command}`, cwd, options);
      }
    }
  };

  // Option 3: Full replacement - return result directly
  return { result: { output: "...", exitCode: 0, cancelled: false, truncated: false } };
});
```

### Input Events

#### input

Fired when user input is received, after extension commands are checked but before skill and template expansion. The event sees the raw input text, so `/skill:foo` and `/template` are not yet expanded.

**Processing order:**
1. Extension commands (`/cmd`) checked first - if found, handler runs and input event is skipped
2. `input` event fires - can intercept, transform, or handle
3. If not handled: skill commands (`/skill:name`) expanded to skill content
4. If not handled: prompt templates (`/template`) expanded to template content
5. Agent processing begins (`before_agent_start`, etc.)

```typescript
pi.on("input", async (event, ctx) => {
  // event.text - raw input (before skill/template expansion)
  // event.images - attached images, if any
  // event.source - "interactive" (typed), "rpc" (API), or "extension" (via sendUserMessage)

  // Transform: rewrite input before expansion
  if (event.text.startsWith("?quick ")) {
    return { action: "transform", text: `Respond briefly: ${event.text.slice(7)}` };
  }

  // Handle: respond without LLM (extension shows its own feedback)
  if (event.text === "ping") {
    ctx.ui.notify("pong", "info");
    return { action: "handled" };
  }

  // Route by source: skip processing for extension-injected messages
  if (event.source === "extension") return { action: "continue" };

  // Intercept skill commands before expansion
  if (event.text.startsWith("/skill:")) {
    // Could transform, block, or let pass through
  }

  return { action: "continue" };  // Default: pass through to expansion
});
```

**Results:**
- `continue` - pass through unchanged (default if handler returns nothing)
- `transform` - modify text/images, then continue to expansion
- `handled` - skip agent entirely (first handler to return this wins)

Transforms chain across handlers. See [input-transform.ts](../examples/extensions/input-transform.ts).

## ExtensionContext

All handlers receive `ctx: ExtensionContext`.

### ctx.ui

UI methods for user interaction. See [Custom UI](#custom-ui) for full details.

### ctx.hasUI

`false` in print mode (`-p`) and JSON mode. `true` in interactive and RPC mode. In RPC mode, dialog methods (`select`, `confirm`, `input`, `editor`) work via the extension UI sub-protocol, and fire-and-forget methods (`notify`, `setStatus`, `setWidget`, `setTitle`, `setEditorText`) emit requests to the client. Some TUI-specific methods are no-ops or return defaults (see [rpc.md](rpc.md#extension-ui-protocol)).
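
A common pattern is to guard fire-and-forget UI calls behind `ctx.hasUI` so the same extension also runs cleanly in print mode. The sketch below uses simplified local stand-ins for the real pi types: the `UiLike`/`CtxLike` interfaces and the `notifyIfInteractive` helper are illustrative, not part of pi's API.

```typescript
// Simplified structural stand-ins for the parts of ExtensionContext used here;
// the real type comes from @mariozechner/pi-coding-agent.
interface UiLike {
  notify(message: string, level: "info" | "success" | "error"): void;
}
interface CtxLike {
  hasUI: boolean;
  ui: UiLike;
}

// Only surface notifications when a UI is attached; in print/JSON mode
// (hasUI === false) the extension stays silent instead of erroring.
function notifyIfInteractive(ctx: CtxLike, message: string): boolean {
  if (!ctx.hasUI) return false; // print mode: skip UI entirely
  ctx.ui.notify(message, "info");
  return true;
}
```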

### ctx.cwd

Current working directory.

### ctx.sessionManager

Read-only access to session state. See [session.md](session.md) for the full SessionManager API and entry types.

For `tool_call`, this state is synchronized through the current assistant message before handlers run. In parallel tool execution mode it is still not guaranteed to include sibling tool results from the same assistant message.

```typescript
ctx.sessionManager.getEntries()       // All entries
ctx.sessionManager.getBranch()        // Current branch
ctx.sessionManager.getLeafId()        // Current leaf entry ID
```

### ctx.modelRegistry / ctx.model

Access to models and API keys.

### ctx.signal

The current agent abort signal, or `undefined` when no agent turn is active.

Use this for abort-aware nested work started by extension handlers, for example:
- `fetch(..., { signal: ctx.signal })`
- model calls that accept `signal`
- file or process helpers that accept `AbortSignal`

`ctx.signal` is typically defined during active turn events such as `tool_call`, `tool_result`, `message_update`, and `turn_end`.
It is usually `undefined` in idle or non-turn contexts such as session events, extension commands, and shortcuts fired while pi is idle.

```typescript
pi.on("tool_result", async (event, ctx) => {
  const response = await fetch("https://example.com/api", {
    method: "POST",
    body: JSON.stringify(event),
    signal: ctx.signal,
  });

  const data = await response.json();
  return { details: data };
});
```

### ctx.isIdle() / ctx.abort() / ctx.hasPendingMessages()

Control flow helpers.
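
For example, a shortcut handler might branch on these helpers before starting work. The `FlowCtx` interface below is a simplified local stand-in for `ExtensionContext`, and the `chooseAction` policy is purely illustrative:

```typescript
// Simplified stand-in for the control-flow surface of ExtensionContext;
// the real type comes from @mariozechner/pi-coding-agent.
interface FlowCtx {
  isIdle(): boolean;
  hasPendingMessages(): boolean;
  abort(): void;
}

type FlowAction = "run" | "wait" | "abort-first";

// Hypothetical policy: run immediately when fully idle, wait while queued
// follow-ups are still draining, otherwise abort the in-flight turn first.
function chooseAction(ctx: FlowCtx): FlowAction {
  if (ctx.isIdle() && !ctx.hasPendingMessages()) return "run";
  if (ctx.hasPendingMessages()) return "wait";
  return "abort-first";
}
```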

### ctx.shutdown()

Request a graceful shutdown of pi.

- **Interactive mode:** Deferred until the agent becomes idle (after processing all queued steering and follow-up messages).
- **RPC mode:** Deferred until the next idle state (after completing the current command response, when waiting for the next command).
- **Print mode:** No-op. The process exits automatically when all prompts are processed.

Emits `session_shutdown` event to all extensions before exiting. Available in all contexts (event handlers, tools, commands, shortcuts).

```typescript
pi.on("tool_call", (event, ctx) => {
  if (isFatal(event.input)) {
    ctx.shutdown();
  }
});
```

### ctx.getContextUsage()

Returns current context usage for the active model. Uses last assistant usage when available, then estimates tokens for trailing messages.

```typescript
const usage = ctx.getContextUsage();
if (usage && usage.tokens > 100_000) {
  // ...
}
```

### ctx.compact()

Trigger compaction without awaiting completion. Use `onComplete` and `onError` for follow-up actions.

```typescript
ctx.compact({
  customInstructions: "Focus on recent changes",
  onComplete: (result) => {
    ctx.ui.notify("Compaction completed", "info");
  },
  onError: (error) => {
    ctx.ui.notify(`Compaction failed: ${error.message}`, "error");
  },
});
```

### ctx.getSystemPrompt()

Returns the current effective system prompt. This includes any modifications made by `before_agent_start` handlers for the current turn.

```typescript
pi.on("before_agent_start", (event, ctx) => {
  const prompt = ctx.getSystemPrompt();
  console.log(`System prompt length: ${prompt.length}`);
});
```

## ExtensionCommandContext

Command handlers receive `ExtensionCommandContext`, which extends `ExtensionContext` with session control methods. These are only available in commands because they can deadlock if called from event handlers.

### ctx.waitForIdle()

Wait for the agent to finish streaming:

```typescript
pi.registerCommand("my-cmd", {
  handler: async (args, ctx) => {
    await ctx.waitForIdle();
    // Agent is now idle, safe to modify session
  },
});
```

### ctx.newSession(options?)

Create a new session:

```typescript
const result = await ctx.newSession({
  parentSession: ctx.sessionManager.getSessionFile(),
  setup: async (sm) => {
    sm.appendMessage({
      role: "user",
      content: [{ type: "text", text: "Context from previous session..." }],
      timestamp: Date.now(),
    });
  },
});

if (result.cancelled) {
  // An extension cancelled the new session
}
```

### ctx.fork(entryId)

Fork from a specific entry, creating a new session file:

```typescript
const result = await ctx.fork("entry-id-123");
if (!result.cancelled) {
  // Now in the forked session
}
```

### ctx.navigateTree(targetId, options?)

Navigate to a different point in the session tree:

```typescript
const result = await ctx.navigateTree("entry-id-456", {
  summarize: true,
  customInstructions: "Focus on error handling changes",
  replaceInstructions: false, // true = replace default prompt entirely
  label: "review-checkpoint",
});
```

Options:
- `summarize`: Whether to generate a summary of the abandoned branch
- `customInstructions`: Custom instructions for the summarizer
- `replaceInstructions`: If true, `customInstructions` replaces the default prompt instead of being appended
- `label`: Label to attach to the branch summary entry (or target entry if not summarizing)

### ctx.switchSession(sessionPath)

Switch to a different session file:

```typescript
const result = await ctx.switchSession("/path/to/session.jsonl");
if (result.cancelled) {
  // An extension cancelled the switch via session_before_switch
}
```

To discover available sessions, use the static `SessionManager.list()` or `SessionManager.listAll()` methods:

```typescript
import { SessionManager } from "@mariozechner/pi-coding-agent";

pi.registerCommand("switch", {
  description: "Switch to another session",
  handler: async (args, ctx) => {
    const sessions = await SessionManager.list(ctx.cwd);
    if (sessions.length === 0) return;
    const choice = await ctx.ui.select(
      "Pick session:",
      sessions.map(s => s.file),
    );
    if (choice) {
      await ctx.switchSession(choice);
    }
  },
});
```

### ctx.reload()

Run the same reload flow as `/reload`.

```typescript
pi.registerCommand("reload-runtime", {
  description: "Reload extensions, skills, prompts, and themes",
  handler: async (_args, ctx) => {
    await ctx.reload();
    return;
  },
});
```

Important behavior:
- `await ctx.reload()` emits `session_shutdown` for the current extension runtime
- It then reloads resources and emits `session_start` with `reason: "reload"` and `resources_discover` with reason `"reload"`
- The currently running command handler still continues in the old call frame
- Code after `await ctx.reload()` still runs from the pre-reload version
- Code after `await ctx.reload()` must not assume old in-memory extension state is still valid
- After the handler returns, future commands/events/tool calls use the new extension version

For predictable behavior, treat reload as terminal for that handler (`await ctx.reload(); return;`).

Tools run with `ExtensionContext`, so they cannot call `ctx.reload()` directly. Use a command as the reload entrypoint, then expose a tool that queues that command as a follow-up user message.

Example tool the LLM can call to trigger reload:

```typescript
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Type } from "@sinclair/typebox";

export default function (pi: ExtensionAPI) {
  pi.registerCommand("reload-runtime", {
    description: "Reload extensions, skills, prompts, and themes",
    handler: async (_args, ctx) => {
      await ctx.reload();
      return;
    },
  });

  pi.registerTool({
    name: "reload_runtime",
    label: "Reload Runtime",
    description: "Reload extensions, skills, prompts, and themes",
    parameters: Type.Object({}),
    async execute() {
      pi.sendUserMessage("/reload-runtime", { deliverAs: "followUp" });
      return {
        content: [{ type: "text", text: "Queued /reload-runtime as a follow-up command." }],
      };
    },
  });
}
```

## ExtensionAPI Methods

### pi.on(event, handler)

Subscribe to events. See [Events](#events) for event types and return values.

### pi.registerTool(definition)

Register a custom tool callable by the LLM. See [Custom Tools](#custom-tools) for full details.

`pi.registerTool()` works both during extension load and after startup. You can call it inside `session_start`, command handlers, or other event handlers. New tools are refreshed immediately in the same session, so they appear in `pi.getAllTools()` and are callable by the LLM without `/reload`.

Use `pi.setActiveTools()` to enable or disable tools (including dynamically added tools) at runtime.

Use `promptSnippet` to opt a custom tool into a one-line entry in `Available tools`, and `promptGuidelines` to append tool-specific bullets to the default `Guidelines` section when the tool is active.

See [dynamic-tools.ts](../examples/extensions/dynamic-tools.ts) for a full example.

```typescript
import { Type } from "@sinclair/typebox";
import { StringEnum } from "@mariozechner/pi-ai";

pi.registerTool({
  name: "my_tool",
  label: "My Tool",
  description: "What this tool does",
  promptSnippet: "Summarize or transform text according to action",
  promptGuidelines: ["Use this tool when the user asks to summarize previously generated text."],
  parameters: Type.Object({
    action: StringEnum(["list", "add"] as const),
    text: Type.Optional(Type.String()),
  }),
  prepareArguments(args) {
    // Optional compatibility shim. Runs before schema validation.
    // Return the current schema shape, for example to fold legacy fields
    // into the modern parameter object.
    return args;
  },

  async execute(toolCallId, params, signal, onUpdate, ctx) {
    // Stream progress
    onUpdate?.({ content: [{ type: "text", text: "Working..." }] });

    return {
      content: [{ type: "text", text: "Done" }],
      details: { result: "..." },
    };
  },

  // Optional: Custom rendering
  renderCall(args, theme, context) { ... },
  renderResult(result, options, theme, context) { ... },
});
```

### pi.sendMessage(message, options?)

Inject a custom message into the session.

```typescript
pi.sendMessage({
  customType: "my-extension",
  content: "Message text",
  display: true,
  details: { ... },
}, {
  triggerTurn: true,
  deliverAs: "steer",
});
```

**Options:**
- `deliverAs` - Delivery mode:
  - `"steer"` (default) - Queues the message while streaming. Delivered after the current assistant turn finishes executing its tool calls, before the next LLM call.
  - `"followUp"` - Waits for agent to finish. Delivered only when agent has no more tool calls.
  - `"nextTurn"` - Queued for next user prompt. Does not interrupt or trigger anything.
- `triggerTurn: true` - If agent is idle, trigger an LLM response immediately. Only applies to `"steer"` and `"followUp"` modes (ignored for `"nextTurn"`).

### pi.sendUserMessage(content, options?)

Send a user message to the agent. Unlike `sendMessage()` which sends custom messages, this sends an actual user message that appears as if typed by the user. Always triggers a turn.

```typescript
// Simple text message
pi.sendUserMessage("What is 2+2?");

// With content array (text + images)
pi.sendUserMessage([
  { type: "text", text: "Describe this image:" },
  { type: "image", source: { type: "base64", mediaType: "image/png", data: "..." } },
]);

// During streaming - must specify delivery mode
pi.sendUserMessage("Focus on error handling", { deliverAs: "steer" });
pi.sendUserMessage("And then summarize", { deliverAs: "followUp" });
```

**Options:**
- `deliverAs` - Required when agent is streaming:
  - `"steer"` - Queues the message for delivery after the current assistant turn finishes executing its tool calls
  - `"followUp"` - Waits for agent to finish all tools

When not streaming, the message is sent immediately and triggers a new turn. When streaming without `deliverAs`, throws an error.

See [send-user-message.ts](../examples/extensions/send-user-message.ts) for a complete example.

### pi.appendEntry(customType, data?)

Persist extension state (does NOT participate in LLM context).

```typescript
pi.appendEntry("my-state", { count: 42 });

// Restore on reload
pi.on("session_start", async (_event, ctx) => {
  for (const entry of ctx.sessionManager.getEntries()) {
    if (entry.type === "custom" && entry.customType === "my-state") {
      // Reconstruct from entry.data
    }
  }
});
```

### pi.setSessionName(name)

Set the session display name (shown in session selector instead of first message).

```typescript
pi.setSessionName("Refactor auth module");
```

### pi.getSessionName()

Get the current session name, if set.

```typescript
const name = pi.getSessionName();
if (name) {
  console.log(`Session: ${name}`);
}
```

### pi.setLabel(entryId, label)

Set or clear a label on an entry. Labels are user-defined markers for bookmarking and navigation (shown in `/tree` selector).

```typescript
// Set a label
pi.setLabel(entryId, "checkpoint-before-refactor");

// Clear a label
pi.setLabel(entryId, undefined);

// Read labels via sessionManager
const label = ctx.sessionManager.getLabel(entryId);
```

Labels persist in the session and survive restarts. Use them to mark important points (turns, checkpoints) in the conversation tree.

### pi.registerCommand(name, options)

Register a command.

If multiple extensions register the same command name, pi keeps them all and assigns numeric invocation suffixes in load order, for example `/review:1` and `/review:2`.

```typescript
pi.registerCommand("stats", {
  description: "Show session statistics",
  handler: async (args, ctx) => {
    const count = ctx.sessionManager.getEntries().length;
    ctx.ui.notify(`${count} entries`, "info");
  }
});
```

Optional: add argument auto-completion for `/command ...`:

```typescript
import type { AutocompleteItem } from "@mariozechner/pi-tui";

pi.registerCommand("deploy", {
  description: "Deploy to an environment",
  getArgumentCompletions: (prefix: string): AutocompleteItem[] | null => {
    const envs = ["dev", "staging", "prod"];
    const items = envs.map((e) => ({ value: e, label: e }));
    const filtered = items.filter((i) => i.value.startsWith(prefix));
    return filtered.length > 0 ? filtered : null;
  },
  handler: async (args, ctx) => {
    ctx.ui.notify(`Deploying: ${args}`, "info");
  },
});
```

### pi.getCommands()

Get the slash commands available for invocation via `prompt` in the current session. Includes extension commands, prompt templates, and skill commands.
The list matches the RPC `get_commands` ordering: extensions first, then templates, then skills.

```typescript
const commands = pi.getCommands();
const bySource = commands.filter((command) => command.source === "extension");
const userScoped = commands.filter((command) => command.sourceInfo.scope === "user");
```

Each entry has this shape:

```typescript
{
  name: string; // Invokable command name without the leading slash. May be suffixed like "review:1"
  description?: string;
  source: "extension" | "prompt" | "skill";
  sourceInfo: {
    path: string;
    source: string;
    scope: "user" | "project" | "temporary";
    origin: "package" | "top-level";
    baseDir?: string;
  };
}
```

Use `sourceInfo` as the canonical provenance field. Do not infer ownership from command names or from ad hoc path parsing.

Built-in interactive commands (like `/model` and `/settings`) are not included here. They are handled only in interactive
mode and would not execute if sent via `prompt`.

### pi.registerMessageRenderer(customType, renderer)

Register a custom TUI renderer for messages with your `customType`. See [Custom UI](#custom-ui).

### pi.registerShortcut(shortcut, options)

Register a keyboard shortcut. See [keybindings.md](keybindings.md) for the shortcut format and built-in keybindings.

```typescript
pi.registerShortcut("ctrl+shift+p", {
  description: "Toggle plan mode",
  handler: async (ctx) => {
    ctx.ui.notify("Toggled!");
  },
});
```

### pi.registerFlag(name, options)

Register a CLI flag.

```typescript
pi.registerFlag("plan", {
  description: "Start in plan mode",
  type: "boolean",
  default: false,
});

// Check value
if (pi.getFlag("plan")) {
if (pi.getFlag("plan")) {
  // Plan mode enabled
}
```

### pi.exec(command, args, options?)

Execute a shell command.

```typescript
const result = await pi.exec("git", ["status"], { signal, timeout: 5000 });
// result.stdout, result.stderr, result.code, result.killed
```

### pi.getActiveTools() / pi.getAllTools() / pi.setActiveTools(names)

Manage active tools. This works for both built-in tools and dynamically registered tools.

```typescript
const active = pi.getActiveTools();
const all = pi.getAllTools();
// [{
//   name: "read",
//   description: "Read file contents...",
//   parameters: ..., 
//   sourceInfo: { path: "<builtin:read>", source: "builtin", scope: "temporary", origin: "top-level" }
// }, ...]
const names = all.map(t => t.name);
const builtinTools = all.filter((t) => t.sourceInfo.source === "builtin");
const extensionTools = all.filter((t) => t.sourceInfo.source !== "builtin" && t.sourceInfo.source !== "sdk");
pi.setActiveTools(["read", "bash"]); // Switch to read-only
```

`pi.getAllTools()` returns `name`, `description`, `parameters`, and `sourceInfo`.

Typical `sourceInfo.source` values:
- `builtin` for built-in tools
- `sdk` for tools passed via `createAgentSession({ customTools })`
- extension source metadata for tools registered by extensions

### pi.setModel(model)

Set the current model. Returns `false` if no API key is available for the model. See [models.md](models.md) for configuring custom models.

```typescript
const model = ctx.modelRegistry.find("anthropic", "claude-sonnet-4-5");
if (model) {
  const success = await pi.setModel(model);
  if (!success) {
    ctx.ui.notify("No API key for this model", "error");
  }
}
```

### pi.getThinkingLevel() / pi.setThinkingLevel(level)

Get or set the thinking level. Level is clamped to model capabilities (non-reasoning models always use "off").

```typescript
const current = pi.getThinkingLevel();  // "off" | "minimal" | "low" | "medium" | "high" | "xhigh"
pi.setThinkingLevel("high");
```

### pi.events

Shared event bus for communication between extensions:

```typescript
pi.events.on("my:event", (data) => { ... });
pi.events.emit("my:event", { ... });
```

### pi.registerProvider(name, config)

Register or override a model provider dynamically. Useful for proxies, custom endpoints, or team-wide model configurations.

Calls made during the extension factory function are queued and applied once the runner initializes. Calls made after that — for example from a command handler following a user setup flow — take effect immediately without requiring a `/reload`.

```typescript
// Register a new provider with custom models
pi.registerProvider("my-proxy", {
  baseUrl: "https://proxy.example.com",
  apiKey: "PROXY_API_KEY",  // env var name or literal
  api: "anthropic-messages",
  models: [
    {
      id: "claude-sonnet-4-20250514",
      name: "Claude 4 Sonnet (proxy)",
      reasoning: false,
      input: ["text", "image"],
      cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
      contextWindow: 200000,
      maxTokens: 16384
    }
  ]
});

// Override baseUrl for an existing provider (keeps all models)
pi.registerProvider("anthropic", {
  baseUrl: "https://proxy.example.com"
});

// Register provider with OAuth support for /login
pi.registerProvider("corporate-ai", {
  baseUrl: "https://ai.corp.com",
  api: "openai-responses",
  models: [...],
  oauth: {
    name: "Corporate AI (SSO)",
    async login(callbacks) {
      // Custom OAuth flow
      callbacks.onAuth({ url: "https://sso.corp.com/..." });
      const code = await callbacks.onPrompt({ message: "Enter code:" });
      return { refresh: code, access: code, expires: Date.now() + 3600000 };
    },
    async refreshToken(credentials) {
      // Refresh logic
      return credentials;
    },
    getApiKey(credentials) {
      return credentials.access;
    }
  }
});
```

**Config options:**
- `baseUrl` - API endpoint URL. Required when defining models.
- `apiKey` - API key or environment variable name. Required when defining models (unless `oauth` provided).
- `api` - API type: `"anthropic-messages"`, `"openai-completions"`, `"openai-responses"`, etc.
- `headers` - Custom headers to include in requests.
- `authHeader` - If true, adds `Authorization: Bearer` header automatically.
- `models` - Array of model definitions. If provided, replaces all existing models for this provider.
- `oauth` - OAuth provider config for `/login` support. When provided, the provider appears in the login menu.
- `streamSimple` - Custom streaming implementation for non-standard APIs.

See [custom-provider.md](custom-provider.md) for advanced topics: custom streaming APIs, OAuth details, model definition reference.

### pi.unregisterProvider(name)

Remove a previously registered provider and its models. Built-in models that were overridden by the provider are restored. Has no effect if the provider was not registered.

Like `registerProvider`, this takes effect immediately when called after the initial load phase, so a `/reload` is not required.

```typescript
pi.registerCommand("my-setup-teardown", {
  description: "Remove the custom proxy provider",
  handler: async (_args, _ctx) => {
    pi.unregisterProvider("my-proxy");
  },
});
```

## State Management

Extensions with state should store it in tool result `details` for proper branching support:

```typescript
export default function (pi: ExtensionAPI) {
  let items: string[] = [];

  // Reconstruct state from session
  pi.on("session_start", async (_event, ctx) => {
    items = [];
    for (const entry of ctx.sessionManager.getBranch()) {
      if (entry.type === "message" && entry.message.role === "toolResult") {
        if (entry.message.toolName === "my_tool") {
          items = entry.message.details?.items ?? [];
        }
      }
    }
  });

  pi.registerTool({
    name: "my_tool",
    // ...
    async execute(toolCallId, params, signal, onUpdate, ctx) {
      items.push("new item");
      return {
        content: [{ type: "text", text: "Added" }],
        details: { items: [...items] },  // Store for reconstruction
      };
    },
  });
}
```

## Custom Tools

Register tools the LLM can call via `pi.registerTool()`. Tools appear in the system prompt and can have custom rendering.

Use `promptSnippet` for a short one-line entry in the `Available tools` section in the default system prompt. If omitted, custom tools are left out of that section.

Use `promptGuidelines` to add tool-specific bullets to the default system prompt `Guidelines` section. These bullets are included only while the tool is active (for example, after `pi.setActiveTools([...])`).

Note: Some models are idiots and include the @ prefix in tool path arguments. Built-in tools strip a leading @ before resolving paths. If your custom tool accepts a path, normalize a leading @ as well.
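
A minimal normalization sketch (the helper name is illustrative, not part of the pi API):

```typescript
// Strip a single leading "@" the way built-in tools do before
// resolving the path. Leaves already-clean paths untouched.
function normalizePathArg(path: string): string {
  return path.startsWith("@") ? path.slice(1) : path;
}

normalizePathArg("@src/index.ts"); // "src/index.ts"
normalizePathArg("src/index.ts");  // "src/index.ts"
```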

If your custom tool mutates files, use `withFileMutationQueue()` so it participates in the same per-file queue as built-in `edit` and `write`. This matters because tool calls run in parallel by default. Without the queue, two tools can read the same old file contents, compute different updates, and then whichever write lands last overwrites the other.

Example failure case: your custom tool edits `foo.ts` while built-in `edit` also changes `foo.ts` in the same assistant turn. If your tool does not participate in the queue, both can read the original `foo.ts`, apply separate changes, and one of those changes is lost.

Pass the real target file path to `withFileMutationQueue()`, not the raw user argument. Resolve it to an absolute path first, relative to `ctx.cwd` or your tool's working directory. For existing files, the helper canonicalizes through `realpath()`, so symlink aliases for the same file share one queue. For new files, it falls back to the resolved absolute path because there is nothing to `realpath()` yet.

Queue the entire mutation window on that target path. That includes read-modify-write logic, not just the final write.

```typescript
import { withFileMutationQueue } from "@mariozechner/pi-coding-agent";
import { mkdir, readFile, writeFile } from "node:fs/promises";
import { dirname, resolve } from "node:path";

async execute(_toolCallId, params, _signal, _onUpdate, ctx) {
  const absolutePath = resolve(ctx.cwd, params.path);

  return withFileMutationQueue(absolutePath, async () => {
    await mkdir(dirname(absolutePath), { recursive: true });
    const current = await readFile(absolutePath, "utf8");
    const next = current.replace(params.oldText, params.newText);
    await writeFile(absolutePath, next, "utf8");

    return {
      content: [{ type: "text", text: `Updated ${params.path}` }],
      details: {},
    };
  });
}
```

### Tool Definition

```typescript
import { Type } from "@sinclair/typebox";
import { StringEnum } from "@mariozechner/pi-ai";
import { Text } from "@mariozechner/pi-tui";

pi.registerTool({
  name: "my_tool",
  label: "My Tool",
  description: "What this tool does (shown to LLM)",
  promptSnippet: "List or add items in the project todo list",
  promptGuidelines: [
    "Use this tool for todo planning instead of direct file edits when the user asks for a task list."
  ],
  parameters: Type.Object({
    action: StringEnum(["list", "add"] as const),  // Use StringEnum for Google compatibility
    text: Type.Optional(Type.String()),
  }),
  prepareArguments(args) {
    if (!args || typeof args !== "object") return args;
    const input = args as { action?: string; oldAction?: string };
    if (typeof input.oldAction === "string" && input.action === undefined) {
      return { ...input, action: input.oldAction };
    }
    return args;
  },

  async execute(toolCallId, params, signal, onUpdate, ctx) {
    // Check for cancellation
    if (signal?.aborted) {
      return { content: [{ type: "text", text: "Cancelled" }] };
    }

    // Stream progress updates
    onUpdate?.({
      content: [{ type: "text", text: "Working..." }],
      details: { progress: 50 },
    });

    // Run commands via pi.exec (captured from extension closure)

[Showing lines 1-1534 of 2263 (50.0KB limit). Use offset=1535 to continue.]
```

## Assistant

**Result** (success):
```
<!-- OSS_WEEKEND_START -->
# 🏖️ OSS Weekend

**Issue tracker reopens Monday, April 13, 2026.**

OSS weekend runs Thursday, April 2, 2026 through Monday, April 13, 2026. New issues and PRs from unapproved contributors are auto-closed during this time. Approved contributors can still open issues and PRs if something is genuinely urgent, but please keep that to pressing matters only. For support, join [Discord](https://discord.com/invite/3cU7Bz4UPx).

> _Current focus: at the moment i'm deep in refactoring internals, and need to focus._
<!-- OSS_WEEKEND_END -->

---

<p align="center">
  <a href="https://shittycodingagent.ai">
    <img src="https://shittycodingagent.ai/logo.svg" alt="pi logo" width="128">
  </a>
</p>
<p align="center">
  <a href="https://discord.com/invite/3cU7Bz4UPx"><img alt="Discord" src="https://img.shields.io/badge/discord-community-5865F2?style=flat-square&logo=discord&logoColor=white" /></a>
  <a href="https://www.npmjs.com/package/@mariozechner/pi-coding-agent"><img alt="npm" src="https://img.shields.io/npm/v/@mariozechner/pi-coding-agent?style=flat-square" /></a>
  <a href="https://github.com/badlogic/pi-mono/actions/workflows/ci.yml"><img alt="Build status" src="https://img.shields.io/github/actions/workflow/status/badlogic/pi-mono/ci.yml?style=flat-square&branch=main" /></a>
</p>
<p align="center">
  <a href="https://pi.dev">pi.dev</a> domain graciously donated by
  <br /><br />
  <a href="https://exe.dev"><img src="docs/images/exy.png" alt="Exy mascot" width="48" /><br />exe.dev</a>
</p>

Pi is a minimal terminal coding harness. Adapt pi to your workflows, not the other way around, without having to fork and modify pi internals. Extend it with TypeScript [Extensions](#extensions), [Skills](#skills), [Prompt Templates](#prompt-templates), and [Themes](#themes). Put your extensions, skills, prompt templates, and themes in [Pi Packages](#pi-packages) and share them with others via npm or git.

Pi ships with powerful defaults but skips features like sub agents and plan mode. Instead, you can ask pi to build what you want or install a third party pi package that matches your workflow.

Pi runs in four modes: interactive, print or JSON, RPC for process integration, and an SDK for embedding in your own apps. See [openclaw/openclaw](https://github.com/openclaw/openclaw) for a real-world SDK integration.

## Table of Contents

- [Quick Start](#quick-start)
- [Providers & Models](#providers--models)
- [Interactive Mode](#interactive-mode)
  - [Editor](#editor)
  - [Commands](#commands)
  - [Keyboard Shortcuts](#keyboard-shortcuts)
  - [Message Queue](#message-queue)
- [Sessions](#sessions)
  - [Branching](#branching)
  - [Compaction](#compaction)
- [Settings](#settings)
- [Context Files](#context-files)
- [Customization](#customization)
  - [Prompt Templates](#prompt-templates)
  - [Skills](#skills)
  - [Extensions](#extensions)
  - [Themes](#themes)
  - [Pi Packages](#pi-packages)
- [Programmatic Usage](#programmatic-usage)
- [Philosophy](#philosophy)
- [CLI Reference](#cli-reference)

---

## Quick Start

```bash
npm install -g @mariozechner/pi-coding-agent
```

Authenticate with an API key:

```bash
export ANTHROPIC_API_KEY=[REDACTED]
pi
```

Or use your existing subscription:

```bash
pi
/login  # Then select provider
```

Then just talk to pi. By default, pi gives the model four tools: `read`, `write`, `edit`, and `bash`. The model uses these to fulfill your requests. Add capabilities via [skills](#skills), [prompt templates](#prompt-templates), [extensions](#extensions), or [pi packages](#pi-packages).

**Platform notes:** [Windows](docs/windows.md) | [Termux (Android)](docs/termux.md) | [tmux](docs/tmux.md) | [Terminal setup](docs/terminal-setup.md) | [Shell aliases](docs/shell-aliases.md)

---

## Providers & Models

For each built-in provider, pi maintains a list of tool-capable models, updated with every release. Authenticate via subscription (`/login`) or API key, then select any model from that provider via `/model` (or Ctrl+L).

**Subscriptions:**
- Anthropic Claude Pro/Max
- OpenAI ChatGPT Plus/Pro (Codex)
- GitHub Copilot
- Google Gemini CLI
- Google Antigravity

**API keys:**
- Anthropic
- OpenAI
- Azure OpenAI
- Google Gemini
- Google Vertex
- Amazon Bedrock
- Mistral
- Groq
- Cerebras
- xAI
- OpenRouter
- Vercel AI Gateway
- ZAI
- OpenCode Zen
- OpenCode Go
- Hugging Face
- Kimi For Coding
- MiniMax

See [docs/providers.md](docs/providers.md) for detailed setup instructions.

**Custom providers & models:** Add providers via `~/.pi/agent/models.json` if they speak a supported API (OpenAI, Anthropic, Google). For custom APIs or OAuth, use extensions. See [docs/models.md](docs/models.md) and [docs/custom-provider.md](docs/custom-provider.md).

---

## Interactive Mode

<p align="center"><img src="docs/images/interactive-mode.png" alt="Interactive Mode" width="600"></p>

The interface from top to bottom:

- **Startup header** - Shows shortcuts (`/hotkeys` for all), loaded AGENTS.md files, prompt templates, skills, and extensions
- **Messages** - Your messages, assistant responses, tool calls and results, notifications, errors, and extension UI
- **Editor** - Where you type; border color indicates thinking level
- **Footer** - Working directory, session name, total token/cache usage, cost, context usage, current model

The editor can be temporarily replaced by other UI, like built-in `/settings` or custom UI from extensions (e.g., a Q&A tool that lets the user answer model questions in a structured format). [Extensions](#extensions) can also replace the editor, add widgets above/below it, a status line, custom footer, or overlays.

### Editor

| Feature | How |
|---------|-----|
| File reference | Type `@` to fuzzy-search project files |
| Path completion | Tab to complete paths |
| Multi-line | Shift+Enter (or Ctrl+Enter on Windows Terminal) |
| Images | Ctrl+V to paste (Alt+V on Windows), or drag onto terminal |
| Bash commands | `!command` runs and sends output to LLM, `!!command` runs without sending |

Standard editing keybindings for delete word, undo, etc. See [docs/keybindings.md](docs/keybindings.md).

### Commands

Type `/` in the editor to trigger commands. [Extensions](#extensions) can register custom commands, [skills](#skills) are available as `/skill:name`, and [prompt templates](#prompt-templates) expand via `/templatename`.

| Command | Description |
|---------|-------------|
| `/login`, `/logout` | OAuth authentication |
| `/model` | Switch models |
| `/scoped-models` | Enable/disable models for Ctrl+P cycling |
| `/settings` | Thinking level, theme, message delivery, transport |
| `/resume` | Pick from previous sessions |
| `/new` | Start a new session |
| `/name <name>` | Set session display name |
| `/session` | Show session info (path, tokens, cost) |
| `/tree` | Jump to any point in the session and continue from there |
| `/fork` | Create a new session from the current branch |
| `/compact [prompt]` | Manually compact context, optional custom instructions |
| `/copy` | Copy last assistant message to clipboard |
| `/export [file]` | Export session to HTML file |
| `/share` | Upload as private GitHub gist with shareable HTML link |
| `/reload` | Reload keybindings, extensions, skills, prompts, and context files (themes hot-reload automatically) |
| `/hotkeys` | Show all keyboard shortcuts |
| `/changelog` | Display version history |
| `/quit` | Quit pi |

### Keyboard Shortcuts

See `/hotkeys` for the full list. Customize via `~/.pi/agent/keybindings.json`. See [docs/keybindings.md](docs/keybindings.md).

**Commonly used:**

| Key | Action |
|-----|--------|
| Ctrl+C | Clear editor |
| Ctrl+C twice | Quit |
| Escape | Cancel/abort |
| Escape twice | Open `/tree` |
| Ctrl+L | Open model selector |
| Ctrl+P / Shift+Ctrl+P | Cycle scoped models forward/backward |
| Shift+Tab | Cycle thinking level |
| Ctrl+O | Collapse/expand tool output |
| Ctrl+T | Collapse/expand thinking blocks |

### Message Queue

Submit messages while the agent is working:

- **Enter** queues a *steering* message, delivered after the current assistant turn finishes executing its tool calls
- **Alt+Enter** queues a *follow-up* message, delivered only after the agent finishes all work
- **Escape** aborts and restores queued messages to the editor
- **Alt+Up** pulls queued messages back into the editor

On Windows Terminal, `Alt+Enter` toggles fullscreen by default. Remap it (see [docs/terminal-setup.md](docs/terminal-setup.md)) so pi can receive the follow-up shortcut.

Configure delivery in [settings](docs/settings.md): `steeringMode` and `followUpMode` can be `"one-at-a-time"` (default, waits for response) or `"all"` (delivers all queued at once). `transport` selects provider transport preference (`"sse"`, `"websocket"`, or `"auto"`) for providers that support multiple transports.
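
For reference, a minimal `settings.json` fragment using these keys (the keys and values are from the options listed above; combining them this way is just one possible configuration):

```json
{
  "steeringMode": "one-at-a-time",
  "followUpMode": "all",
  "transport": "auto"
}
```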

---

## Sessions

Sessions are stored as JSONL files with a tree structure. Each entry has an `id` and `parentId`, enabling in-place branching without creating new files. See [docs/session.md](docs/session.md) for file format.
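
To illustrate the tree structure, here is a sketch of reconstructing one branch from such a file. Only the `id`/`parentId` fields come from the documented format; the loader itself is an assumption, and docs/session.md defines the real schema:

```python
import json

def load_branch(path, leaf_id):
    """Walk parentId links from a leaf entry back to the root.

    Assumes one JSON object per line with `id` and `parentId` fields;
    any other fields in the real session format are ignored here.
    """
    entries = {}
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            entries[entry["id"]] = entry
    branch = []
    current = entries.get(leaf_id)
    while current is not None:
        branch.append(current)
        current = entries.get(current.get("parentId"))
    return list(reversed(branch))  # root first
```

Because every entry points at its parent, two branches can share a prefix inside a single file, which is what makes in-place branching possible.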

### Management

Sessions auto-save to `~/.pi/agent/sessions/` organized by working directory.

```bash
pi -c                  # Continue most recent session
pi -r                  # Browse and select from past sessions
pi --no-session        # Ephemeral mode (don't save)
pi --session <path>    # Use specific session file or ID
pi --fork <path>       # Fork specific session file or ID into a new session
```

### Branching

**`/tree`** - Navigate the session tree in-place. Select any previous point, continue from there, and switch between branches. All history preserved in a single file.

<p align="center"><img src="docs/images/tree-view.png" alt="Tree View" width="600"></p>

- Search by typing, fold/unfold and jump between branches with Ctrl+←/Ctrl+→ or Alt+←/Alt+→, page with ←/→
- Filter modes (Ctrl+O): default → no-tools → user-only → labeled-only → all
- Press Shift+L to label entries as bookmarks and Shift+T to toggle label timestamps

**`/fork`** - Create a new session file from the current branch. Opens a selector, copies history up to the selected point, and places that message in the editor for modification.

**`--fork <path|id>`** - Fork an existing session file or partial session UUID directly from the CLI. This copies the full source session into a new session file in the current project.

### Compaction

Long sessions can exhaust context windows. Compaction summarizes older messages while keeping recent ones.

**Manual:** `/compact` or `/compact <custom instructions>`

**Automatic:** Enabled by default. Triggers on context overflow (recovers and retries) or when approaching the limit (proactive). Configure via `/settings` or `settings.json`.

Compaction is lossy. The full history remains in the JSONL file; use `/tree` to revisit. Customize compaction behavior via [extensions](#extensions). See [docs/compaction.md](docs/compaction.md) for internals.

---

## Settings

Use `/settings` to modify common options, or edit JSON files directly:

| Location | Scope |
|----------|-------|
| `~/.pi/agent/settings.json` | Global (all projects) |
| `.pi/settings.json` | Project (overrides global) |

See [docs/settings.md](docs/settings.md) for all options.

---

## Context Files

Pi loads `AGENTS.md` (or `CLAUDE.md`) at startup from:
- `~/.pi/agent/AGENTS.md` (global)
- Parent directories (walking up from cwd)
- Current directory

Use for project instructions, conventions, common commands. All matching files are concatenated.

### System Prompt

Replace the default system prompt with `.pi/SYSTEM.md` (project) or `~/.pi/agent/SYSTEM.md` (global). Append without replacing via `APPEND_SYSTEM.md`.

---

## Customization

### Prompt Templates

Reusable prompts as Markdown files. Type `/name` to expand.

```markdown
<!-- ~/.pi/agent/prompts/review.md -->
Review this code for bugs, security issues, and performance problems.
Focus on: {{focus}}
```

Place in `~/.pi/agent/prompts/`, `.pi/prompts/`, or a [pi package](#pi-packages) to share with others. See [docs/prompt-templates.md](docs/prompt-templates.md).

### Skills

On-demand capability packages following the [Agent Skills standard](https://agentskills.io). Invoke via `/skill:name` or let the agent load them automatically.

```markdown
<!-- ~/.pi/agent/skills/my-skill/SKILL.md -->
# My Skill
Use this skill when the user asks about X.

## Steps
1. Do this
2. Then that
```

Place in `~/.pi/agent/skills/`, `~/.agents/skills/`, `.pi/skills/`, or `.agents/skills/` (from `cwd` up through parent directories) or a [pi package](#pi-packages) to share with others. See [docs/skills.md](docs/skills.md).

### Extensions

<p align="center"><img src="docs/images/doom-extension.png" alt="Doom Extension" width="600"></p>

TypeScript modules that extend pi with custom tools, commands, keyboard shortcuts, event handlers, and UI components.

```typescript
export default function (pi: ExtensionAPI) {
  pi.registerTool({ name: "deploy", ... });
  pi.registerCommand("stats", { ... });
  pi.on("tool_call", async (event, ctx) => { ... });
}
```

**What's possible:**
- Custom tools (or replace built-in tools entirely)
- Sub-agents and plan mode
- Custom compaction and summarization
- Permission gates and path protection
- Custom editors and UI components
- Status lines, headers, footers
- Git checkpointing and auto-commit
- SSH and sandbox execution
- MCP server integration
- Make pi look like Claude Code
- Games while waiting (yes, Doom runs)
- ...anything you can dream up

Place in `~/.pi/agent/extensions/`, `.pi/extensions/`, or a [pi package](#pi-packages) to share with others. See [docs/extensions.md](docs/extensions.md) and [examples/extensions/](examples/extensions/).

### Themes

Built-in: `dark`, `light`. Themes hot-reload: modify the active theme file and pi immediately applies changes.

Place in `~/.pi/agent/themes/`, `.pi/themes/`, or a [pi package](#pi-packages) to share with others. See [docs/themes.md](docs/themes.md).

### Pi Packages

Bundle and share extensions, skills, prompts, and themes via npm or git. Find packages on [npmjs.com](https://www.npmjs.com/search?q=keywords%3Api-package) or [Discord](https://discord.com/channels/1456806362351669492/1457744485428629628).

> **Security:** Pi packages run with full system access. Extensions execute arbitrary code, and skills can instruct the model to perform any action including running executables. Review source code before installing third-party packages.

```bash
pi install npm:@foo/pi-tools
pi install npm:@foo/pi-tools@1.2.3      # pinned version
pi install git:github.com/user/repo
pi install git:github.com/user/repo@v1  # tag or commit
pi install git:[REDACTED]:user/repo
pi install git:[REDACTED]:user/repo@v1  # tag or commit
pi install https://github.com/user/repo
pi install https://github.com/user/repo@v1      # tag or commit
pi install ssh://git@github.com/user/repo
pi install ssh://git@github.com/user/repo@v1    # tag or commit
pi remove npm:@foo/pi-tools
pi uninstall npm:@foo/pi-tools          # alias for remove
pi list
pi update                               # skips pinned packages
pi config                               # enable/disable extensions, skills, prompts, themes
```

Packages install to `~/.pi/agent/git/` (git) or global npm. Use `-l` for project-local installs (`.pi/git/`, `.pi/npm/`). If you use a Node version manager and want package installs to reuse a stable npm context, set `npmCommand` in `settings.json`, for example `["mise", "exec", "node@20", "--", "npm"]`.

Create a package by adding a `pi` key to `package.json`:

```json
{
  "name": "my-pi-package",
  "keywords": ["pi-package"],
  "pi": {
    "extensions": ["./extensions"],
    "skills": ["./skills"],
    "prompts": ["./prompts"],
    "themes": ["./themes"]
  }
}
```

Without a `pi` manifest, pi auto-discovers from conventional directories (`extensions/`, `skills/`, `prompts/`, `themes/`).

See [docs/packages.md](docs/packages.md).

---

## Programmatic Usage

### SDK

```typescript
import { AuthStorage, createAgentSession, ModelRegistry, SessionManager } from "@mariozechner/pi-coding-agent";

const authStorage = AuthStorage.create();
const modelRegistry = ModelRegistry.create(authStorage);
const { session } = await createAgentSession({
  sessionManager: SessionManager.inMemory(),
  authStorage,
  modelRegistry,
});

await session.prompt("What files are in the current directory?");
```

For advanced multi-session runtime replacement, use `createAgentSessionRuntime()` and `AgentSessionRuntime`.

See [docs/sdk.md](docs/sdk.md) and [examples/sdk/](examples/sdk/).

### RPC Mode

For non-Node.js integrations, use RPC mode over stdin/stdout:

```bash
pi --mode rpc
```

RPC mode uses strict LF-delimited JSONL framing. Clients must split records on `\n` only. Do not use generic line readers like Node `readline`, which also split on Unicode separators inside JSON payloads.
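
For example, a non-Node.js client might frame incoming records like this (the record contents are illustrative; see docs/rpc.md for the actual message protocol):

```python
import json

def iter_jsonl_records(stream):
    """Yield JSON records from a binary stream, splitting on LF bytes only.

    JSON strings may legally contain U+2028/U+2029; text-mode line readers
    that treat those as line breaks would split a record mid-payload.
    """
    buffer = b""
    while chunk := stream.read(4096):
        buffer += chunk
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            if line:
                yield json.loads(line)
```

Reading in binary mode and splitting on the byte `b"\n"` sidesteps the Unicode-separator problem entirely.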

See [docs/rpc.md](docs/rpc.md) for the protocol.

---

## Philosophy

Pi is aggressively extensible so it doesn't have to dictate your workflow. Features that other tools bake in can be built with [extensions](#extensions), [skills](#skills), or installed from third-party [pi packages](#pi-packages). This keeps the core minimal while letting you shape pi to fit how you work.

**No MCP.** Build CLI tools with READMEs (see [Skills](#skills)), or build an extension that adds MCP support. [Why?](https://mariozechner.at/posts/2025-11-02-what-if-you-dont-need-mcp/)

**No sub-agents.** There are many ways to do this. Spawn pi instances via tmux, build your own with [extensions](#extensions), or install a package that does it your way.

**No permission popups.** Run in a container, or build your own confirmation flow with [extensions](#extensions) inline with your environment and security requirements.

**No plan mode.** Write plans to files, or build it with [extensions](#extensions), or install a package.

**No built-in to-dos.** They confuse models. Use a TODO.md file, or build your own with [extensions](#extensions).

**No background bash.** Use tmux. Full observability, direct interaction.

Read the [blog post](https://mariozechner.at/posts/2025-11-30-pi-coding-agent/) for the full rationale.

---

## CLI Reference

```bash
pi [options] [@files...] [messages...]
```

### Package Commands

```bash
pi install <source> [-l]     # Install package, -l for project-local
pi remove <source> [-l]      # Remove package
pi uninstall <source> [-l]   # Alias for remove
pi update [source]           # Update packages (skips pinned)
pi list                      # List installed packages
pi config                    # Enable/disable package resources
```

### Modes

| Flag | Description |
|------|-------------|
| (default) | Interactive mode |
| `-p`, `--print` | Print response and exit |
| `--mode json` | Output all events as JSON lines (see [docs/json.md](docs/json.md)) |
| `--mode rpc` | RPC mode for process integration (see [docs/rpc.md](docs/rpc.md)) |
| `--export <in> [out]` | Export session to HTML |

In print mode, pi also reads piped stdin and merges it into the initial prompt:

```bash
cat README.md | pi -p "Summarize this text"
```

### Model Options

| Option | Description |
|--------|-------------|
| `--provider <name>` | Provider (anthropic, openai, google, etc.) |
| `--model <pattern>` | Model pattern or ID (supports `provider/id` and optional `:<thinking>`) |
| `--api-key <key>` | API key (overrides env vars) |
| `--thinking <level>` | `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
| `--models <patterns>` | Comma-separated patterns for Ctrl+P cycling |
| `--list-models [search]` | List available models |

### Session Options

| Option | Description |
|--------|-------------|
| `-c`, `--continue` | Continue most recent session |
| `-r`, `--resume` | Browse and select session |
| `--session <path>` | Use specific session file or partial UUID |
| `--fork <path>` | Fork specific session file or partial UUID into a new session |
| `--session-dir <dir>` | Custom session storage directory |
| `--no-session` | Ephemeral mode (don't save) |

### Tool Options

| Option | Description |
|--------|-------------|
| `--tools <list>` | Enable specific built-in tools (default: `read,bash,edit,write`) |
| `--no-tools` | Disable all built-in tools (extension tools still work) |

Available built-in tools: `read`, `bash`, `edit`, `write`, `grep`, `find`, `ls`

### Resource Options

| Option | Description |
|--------|-------------|
| `-e`, `--extension <source>` | Load extension from path, npm, or git (repeatable) |
| `--no-extensions` | Disable extension discovery |
| `--skill <path>` | Load skill (repeatable) |
| `--no-skills` | Disable skill discovery |
| `--prompt-template <path>` | Load prompt template (repeatable) |
| `--no-prompt-templates` | Disable prompt template discovery |
| `--theme <path>` | Load theme (repeatable) |
| `--no-themes` | Disable theme discovery |

Combine `--no-*` with explicit flags to load exactly what you need, ignoring settings.json (e.g., `--no-extensions -e ./my-ext.ts`).

### Other Options

| Option | Description |
|--------|-------------|
| `--system-prompt <text>` | Replace default prompt (context files and skills still appended) |
| `--append-system-prompt <text>` | Append to system prompt |
| `--verbose` | Force verbose startup |
| `-h`, `--help` | Show help |
| `-v`, `--version` | Show version |

### File Arguments

Prefix files with `@` to include in the message:

```bash
pi @prompt.md "Answer this"
pi -p @screenshot.png "What's in this image?"
pi @code.ts @test.ts "Review these files"
```

### Examples

```bash
# Interactive with initial prompt
pi "List all .ts files in src/"

# Non-interactive
pi -p "Summarize this codebase"

# Non-interactive with piped stdin
cat README.md | pi -p "Summarize this text"

# Different model
pi --provider openai --model gpt-4o "Help me refactor"

# Model with provider prefix (no --provider needed)
pi --model openai/gpt-4o "Help me refactor"

# Model with thinking level shorthand
pi --model sonnet:high "Solve this complex problem"

# Limit model cycling
pi --models "claude-*,gpt-4o"

# Read-only mode
pi --tools read,grep,find,ls -p "Review the code"

# High thinking level
pi --thinking high "Solve this complex problem"
```

### Environment Variables

| Variable | Description |
|----------|-------------|
| `PI_CODING_AGENT_DIR` | Override config directory (default: `~/.pi/agent`) |
| `PI_PACKAGE_DIR` | Override package directory (useful for Nix/Guix where store paths tokenize poorly) |
| `PI_SKIP_VERSION_CHECK` | Skip version check at startup |
| `PI_CACHE_RETENTION` | Set to `long` for extended prompt cache (Anthropic: 1h, OpenAI: 24h) |
| `VISUAL`, `EDITOR` | External editor for Ctrl+G |

---

## Contributing & Development

See [CONTRIBUTING.md](../../CONTRIBUTING.md) for guidelines and [docs/development.md](docs/development.md) for setup, forking, and debugging.

---

## License

MIT

## See Also

- [@mariozechner/pi-ai](https://www.npmjs.com/package/@mariozechner/pi-ai): Core LLM toolkit
- [@mariozechner/pi-agent](https://www.npmjs.com/package/@mariozechner/pi-agent): Agent framework
- [@mariozechner/pi-tui](https://www.npmjs.com/package/@mariozechner/pi-tui): Terminal UI components
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "find skills -maxdepth 3 -type d | head -n 200"
}
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "find skills -maxdepth 4 -type f | rg \"manim|creative\" -n"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "skills/creative/manim-video/SKILL.md"
}
```

## Assistant

**Result** (success):
```
skills
skills/apple
skills/apple/apple-reminders
skills/apple/imessage
skills/apple/findmy
skills/apple/apple-notes
skills/research
skills/research/blogwatcher
skills/research/polymarket
skills/research/polymarket/references
skills/research/polymarket/scripts
skills/research/llm-wiki
skills/research/arxiv
skills/research/arxiv/scripts
skills/research/research-paper-writing
skills/research/research-paper-writing/references
skills/research/research-paper-writing/templates
skills/gaming
skills/gaming/minecraft-modpack-server
skills/gaming/pokemon-player
skills/social-media
skills/social-media/xitter
skills/devops
skills/devops/webhook-subscriptions
skills/leisure
skills/leisure/find-nearby
skills/leisure/find-nearby/scripts
skills/data-science
skills/data-science/jupyter-live-kernel
skills/software-development
skills/software-development/test-driven-development
skills/software-development/systematic-debugging
skills/software-development/plan
skills/software-development/writing-plans
skills/software-development/requesting-code-review
skills/software-development/subagent-driven-development
skills/index-cache
skills/mlops
skills/mlops/research
skills/mlops/research/dspy
skills/mlops/vector-databases
skills/mlops/huggingface-hub
skills/mlops/training
skills/mlops/training/trl-fine-tuning
skills/mlops/training/grpo-rl-training
skills/mlops/training/peft
skills/mlops/training/axolotl
skills/mlops/training/pytorch-fsdp
skills/mlops/training/unsloth
skills/mlops/models
skills/mlops/models/audiocraft
skills/mlops/models/segment-anything
skills/mlops/models/clip
skills/mlops/models/stable-diffusion
skills/mlops/models/whisper
skills/mlops/cloud
skills/mlops/cloud/modal
skills/mlops/inference
skills/mlops/inference/guidance
skills/mlops/inference/gguf
skills/mlops/inference/vllm
skills/mlops/inference/obliteratus
skills/mlops/inference/llama-cpp
skills/mlops/inference/outlines
skills/mlops/evaluation
skills/mlops/evaluation/lm-evaluation-harness
skills/mlops/evaluation/weights-and-biases
skills/inference-sh
skills/mcp
skills/mcp/mcporter
skills/mcp/native-mcp
skills/gifs
skills/feeds
skills/diagramming
skills/github
skills/github/github-auth
skills/github/github-auth/scripts
skills/github/github-repo-management
skills/github/github-repo-management/references
skills/github/github-pr-workflow
skills/github/github-pr-workflow/references
skills/github/github-pr-workflow/templates
skills/github/github-code-review
skills/github/github-code-review/references
skills/github/github-issues
skills/github/github-issues/templates
skills/github/codebase-inspection
skills/note-taking
skills/note-taking/obsidian
skills/red-teaming
skills/red-teaming/godmode
skills/red-teaming/godmode/references
skills/red-teaming/godmode/scripts
skills/red-teaming/godmode/templates
skills/creative
skills/creative/creative-ideation
skills/creative/creative-ideation/references
skills/creative/songwriting-and-ai-music
skills/creative/ascii-video
skills/creative/ascii-video/references
skills/creative/excalidraw
skills/creative/excalidraw/references
skills/creative/excalidraw/scripts
skills/creative/ascii-art
skills/creative/popular-web-designs
skills/creative/popular-web-designs/templates
skills/creative/manim-video
skills/creative/manim-video/references
skills/creative/manim-video/scripts
skills/creative/p5js
skills/creative/p5js/references
skills/creative/p5js/scripts
skills/creative/p5js/templates
skills/domain
skills/email
skills/email/himalaya
skills/email/himalaya/references
skills/smart-home
skills/smart-home/openhue
skills/autonomous-ai-agents
skills/autonomous-ai-agents/claude-code
skills/autonomous-ai-agents/codex
skills/autonomous-ai-agents/hermes-agent
skills/autonomous-ai-agents/opencode
skills/dogfood
skills/dogfood/references
skills/dogfood/templates
skills/productivity
skills/productivity/nano-pdf
skills/productivity/linear
skills/productivity/notion
skills/productivity/notion/references
skills/productivity/powerpoint
skills/productivity/powerpoint/scripts
skills/productivity/google-workspace
skills/productivity/google-workspace/references
skills/productivity/google-workspace/scripts
skills/productivity/ocr-and-documents
skills/productivity/ocr-and-documents/scripts
skills/media
skills/media/youtube-content
skills/media/youtube-content/references
skills/media/youtube-content/scripts
skills/media/heartmula
skills/media/songsee
skills/media/gif-search
```

## Assistant

**Result** (success):
```
111:skills/creative/creative-ideation/references/full-prompt-library.md
112:skills/creative/creative-ideation/SKILL.md
113:skills/creative/songwriting-and-ai-music/SKILL.md
114:skills/creative/ascii-video/references/scenes.md
115:skills/creative/ascii-video/references/architecture.md
116:skills/creative/ascii-video/references/inputs.md
117:skills/creative/ascii-video/references/troubleshooting.md
118:skills/creative/ascii-video/references/shaders.md
119:skills/creative/ascii-video/references/composition.md
120:skills/creative/ascii-video/references/effects.md
121:skills/creative/ascii-video/references/optimization.md
122:skills/creative/ascii-video/README.md
123:skills/creative/ascii-video/SKILL.md
124:skills/creative/excalidraw/references/examples.md
125:skills/creative/excalidraw/references/colors.md
126:skills/creative/excalidraw/references/dark-mode.md
127:skills/creative/excalidraw/scripts/upload.py
128:skills/creative/excalidraw/SKILL.md
129:skills/creative/ascii-art/SKILL.md
130:skills/creative/DESCRIPTION.md
131:skills/creative/popular-web-designs/SKILL.md
132:skills/creative/popular-web-designs/templates/framer.md
133:skills/creative/popular-web-designs/templates/resend.md
134:skills/creative/popular-web-designs/templates/warp.md
135:skills/creative/popular-web-designs/templates/clay.md
136:skills/creative/popular-web-designs/templates/wise.md
137:skills/creative/popular-web-designs/templates/uber.md
138:skills/creative/popular-web-designs/templates/composio.md
139:skills/creative/popular-web-designs/templates/miro.md
140:skills/creative/popular-web-designs/templates/apple.md
141:skills/creative/popular-web-designs/templates/figma.md
142:skills/creative/popular-web-designs/templates/ollama.md
143:skills/creative/popular-web-designs/templates/superhuman.md
144:skills/creative/popular-web-designs/templates/intercom.md
145:skills/creative/popular-web-designs/templates/cursor.md
146:skills/creative/popular-web-designs/templates/airbnb.md
147:skills/creative/popular-web-designs/templates/raycast.md
148:skills/creative/popular-web-designs/templates/linear.app.md
149:skills/creative/popular-web-designs/templates/together.ai.md
150:skills/creative/popular-web-designs/templates/notion.md
151:skills/creative/popular-web-designs/templates/elevenlabs.md
152:skills/creative/popular-web-designs/templates/spotify.md
153:skills/creative/popular-web-designs/templates/cohere.md
154:skills/creative/popular-web-designs/templates/stripe.md
155:skills/creative/popular-web-designs/templates/sentry.md
156:skills/creative/popular-web-designs/templates/expo.md
157:skills/creative/popular-web-designs/templates/vercel.md
158:skills/creative/popular-web-designs/templates/ibm.md
159:skills/creative/popular-web-designs/templates/clickhouse.md
160:skills/creative/popular-web-designs/templates/x.ai.md
161:skills/creative/popular-web-designs/templates/kraken.md
162:skills/creative/popular-web-designs/templates/runwayml.md
163:skills/creative/popular-web-designs/templates/zapier.md
164:skills/creative/popular-web-designs/templates/supabase.md
165:skills/creative/popular-web-designs/templates/minimax.md
166:skills/creative/popular-web-designs/templates/webflow.md
167:skills/creative/popular-web-designs/templates/opencode.ai.md
168:skills/creative/popular-web-designs/templates/mintlify.md
169:skills/creative/popular-web-designs/templates/spacex.md
170:skills/creative/popular-web-designs/templates/mistral.ai.md
171:skills/creative/popular-web-designs/templates/nvidia.md
172:skills/creative/popular-web-designs/templates/sanity.md
173:skills/creative/popular-web-designs/templates/pinterest.md
174:skills/creative/popular-web-designs/templates/bmw.md
175:skills/creative/popular-web-designs/templates/revolut.md
176:skills/creative/popular-web-designs/templates/lovable.md
177:skills/creative/popular-web-designs/templates/posthog.md
178:skills/creative/popular-web-designs/templates/voltagent.md
179:skills/creative/popular-web-designs/templates/coinbase.md
180:skills/creative/popular-web-designs/templates/cal.md
181:skills/creative/popular-web-designs/templates/claude.md
182:skills/creative/popular-web-designs/templates/mongodb.md
183:skills/creative/popular-web-designs/templates/airtable.md
184:skills/creative/popular-web-designs/templates/replicate.md
185:skills/creative/popular-web-designs/templates/hashicorp.md
186:skills/creative/manim-video/references/scene-planning.md
187:skills/creative/manim-video/references/production-quality.md
188:skills/creative/manim-video/references/decorations.md
189:skills/creative/manim-video/references/troubleshooting.md
190:skills/creative/manim-video/references/camera-and-3d.md
191:skills/creative/manim-video/references/equations.md
192:skills/creative/manim-video/references/graphs-and-data.md
193:skills/creative/manim-video/references/visual-design.md
194:skills/creative/manim-video/references/rendering.md
195:skills/creative/manim-video/references/animation-design-thinking.md
196:skills/creative/manim-video/references/animations.md
197:skills/creative/manim-video/references/mobjects.md
198:skills/creative/manim-video/references/updaters-and-trackers.md
199:skills/creative/manim-video/references/paper-explainer.md
200:skills/creative/manim-video/README.md
201:skills/creative/manim-video/scripts/setup.sh
202:skills/creative/manim-video/SKILL.md
203:skills/creative/p5js/references/animation.md
204:skills/creative/p5js/references/troubleshooting.md
205:skills/creative/p5js/references/shapes-and-geometry.md
206:skills/creative/p5js/references/export-pipeline.md
207:skills/creative/p5js/references/core-api.md
208:skills/creative/p5js/references/interaction.md
209:skills/creative/p5js/references/typography.md
210:skills/creative/p5js/references/color-systems.md
211:skills/creative/p5js/references/visual-effects.md
212:skills/creative/p5js/references/webgl-and-3d.md
213:skills/creative/p5js/README.md
214:skills/creative/p5js/scripts/setup.sh
215:skills/creative/p5js/scripts/export-frames.js
216:skills/creative/p5js/scripts/render.sh
217:skills/creative/p5js/scripts/serve.sh
218:skills/creative/p5js/SKILL.md
219:skills/creative/p5js/templates/viewer.html
```

## Assistant

**Result** (success):
```
---
name: manim-video
description: "Production pipeline for mathematical and technical animations using Manim Community Edition. Creates 3Blue1Brown-style explainer videos, algorithm visualizations, equation derivations, architecture diagrams, and data stories. Use when users request: animated explanations, math animations, concept visualizations, algorithm walkthroughs, technical explainers, 3Blue1Brown style videos, or any programmatic animation with geometric/mathematical content."
version: 1.0.0
---

# Manim Video Production Pipeline

## Creative Standard

This is educational cinema. Every frame teaches. Every animation reveals structure.

**Before writing a single line of code**, articulate the narrative arc. What misconception does this correct? What is the "aha moment"? What visual story takes the viewer from confusion to understanding? The user's prompt is a starting point — interpret it with pedagogical ambition.

**Geometry before algebra.** Show the shape first, the equation second. Visual memory encodes faster than symbolic memory. When the viewer sees the geometric pattern before the formula, the equation feels earned.

**First-render excellence is non-negotiable.** The output must be visually clear and aesthetically cohesive without revision rounds. If something looks cluttered, poorly timed, or like "AI-generated slides," it is wrong.

**Opacity layering directs attention.** Never show everything at full brightness. Primary elements at 1.0, contextual elements at 0.4, structural elements (axes, grids) at 0.15. The brain processes visual salience in layers.

**Breathing room.** Every animation needs `self.wait()` after it. The viewer needs time to absorb what just appeared. Never rush from one animation to the next. A 2-second pause after a key reveal is never wasted.

**Cohesive visual language.** All scenes share a color palette, consistent typography sizing, matching animation speeds. A technically correct video where every scene uses random different colors is an aesthetic failure.

## Prerequisites

Run `scripts/setup.sh` to verify all dependencies. Requires: Python 3.10+, Manim Community Edition v0.20+ (`pip install manim`), LaTeX (`texlive-full` on Linux, `mactex` on macOS), and ffmpeg. Reference docs tested against Manim CE v0.20.1.

## Modes

| Mode | Input | Output | Reference |
|------|-------|--------|-----------|
| **Concept explainer** | Topic/concept | Animated explanation with geometric intuition | `references/scene-planning.md` |
| **Equation derivation** | Math expressions | Step-by-step animated proof | `references/equations.md` |
| **Algorithm visualization** | Algorithm description | Step-by-step execution with data structures | `references/graphs-and-data.md` |
| **Data story** | Data/metrics | Animated charts, comparisons, counters | `references/graphs-and-data.md` |
| **Architecture diagram** | System description | Components building up with connections | `references/mobjects.md` |
| **Paper explainer** | Research paper | Key findings and methods animated | `references/scene-planning.md` |
| **3D visualization** | 3D concept | Rotating surfaces, parametric curves, spatial geometry | `references/camera-and-3d.md` |

## Stack

Single Python script per project. No browser, no Node.js, no GPU required.

| Layer | Tool | Purpose |
|-------|------|---------|
| Core | Manim Community Edition | Scene rendering, animation engine |
| Math | LaTeX (texlive/MiKTeX) | Equation rendering via `MathTex` |
| Video I/O | ffmpeg | Scene stitching, format conversion, audio muxing |
| TTS | ElevenLabs / Qwen3-TTS (optional) | Narration voiceover |

## Pipeline

```
PLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW
```

1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script
2. **CODE** — Write `script.py` with one class per scene, each independently renderable
3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production
4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`
5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`
6. **REVIEW** — Render preview stills, verify against plan, adjust
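
The STITCH step can be sketched as a small helper that writes `concat.txt` and builds the ffmpeg concat-demuxer command (the helper and clip paths are illustrative; `-f concat` with `-c copy` is standard ffmpeg usage when all clips share codec settings):

```python
def build_stitch_command(clips, list_path="concat.txt", output="final.mp4"):
    """Write the ffmpeg concat list and return the stitch command."""
    with open(list_path, "w") as f:
        for clip in clips:
            f.write(f"file '{clip}'\n")
    # -c copy avoids re-encoding; all Manim clips share the same settings
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", output]
```

Pass the returned list to `subprocess.run` once all scene clips have rendered.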

## Project Structure

```
project-name/
  plan.md                # Narrative arc, scene breakdown
  script.py              # All scenes in one file
  concat.txt             # ffmpeg scene list
  final.mp4              # Stitched output
  media/                 # Auto-generated by Manim
    videos/script/480p15/
```

## Creative Direction

### Color Palettes

| Palette | Background | Primary | Secondary | Accent | Use case |
|---------|-----------|---------|-----------|--------|----------|
| **Classic 3B1B** | `#1C1C1C` | `#58C4DD` (BLUE) | `#83C167` (GREEN) | `#FFFF00` (YELLOW) | General math/CS |
| **Warm academic** | `#2D2B55` | `#FF6B6B` | `#FFD93D` | `#6BCB77` | Approachable |
| **Neon tech** | `#0A0A0A` | `#00F5FF` | `#FF00FF` | `#39FF14` | Systems, architecture |
| **Monochrome** | `#1A1A2E` | `#EAEAEA` | `#888888` | `#FFFFFF` | Minimalist |

### Animation Speed

| Context | run_time | self.wait() after |
|---------|----------|-------------------|
| Title/intro appear | 1.5s | 1.0s |
| Key equation reveal | 2.0s | 2.0s |
| Transform/morph | 1.5s | 1.5s |
| Supporting label | 0.8s | 0.5s |
| FadeOut cleanup | 0.5s | 0.3s |
| "Aha moment" reveal | 2.5s | 3.0s |

### Typography Scale

| Role | Font size | Usage |
|------|-----------|-------|
| Title | 48 | Scene titles, opening text |
| Heading | 36 | Section headers within a scene |
| Body | 30 | Explanatory text |
| Label | 24 | Annotations, axis labels |
| Caption | 20 | Subtitles, fine print |

### Fonts

**Use monospace fonts for all text.** Manim's Pango renderer produces broken kerning with proportional fonts at all sizes. See `references/visual-design.md` for full recommendations.

```python
MONO = "Menlo"  # define once at top of file

Text("Fourier Series", font_size=48, font=MONO, weight=BOLD)  # titles
Text("n=1: sin(x)", font_size=20, font=MONO)                  # labels
MathTex(r"\nabla L")                                            # math (uses LaTeX)
```

Minimum `font_size=18` for readability.

### Per-Scene Variation

Never use identical config for all scenes. For each scene:
- **Different dominant color** from the palette
- **Different layout** — don't always center everything
- **Different animation entry** — vary between Write, FadeIn, GrowFromCenter, Create
- **Different visual weight** — some scenes dense, others sparse
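One lightweight way to enforce this during planning is to rotate entry styles and dominant colors across the scene list (names here are illustrative):

```python
from itertools import cycle

ENTRIES = cycle(["Write", "FadeIn", "GrowFromCenter", "Create"])
COLORS = cycle(["#58C4DD", "#83C167", "#FFFF00"])  # rotate the dominant palette color

scene_plan = [
    {"name": f"Scene{i}", "entry": next(ENTRIES), "color": next(COLORS)}
    for i in range(1, 5)
]
# Adjacent scenes now differ in both entry animation and dominant color.
```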

## Workflow

### Step 1: Plan (plan.md)

Before any code, write `plan.md`. See `references/scene-planning.md` for the comprehensive template.

### Step 2: Code (script.py)

One class per scene. Every scene is independently renderable.

```python
from manim import *

BG = "#1C1C1C"
PRIMARY = "#58C4DD"
SECONDARY = "#83C167"
ACCENT = "#FFFF00"
MONO = "Menlo"

class Scene1_Introduction(Scene):
    def construct(self):
        self.camera.background_color = BG
        title = Text("Why Does This Work?", font_size=48, color=PRIMARY, weight=BOLD, font=MONO)
        self.add_subcaption("Why does this work?", duration=2)
        self.play(Write(title), run_time=1.5)
        self.wait(1.0)
        self.play(FadeOut(title), run_time=0.5)
```

Key patterns:
- **Subtitles** on every animation: `self.add_subcaption("text", duration=N)` or `subcaption="text"` on `self.play()`
- **Shared color constants** at file top for cross-scene consistency
- **`self.camera.background_color`** set in every scene
- **Clean exits** — FadeOut all mobjects at scene end: `self.play(FadeOut(Group(*self.mobjects)))`

### Step 3: Render

```bash
manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft
manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production
```

### Step 4: Stitch

```bash
cat > concat.txt << 'EOF'
file 'media/videos/script/480p15/Scene1_Introduction.mp4'
file 'media/videos/script/480p15/Scene2_CoreConcept.mp4'
EOF
ffmpeg -y -f concat -safe 0 -i concat.txt -c copy final.mp4
```
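The heredoc can also be generated from the scene list, so `concat.txt` never drifts out of plan order; the paths assume the default `-ql` media layout shown under Project Structure:

```python
from pathlib import Path

SCENES = ["Scene1_Introduction", "Scene2_CoreConcept"]  # keep in plan order
MEDIA = Path("media/videos/script/480p15")

lines = [f"file '{MEDIA / (name + '.mp4')}'" for name in SCENES]
Path("concat.txt").write_text("\n".join(lines) + "\n")
```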

### Step 5: Review

```bash
manim -ql --format=png -s script.py Scene2_CoreConcept  # preview still
```

## Critical Implementation Notes

### Raw Strings for LaTeX
```python
# WRONG: MathTex("\frac{1}{2}")
# RIGHT:
MathTex(r"\frac{1}{2}")
```

### buff >= 0.5 for Edge Text
```python
label.to_edge(DOWN, buff=0.5)  # never < 0.5
```

### FadeOut Before Replacing Text
```python
# WRONG: Write(note2) on top of note1 leaves both visible
# RIGHT: replace the old text in one step (or FadeOut note1 first):
self.play(ReplacementTransform(note1, note2))
```

### Never Animate Non-Added Mobjects
```python
self.play(Create(circle))                  # Create draws the circle and adds it to the scene
self.play(circle.animate.set_color(RED))  # safe to animate now that it is on screen
```

## Performance Targets

| Quality | Resolution | FPS | Speed |
|---------|-----------|-----|-------|
| `-ql` (draft) | 854x480 | 15 | 5-15s/scene |
| `-qm` (medium) | 1280x720 | 30 | 15-60s/scene |
| `-qh` (production) | 1920x1080 | 60 | 30-120s/scene |

Always iterate at `-ql`. Only render `-qh` for final output.
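The iterate-at-draft rule can be made explicit in a render script; the constants mirror the table above and the helper is illustrative:

```python
# Quality presets: (manim flag, resolution, fps), from the table above.
QUALITY = {
    "draft": ("-ql", (854, 480), 15),
    "medium": ("-qm", (1280, 720), 30),
    "production": ("-qh", (1920, 1080), 60),
}

def quality_flag(final=False):
    """Return the manim flag: draft for iteration, production only for final output."""
    return QUALITY["production" if final else "draft"][0]
```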

## References

| File | Contents |
|------|----------|
| `references/animations.md` | Core animations, rate functions, composition, `.animate` syntax, timing patterns |
| `references/mobjects.md` | Text, shapes, VGroup/Group, positioning, styling, custom mobjects |
| `references/visual-design.md` | 12 design principles, opacity layering, layout templates, color palettes |
| `references/equations.md` | LaTeX in Manim, TransformMatchingTex, derivation patterns |
| `references/graphs-and-data.md` | Axes, plotting, BarChart, animated data, algorithm visualization |
| `references/camera-and-3d.md` | MovingCameraScene, ThreeDScene, 3D surfaces, camera control |
| `references/scene-planning.md` | Narrative arcs, layout templates, scene transitions, planning template |
| `references/rendering.md` | CLI reference, quality presets, ffmpeg, voiceover workflow, GIF export |
| `references/troubleshooting.md` | LaTeX errors, animation errors, common mistakes, debugging |
| `references/animation-design-thinking.md` | When to animate vs show static, decomposition, pacing, narration sync |
| `references/updaters-and-trackers.md` | ValueTracker, add_updater, always_redraw, time-based updaters, patterns |
| `references/paper-explainer.md` | Turning research papers into animations — workflow, templates, domain patterns |
| `references/decorations.md` | SurroundingRectangle, Brace, arrows, DashedLine, Angle, annotation lifecycle |
| `references/production-quality.md` | Pre-code, pre-render, post-render checklists, spatial layout, color, tempo |

---

## Creative Divergence (use only when user requests experimental/creative/unique output)

If the user asks for creative, experimental, or unconventional explanatory approaches, select a strategy and reason through it BEFORE designing the animation.

- **SCAMPER** — when the user wants a fresh take on a standard explanation
- **Assumption Reversal** — when the user wants to challenge how something is typically taught

### SCAMPER Transformation
Take a standard mathematical/technical visualization and transform it:
- **Substitute**: replace the standard visual metaphor (number line → winding path, matrix → city grid)
- **Combine**: merge two explanation approaches (algebraic + geometric simultaneously)
- **Reverse**: derive backward — start from the result and deconstruct to axioms
- **Modify**: exaggerate a parameter to show why it matters (10x the learning rate, 1000x the sample size)
- **Eliminate**: remove all notation — explain purely through animation and spatial relationships

### Assumption Reversal
1. List what's "standard" about how this topic is visualized (left-to-right, 2D, discrete steps, formal notation)
2. Pick the most fundamental assumption
3. Reverse it (right-to-left derivation, 3D embedding of a 2D concept, continuous morphing instead of steps, zero notation)
4. Explore what the reversal reveals that the standard approach hides
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "rg -n \"resources_discover|skillPaths\" /opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions -S"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/skills.ts"
}
```

## Assistant

**Result** (success):
```
/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/dynamic-resources/index.ts:8:	pi.on("resources_discover", () => {
/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/dynamic-resources/index.ts:10:			skillPaths: [join(baseDir, "SKILL.md")],
/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/dynamic-resources/SKILL.md:3:description: Example skill loaded from resources_discover
/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/dynamic-resources/dynamic.md:2:description: Example prompt template loaded from resources_discover
/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/README.md:102:| `dynamic-resources/` | Loads skills, prompts, and themes using `resources_discover` |
```

## Assistant

**Result** (error):
```
ENOENT: no such file or directory, access '/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/skills.ts'
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/dynamic-resources/index.ts"
}
```

## Assistant

**Result** (success):
```
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";

const baseDir = dirname(fileURLToPath(import.meta.url));

export default function (pi: ExtensionAPI) {
	pi.on("resources_discover", () => {
		return {
			skillPaths: [join(baseDir, "SKILL.md")],
			promptPaths: [join(baseDir, "dynamic.md")],
			themePaths: [join(baseDir, "dynamic.json")],
		};
	});
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/docs/skills.md"
}
```

## Assistant

**Result** (success):
```
> pi can create skills. Ask it to build one for your use case.

# Skills

Skills are self-contained capability packages that the agent loads on-demand. A skill provides specialized workflows, setup instructions, helper scripts, and reference documentation for specific tasks.

Pi implements the [Agent Skills standard](https://agentskills.io/specification), warning about violations but remaining lenient.

## Table of Contents

- [Locations](#locations)
- [How Skills Work](#how-skills-work)
- [Skill Commands](#skill-commands)
- [Skill Structure](#skill-structure)
- [Frontmatter](#frontmatter)
- [Validation](#validation)
- [Example](#example)
- [Skill Repositories](#skill-repositories)

## Locations

> **Security:** Skills can instruct the model to perform any action and may include executable code the model invokes. Review skill content before use.

Pi loads skills from:

- Global:
  - `~/.pi/agent/skills/`
  - `~/.agents/skills/`
- Project:
  - `.pi/skills/`
  - `.agents/skills/` in `cwd` and ancestor directories (up to git repo root, or filesystem root when not in a repo)
- Packages: `skills/` directories or `pi.skills` entries in `package.json`
- Settings: `skills` array with files or directories
- CLI: `--skill <path>` (repeatable, additive even with `--no-skills`)

Discovery rules:
- In `~/.pi/agent/skills/` and `.pi/skills/`, direct root `.md` files are discovered as individual skills
- In all skill locations, directories containing `SKILL.md` are discovered recursively
- In `~/.agents/skills/` and project `.agents/skills/`, root `.md` files are ignored

Disable discovery with `--no-skills` (explicit `--skill` paths still load).

### Using Skills from Other Harnesses

To use skills from Claude Code or OpenAI Codex, add their directories to settings:

```json
{
  "skills": [
    "~/.claude/skills",
    "~/.codex/skills"
  ]
}
```

For project-level Claude Code skills, add to `.pi/settings.json`:

```json
{
  "skills": ["../.claude/skills"]
}
```

## How Skills Work

1. At startup, pi scans skill locations and extracts names and descriptions
2. The system prompt includes available skills in XML format per the [specification](https://agentskills.io/integrate-skills)
3. When a task matches, the agent uses `read` to load the full SKILL.md (models don't always do this; use prompting or `/skill:name` to force it)
4. The agent follows the instructions, using relative paths to reference scripts and assets

This is progressive disclosure: only descriptions are always in context, full instructions load on-demand.

## Skill Commands

Skills register as `/skill:name` commands:

```bash
/skill:brave-search           # Load and execute the skill
/skill:pdf-tools extract      # Load skill with arguments
```

Arguments after the command are appended to the skill content as `User: <args>`.

Toggle skill commands via `/settings` in interactive mode or in `settings.json`:

```json
{
  "enableSkillCommands": true
}
```

## Skill Structure

A skill is a directory with a `SKILL.md` file. Everything else is freeform.

```
my-skill/
├── SKILL.md              # Required: frontmatter + instructions
├── scripts/              # Helper scripts
│   └── process.sh
├── references/           # Detailed docs loaded on-demand
│   └── api-reference.md
└── assets/
    └── template.json
```

### SKILL.md Format

```markdown
---
name: my-skill
description: What this skill does and when to use it. Be specific.
---

# My Skill

## Setup

Run once before first use:
\`\`\`bash
cd /path/to/skill && npm install
\`\`\`

## Usage

\`\`\`bash
./scripts/process.sh <input>
\`\`\`
```

Use relative paths from the skill directory:

```markdown
See [the reference guide](references/REFERENCE.md) for details.
```

## Frontmatter

Per the [Agent Skills specification](https://agentskills.io/specification#frontmatter-required):

| Field | Required | Description |
|-------|----------|-------------|
| `name` | Yes | Max 64 chars. Lowercase a-z, 0-9, hyphens. Must match parent directory. |
| `description` | Yes | Max 1024 chars. What the skill does and when to use it. |
| `license` | No | License name or reference to bundled file. |
| `compatibility` | No | Max 500 chars. Environment requirements. |
| `metadata` | No | Arbitrary key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools (experimental). |
| `disable-model-invocation` | No | When `true`, skill is hidden from system prompt. Users must use `/skill:name`. |

### Name Rules

- 1-64 characters
- Lowercase letters, numbers, hyphens only
- No leading/trailing hyphens
- No consecutive hyphens
- Must match parent directory name

Valid: `pdf-processing`, `data-analysis`, `code-review`
Invalid: `PDF-Processing`, `-pdf`, `pdf--processing`

### Description Best Practices

The description determines when the agent loads the skill. Be specific.

Good:
```yaml
description: Extracts text and tables from PDF files, fills PDF forms, and merges multiple PDFs. Use when working with PDF documents.
```

Poor:
```yaml
description: Helps with PDFs.
```

## Validation

Pi validates skills against the Agent Skills standard. Most issues produce warnings but still load the skill:

- Name doesn't match parent directory
- Name exceeds 64 characters or contains invalid characters
- Name starts/ends with hyphen or has consecutive hyphens
- Description exceeds 1024 characters

Unknown frontmatter fields are ignored.

**Exception:** Skills with missing description are not loaded.

Name collisions (same name from different locations) warn and keep the first skill found.

## Example

```
brave-search/
├── SKILL.md
├── search.js
└── content.js
```

**SKILL.md:**
```markdown
---
name: brave-search
description: Web search and content extraction via Brave Search API. Use for searching documentation, facts, or any web content.
---

# Brave Search

## Setup

\`\`\`bash
cd /path/to/brave-search && npm install
\`\`\`

## Search

\`\`\`bash
./search.js "query"              # Basic search
./search.js "query" --content    # Include page content
\`\`\`

## Extract Page Content

\`\`\`bash
./content.js https://example.com
\`\`\`
```

## Skill Repositories

- [Anthropic Skills](https://github.com/anthropics/skills) - Document processing (docx, pdf, pptx, xlsx), web development
- [Pi Skills](https://github.com/badlogic/pi-skills) - Web search, browser automation, Google APIs, transcription
```

## User

lets not use @skills/creative/manim-video/scripts/setup.sh directly. 
the requirements are installed. lets use a lighter version that will use uv for the python and the manim executable
and then checks for the other two, but just asks the user to install manually instead of just installing shit 

we'll model this as a simple pi extension that works with /manim



given these instructions, can you port everything under @skills/creative/manim-video/ to a new pi extension? do it in a subrepo in ~/dev/
use `pnpm dlx gh-axi` to create a new repo and then push to it
call it pi-manim-video
install it via `pi install git:[REDACTED]:user/repo` 
and then `pi update <same string>` to update as we patch

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "skills/creative/manim-video/README.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "skills/creative/manim-video/scripts/setup.sh"
}
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "find skills/creative/manim-video -maxdepth 3 -type f | sort"
}
```

## Assistant

**Result** (success):
```
# Manim Video Skill

Production pipeline for mathematical and technical animations using [Manim Community Edition](https://www.manim.community/).

## What it does

Creates 3Blue1Brown-style animated videos from text prompts. The agent handles the full pipeline: creative planning, Python code generation, rendering, scene stitching, and iterative refinement.

## Use cases

- **Concept explainers** — "Explain how neural networks learn"
- **Equation derivations** — "Animate the proof of the Pythagorean theorem"
- **Algorithm visualizations** — "Show how quicksort works step by step"
- **Data stories** — "Animate our before/after performance metrics"
- **Architecture diagrams** — "Show our microservice architecture building up"

## Prerequisites

Python 3.10+, Manim CE (`pip install manim`), LaTeX, ffmpeg.

```bash
bash skills/creative/manim-video/scripts/setup.sh
```
```

## Assistant

**Result** (success):
```
#!/usr/bin/env bash
set -euo pipefail
G="\033[0;32m"; R="\033[0;31m"; N="\033[0m"
ok() { echo -e "  ${G}+${N} $1"; }
fail() { echo -e "  ${R}x${N} $1"; }
echo ""; echo "Manim Video Skill — Setup Check"; echo ""
errors=0
command -v python3 &>/dev/null && ok "Python $(python3 --version 2>&1 | awk '{print $2}')" || { fail "Python 3 not found"; errors=$((errors+1)); }
python3 -c "import manim" 2>/dev/null && ok "Manim $(manim --version 2>&1 | head -1)" || { fail "Manim not installed: pip install manim"; errors=$((errors+1)); }
command -v pdflatex &>/dev/null && ok "LaTeX (pdflatex)" || { fail "LaTeX not found (macOS: brew install --cask mactex-no-gui)"; errors=$((errors+1)); }
command -v ffmpeg &>/dev/null && ok "ffmpeg" || { fail "ffmpeg not found"; errors=$((errors+1)); }
echo ""
[ $errors -eq 0 ] && echo -e "${G}All prerequisites satisfied.${N}" || echo -e "${R}$errors prerequisite(s) missing.${N}"
echo ""
```

## Assistant

**Result** (success):
```
skills/creative/manim-video/README.md
skills/creative/manim-video/references/animation-design-thinking.md
skills/creative/manim-video/references/animations.md
skills/creative/manim-video/references/camera-and-3d.md
skills/creative/manim-video/references/decorations.md
skills/creative/manim-video/references/equations.md
skills/creative/manim-video/references/graphs-and-data.md
skills/creative/manim-video/references/mobjects.md
skills/creative/manim-video/references/paper-explainer.md
skills/creative/manim-video/references/production-quality.md
skills/creative/manim-video/references/rendering.md
skills/creative/manim-video/references/scene-planning.md
skills/creative/manim-video/references/troubleshooting.md
skills/creative/manim-video/references/updaters-and-trackers.md
skills/creative/manim-video/references/visual-design.md
skills/creative/manim-video/scripts/setup.sh
skills/creative/manim-video/SKILL.md
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pnpm dlx gh-axi --help",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
usage: gh-axi [command] [args] [flags]
commands[10]:
  (none)=dashboard, issue, pr, run, workflow, release, repo, label, search, api
flags[3]:
  -R/--repo <OWNER/NAME> (after command), --help, -v/-V/--version
examples:
  gh-axi
  gh-axi issue list --state open
  gh-axi issue list -R owner/name
  gh-axi issue list --repo owner/name
  gh-axi pr view 42
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pnpm dlx gh-axi repo --help",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
usage: gh-axi repo <subcommand> [flags]
subcommands[6]:
  view, create <name>, edit, clone <repo>, fork [repo], list [owner]
flags{create}:
  --public, --private, --internal, --description, --clone, --template
flags{edit}:
  --description, --visibility, --default-branch, --enable-issues, --enable-wiki
flags{fork}:
  --clone, --remote
flags{list}:
  --limit <n> (default 30), --visibility, --language, --archived
examples:
  gh-axi repo view
  gh-axi repo create my-project --public --description "A new project"
  gh-axi repo list --visibility public --language TypeScript
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "gh auth status",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
github.com
  ✓ Logged in to github.com account maujim (keyring)
  - Active account: true
  - Git operations protocol: ssh
  - Token: gho_************************************
  - Token scopes: 'admin:public_key', 'gist', 'read:org', 'repo'
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "ls -la ~/dev"
}
```

## Assistant

**Result** (success):
```
total 24
drwxr-xr-x@  60 mukund  staff   1920 Apr 11 02:34 .
drwxr-x---+ 148 mukund  staff   4736 Apr 11 02:00 ..
-rw-r--r--@   1 mukund  staff  10244 Jan 23 02:41 .DS_Store
drwxr-xr-x@  24 mukund  staff    768 Dec 29 04:12 2405.15943
drwxr-xr-x@  17 mukund  staff    544 Dec 18 00:33 agent
drwxr-xr-x@   7 mukund  staff    224 Oct 23 17:50 antibody-mvp
drwxr-xr-x@  19 mukund  staff    608 Dec 31 02:12 arxiv-search
drwxr-xr-x@  13 mukund  staff    416 Dec 18 04:09 ascii-shader
drwxr-xr-x@  10 mukund  staff    320 Dec 24 00:22 baguettotron-brie
drwxr-xr-x@  12 mukund  staff    384 Jan 22 14:06 bookshelf-3d
drwxr-xr-x@  20 mukund  staff    640 Jan  5 19:49 browser-use-test
drwxr-xr-x@  26 mukund  staff    832 Dec 17 23:41 config
drwxr-xr-x@   6 mukund  staff    192 Jan  3 23:15 config.ai
drwxr-xr-x@   6 mukund  staff    192 Dec 10 00:38 crafting-interpreters
drwxr-xr-x@  18 mukund  staff    576 Jan  4 02:14 DensePose-ESP32
drwxr-xr-x@  20 mukund  staff    640 Mar 18 03:43 depth-anything-3
drwxr-xr-x@  35 mukund  staff   1120 Apr  5 17:21 dimos
drwxr-xr-x@  17 mukund  staff    544 Dec 31 01:23 discord-bot
drwxr-xr-x@   2 mukund  staff     64 Dec  4  2023 env
drwxr-xr-x@  48 mukund  staff   1536 Dec 31 06:42 esp-idf
drwxr-xr-x@  47 mukund  staff   1504 Apr  5 14:26 exo
drwxr-xr-x@   6 mukund  staff    192 Dec 22 18:37 game-of-life
drwxr-xr-x@  37 mukund  staff   1184 Oct 22 16:35 ghostty-shaders
drwxr-xr-x@  13 mukund  staff    416 Apr  5 16:56 grove
drwxr-xr-x@   7 mukund  staff    224 Apr  5 01:28 hermes-home
drwxr-xr-x@  20 mukund  staff    640 Nov 18 09:09 jefferson-demo
drwxr-xr-x@  25 mukund  staff    800 Dec 22 18:22 llamafile
drwxr-xr-x@  19 mukund  staff    608 Mar 28 22:43 M2T2
drwxr-xr-x@  16 mukund  staff    512 Jan 14 21:15 maujim.github.io
drwxr-xr-x@   2 mukund  staff     64 Jan 22 19:11 ml
drwxr-xr-x@   8 mukund  staff    256 Mar 18 00:47 mockshell
drwxr-xr-x@   8 mukund  staff    256 Mar 11 16:02 myfirstevals
drwxr-xr-x@  21 mukund  staff    672 Feb  5 21:28 nanochat
drwxr-xr-x@  90 mukund  staff   2880 Jan 17 21:38 notebooks
drwxr-xr-x@  14 mukund  staff    448 Mar 20 06:06 parameter-golf
drwxr-xr-x@   9 mukund  staff    288 Mar 15 00:48 pi-extension-01
drwxr-xr-x@   8 mukund  staff    256 Apr 11 02:03 pi-local-model-discovery
drwxr-xr-x@  21 mukund  staff    672 Apr  5 03:44 pi-mono
drwxr-xr-x@   5 mukund  staff    160 Apr  8 02:14 pi-thinking
drwx------@  15 mukund  staff    480 Jan 20 23:06 pocket-tts-extension
drwxr-xr-x@  10 mukund  staff    320 Nov  2 23:37 prompts-extension
drwxr-xr-x@   2 mukund  staff     64 Nov 18 12:59 protein-viewer
drwxr-xr-x@   3 mukund  staff     96 Jan  3 01:47 ralph-details
drwxr-xr-x@  14 mukund  staff    448 Dec 10 00:38 raytracing-book
drwxr-xr-x@   3 mukund  staff     96 Apr  5 18:57 robot-1
drwxr-xr-x@  12 mukund  staff    384 Jan 28 02:15 sharp-film
drwxr-xr-x@  17 mukund  staff    544 Nov 12 17:06 sketch-threejs
drwxr-xr-x@  10 mukund  staff    320 Jan  3 23:25 skills
drwxr-xr-x@  10 mukund  staff    320 Mar 14 23:55 slate-test
drwxr-xr-x@  19 mukund  staff    608 Jan 11 00:52 spfresh-mini
drwxr-xr-x@  20 mukund  staff    640 Dec  1 11:13 tigris-bio-website
drwxr-xr-x@   7 mukund  staff    224 Jan 22 17:22 tmp
drwxr-xr-x@  17 mukund  staff    544 Jan  3 23:46 tpuf2
drwxr-xr-x@   3 mukund  staff     96 Apr  8 15:18 traces
drwxr-xr-x@   5 mukund  staff    160 Jan 23 03:19 traffic-light-pcb
drwxr-xr-x@   6 mukund  staff    192 Oct 22 16:14 tsfm
drwxr-xr-x@  13 mukund  staff    416 Jan 20 23:53 twitter-article-narrator
drwxr-xr-x@   9 mukund  staff    288 Nov  6 14:20 twitter-header-hider
drwxr-xr-x@   8 mukund  staff    256 Jan 27 13:34 vector-db-python
drwxr-xr-x@  15 mukund  staff    480 Dec 19 03:43 vim-markdown-composer
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "find ~/dev/pi-extension-01 -maxdepth 3 -type f | sort"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-extension-01/package.json"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-extension-01/README.md"
}
```

## Assistant

**Result** (success):
```
/Users/mukund/dev/pi-extension-01/.git/COMMIT_EDITMSG
/Users/mukund/dev/pi-extension-01/.git/config
/Users/mukund/dev/pi-extension-01/.git/description
/Users/mukund/dev/pi-extension-01/.git/HEAD
/Users/mukund/dev/pi-extension-01/.git/hooks/applypatch-msg.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/commit-msg.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/fsmonitor-watchman.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/post-update.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-applypatch.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-commit.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-merge-commit.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-push.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-rebase.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/pre-receive.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/prepare-commit-msg.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/push-to-checkout.sample
/Users/mukund/dev/pi-extension-01/.git/hooks/update.sample
/Users/mukund/dev/pi-extension-01/.git/index
/Users/mukund/dev/pi-extension-01/.git/info/exclude
/Users/mukund/dev/pi-extension-01/.git/logs/HEAD
/Users/mukund/dev/pi-extension-01/.pi/weave/2050bd13-4ad9-41d9-b541-3158b7479841.json
/Users/mukund/dev/pi-extension-01/.pi/weave/464f9d01-4221-4b41-b88f-4f4595516da0.json
/Users/mukund/dev/pi-extension-01/.pi/weave/8d659621-92f1-44bc-8294-614fe153039b.json
/Users/mukund/dev/pi-extension-01/.pi/weave/9314e9b4-cdd1-481f-b50a-163339435fbb.json
/Users/mukund/dev/pi-extension-01/.pi/weave/a839288f-12a7-4154-ad45-6de899530c7f.json
/Users/mukund/dev/pi-extension-01/.pi/weave/af5fbf53-a6ba-4705-987a-8e7e6c46e0b5.json
/Users/mukund/dev/pi-extension-01/package.json
/Users/mukund/dev/pi-extension-01/README.md
/Users/mukund/dev/pi-extension-01/robot-harness-extension/docs/4-outlines.md
/Users/mukund/dev/pi-extension-01/robot-harness-extension/package.json
/Users/mukund/dev/pi-extension-01/robot-harness-extension/README.md
/Users/mukund/dev/pi-extension-01/robot-harness-extension/research/T1_search_context.md
/Users/mukund/dev/pi-extension-01/robot-harness-extension/src/index.ts
/Users/mukund/dev/pi-extension-01/robot-harness-extension/src/types.ts
/Users/mukund/dev/pi-extension-01/robot-harness-extension/T4-validation-test-rollout-plan.md
/Users/mukund/dev/pi-extension-01/TODO_WEAVE_UI.md
/Users/mukund/dev/pi-extension-01/unified-prd.html
```

## Assistant

**Result** (success):
```
{
  "name": "@pi-labs/thread-weaving",
  "version": "0.1.0",
  "private": true,
  "description": "Pi Weave extension: thread weaving + episodes for pi",
  "keywords": [
    "pi-package"
  ],
  "pi": {
    "extensions": [
      "./extensions/pi-weave/index.ts"
    ]
  }
}
```

## Assistant

**Result** (success):
```
# Pi Weave (`@pi-labs/thread-weaving`)

Extension-only implementation of **Thread Weaving + Episodes** for pi.

## Features implemented

- `/weave "goal"` and `/weave start <goal>` now orchestrate **fanout(4)** threads (2 search + 2 coding) and then auto-run a synthesis thread
- `/weave fixture <path>` starts fanout from a saved goal/prompt file
- `/weave clear` clears/hides current weave panel state
- `/thread run <name> --goal "..." --input E1,E3 --kind search|coding|synthesis --model <hint>`
- `/episode list`, `/episode show <id>`, `/episode approve <id>`
- `/weave compose E1,E2 --goal "..."`
- `/weave export` (JSON + HTML lineage report)
- Canonical episode schema with low-confidence gating
- Episode persistence in `.pi/weave/<sessionId>.json`
- Episode custom messages in session tree (`customType: pi-weave-episode`)
- Focus Mode UI:
  - persistent status bar (`setStatus`)
  - live thread/episode panel widget (`setWidget`)
  - episode inspector overlay (`Ctrl+Alt+E`)
  - panel toggle shortcut (`Ctrl+Alt+W`)

## Load in pi

From this repo:

```bash
pi -e ./extensions/pi-weave/index.ts
```

Or install as a package and rely on `pi.extensions` manifest in `package.json`.

## Notes

- Thread execution uses agent prompts and expects the model to call `weave_emit_episode`.
- If the model does not emit a structured episode, a fallback low-confidence episode is auto-generated and flagged for review.
- Low-confidence episodes are blocked from `/weave compose` until approved.
- Reusable test prompts are in `robot-harness-extension/test-prompts/`.
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/docs/packages.md"
}
```

## Assistant

**Result** (success):
```
> pi can help you create pi packages. Ask it to bundle your extensions, skills, prompt templates, or themes.

# Pi Packages

Pi packages bundle extensions, skills, prompt templates, and themes so you can share them through npm or git. A package can declare resources in `package.json` under the `pi` key, or use conventional directories.

## Table of Contents

- [Install and Manage](#install-and-manage)
- [Package Sources](#package-sources)
- [Creating a Pi Package](#creating-a-pi-package)
- [Package Structure](#package-structure)
- [Dependencies](#dependencies)
- [Package Filtering](#package-filtering)
- [Enable and Disable Resources](#enable-and-disable-resources)
- [Scope and Deduplication](#scope-and-deduplication)

## Install and Manage

> **Security:** Pi packages run with full system access. Extensions execute arbitrary code, and skills can instruct the model to perform any action including running executables. Review source code before installing third-party packages.

```bash
pi install npm:@foo/bar@1.0.0
pi install git:github.com/user/repo@v1
pi install https://github.com/user/repo  # raw URLs work too
pi install /absolute/path/to/package
pi install ./relative/path/to/package

pi remove npm:@foo/bar
pi list    # show installed packages from settings
pi update  # update all non-pinned packages
```

By default, `install` and `remove` write to global settings (`~/.pi/agent/settings.json`). Use `-l` to write to project settings (`.pi/settings.json`) instead. Project settings can be shared with your team, and pi installs any missing packages automatically on startup.

To try a package without installing it, use `--extension` or `-e`. This installs to a temporary directory for the current run only:

```bash
pi -e npm:@foo/bar
pi -e git:github.com/user/repo
```

## Package Sources

Pi accepts three source types in settings and `pi install`.

### npm

```
npm:@scope/pkg@1.2.3
npm:pkg
```

- Versioned specs are pinned and skipped by `pi update`.
- Global installs use `npm install -g`.
- Project installs go under `.pi/npm/`.
- Set `npmCommand` in `settings.json` to pin npm package lookup and install operations to a specific wrapper command such as `mise` or `asdf`.

Example:

```json
{
  "npmCommand": ["mise", "exec", "node@20", "--", "npm"]
}
```

### git

```
git:github.com/user/repo@v1
git:[REDACTED]:user/repo@v1
https://github.com/user/repo@v1
ssh://git@github.com/user/repo@v1
```

- Without `git:` prefix, only protocol URLs are accepted (`https://`, `http://`, `ssh://`, `git://`).
- With `git:` prefix, shorthand formats are accepted, including `github.com/user/repo` and `[REDACTED]:user/repo`.
- HTTPS and SSH URLs are both supported.
- SSH URLs use your configured SSH keys automatically (respects `~/.ssh/config`).
- For non-interactive runs (for example CI), you can set `GIT_TERMINAL_PROMPT=0` to disable credential prompts and set `GIT_SSH_COMMAND` (for example `ssh -o BatchMode=yes -o ConnectTimeout=5`) to fail fast.
- Refs pin the package and skip `pi update`.
- Cloned to `~/.pi/agent/git/<host>/<path>` (global) or `.pi/git/<host>/<path>` (project).
- Runs `npm install` after clone or pull if `package.json` exists.

**SSH examples:**
```bash
# git@host:path shorthand (requires git: prefix)
pi install git:[REDACTED]:user/repo

# ssh:// protocol format
pi install ssh://git@github.com/user/repo

# With version ref
pi install git:[REDACTED]:user/repo@v1.0.0
```

### Local Paths

```
/absolute/path/to/package
./relative/path/to/package
```

Local paths point to files or directories on disk and are added to settings without copying. Relative paths are resolved against the settings file they appear in. If the path is a file, it loads as a single extension. If it is a directory, pi loads resources using package rules.

## Creating a Pi Package

Add a `pi` manifest to `package.json` or use conventional directories. Include the `pi-package` keyword for discoverability.

```json
{
  "name": "my-package",
  "keywords": ["pi-package"],
  "pi": {
    "extensions": ["./extensions"],
    "skills": ["./skills"],
    "prompts": ["./prompts"],
    "themes": ["./themes"]
  }
}
```

Paths are relative to the package root. Arrays support glob patterns and `!exclusions`.
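For instance, a manifest might load every extension except a legacy subtree (directory names hypothetical):

```json
{
  "name": "my-package",
  "keywords": ["pi-package"],
  "pi": {
    "extensions": ["./extensions/*.ts", "!./extensions/legacy/*.ts"],
    "skills": ["./skills"]
  }
}
```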

### Gallery Metadata

The [package gallery](https://shittycodingagent.ai/packages) displays packages tagged with `pi-package`. Add `video` or `image` fields to show a preview:

```json
{
  "name": "my-package",
  "keywords": ["pi-package"],
  "pi": {
    "extensions": ["./extensions"],
    "video": "https://example.com/demo.mp4",
    "image": "https://example.com/screenshot.png"
  }
}
```

- **video**: MP4 only. On desktop, autoplays on hover. Clicking opens a fullscreen player.
- **image**: PNG, JPEG, GIF, or WebP. Displayed as a static preview.

If both are set, video takes precedence.

## Package Structure

### Convention Directories

If no `pi` manifest is present, pi auto-discovers resources from these directories:

- `extensions/` loads `.ts` and `.js` files
- `skills/` recursively finds `SKILL.md` folders and loads top-level `.md` files as skills
- `prompts/` loads `.md` files
- `themes/` loads `.json` files
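For example, a package that relies purely on these conventions might be laid out like this (hypothetical layout):

```
my-package/
├── package.json        # no "pi" manifest needed
├── extensions/
│   └── logger.ts       # loaded as an extension
├── skills/
│   └── my-skill/
│       └── SKILL.md    # loaded as a skill
├── prompts/
│   └── review.md       # loaded as a prompt template
└── themes/
    └── my-theme.json   # loaded as a theme
```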

## Dependencies

Third-party runtime dependencies belong in `dependencies` in `package.json`. Dependencies that do not register extensions, skills, prompt templates, or themes also belong in `dependencies`. When pi installs a package from npm or git, it runs `npm install`, so those dependencies are installed automatically.

Pi bundles core packages for extensions and skills. If you import any of these, list them in `peerDependencies` with a `"*"` range and do not bundle them: `@mariozechner/pi-ai`, `@mariozechner/pi-agent-core`, `@mariozechner/pi-coding-agent`, `@mariozechner/pi-tui`, `@sinclair/typebox`.

Other pi packages must be bundled in your tarball. Add them to `dependencies` and `bundledDependencies`, then reference their resources through `node_modules/` paths. Pi loads packages with separate module roots, so separate installs do not collide or share modules.

Example:

```json
{
  "dependencies": {
    "shitty-extensions": "^1.0.1"
  },
  "bundledDependencies": ["shitty-extensions"],
  "pi": {
    "extensions": ["extensions", "node_modules/shitty-extensions/extensions"],
    "skills": ["skills", "node_modules/shitty-extensions/skills"]
  }
}
```

## Package Filtering

Filter what a package loads using the object form in settings:

```json
{
  "packages": [
    "npm:simple-pkg",
    {
      "source": "npm:my-package",
      "extensions": ["extensions/*.ts", "!extensions/legacy.ts"],
      "skills": [],
      "prompts": ["prompts/review.md"],
      "themes": ["+themes/legacy.json"]
    }
  ]
}
```

`+path` and `-path` are exact paths relative to the package root.

- Omit a key to load all of that type.
- Use `[]` to load none of that type.
- `!pattern` excludes matches.
- `+path` force-includes an exact path.
- `-path` force-excludes an exact path.
- Filters layer on top of the manifest. They narrow down what is already allowed.

## Enable and Disable Resources

Use `pi config` to enable or disable extensions, skills, prompt templates, and themes from installed packages and local directories. Works for both global (`~/.pi/agent`) and project (`.pi/`) scopes.

## Scope and Deduplication

Packages can appear in both global and project settings. If the same package appears in both, the project entry wins. Identity is determined by:

- npm: package name
- git: repository URL without ref
- local: resolved absolute path
```

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 2000,
  "offset": 1535,
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/docs/extensions.md"
}
```

## Assistant

**Result** (success):
```
const result = await pi.exec("some-command", [], { signal });

    // Return result
    return {
      content: [{ type: "text", text: "Done" }],  // Sent to LLM
      details: { data: result },                   // For rendering & state
    };
  },

  // Optional: Custom rendering
  renderCall(args, theme, context) { ... },
  renderResult(result, options, theme, context) { ... },
});
```

**Signaling errors:** To mark a tool execution as failed (sets `isError: true` on the result and reports it to the LLM), throw an error from `execute`. Returning a value never sets the error flag regardless of what properties you include in the return object.

```typescript
// Correct: throw to signal an error
async execute(toolCallId, params) {
  if (!isValid(params.input)) {
    throw new Error(`Invalid input: ${params.input}`);
  }
  return { content: [{ type: "text", text: "OK" }], details: {} };
}
```

**Important:** Use `StringEnum` from `@mariozechner/pi-ai` for string enums. `Type.Union`/`Type.Literal` don't work with Google's API.

**Argument preparation:** `prepareArguments(args)` is optional. If defined, it runs before schema validation and before `execute()`. Use it to mimic an older accepted input shape when pi resumes an older session whose stored tool call arguments no longer match the current schema. Return the object you want validated against `parameters`. Keep the public schema strict. Do not add deprecated compatibility fields to `parameters` just to keep old resumed sessions working.

Example: an older session may contain an `edit` tool call with top-level `oldText` and `newText`, while the current schema only accepts `edits: [{ oldText, newText }]`.

```typescript
pi.registerTool({
  name: "edit",
  label: "Edit",
  description: "Edit a single file using exact text replacement",
  parameters: Type.Object({
    path: Type.String(),
    edits: Type.Array(
      Type.Object({
        oldText: Type.String(),
        newText: Type.String(),
      }),
    ),
  }),
  prepareArguments(args) {
    if (!args || typeof args !== "object") return args;

    const input = args as {
      path?: string;
      edits?: Array<{ oldText: string; newText: string }>;
      oldText?: unknown;
      newText?: unknown;
    };

    if (typeof input.oldText !== "string" || typeof input.newText !== "string") {
      return args;
    }

    return {
      ...input,
      edits: [...(input.edits ?? []), { oldText: input.oldText, newText: input.newText }],
    };
  },
  async execute(toolCallId, params, signal, onUpdate, ctx) {
    // params now matches the current schema
    return {
      content: [{ type: "text", text: `Applying ${params.edits.length} edit block(s)` }],
      details: {},
    };
  },
});
```

### Overriding Built-in Tools

Extensions can override built-in tools (`read`, `bash`, `edit`, `write`, `grep`, `find`, `ls`) by registering a tool with the same name. Interactive mode displays a warning when this happens.

```bash
# Extension's read tool replaces built-in read
pi -e ./tool-override.ts
```

Alternatively, use `--no-tools` to start without any built-in tools:
```bash
# No built-in tools, only extension tools
pi --no-tools -e ./my-extension.ts
```

See [examples/extensions/tool-override.ts](../examples/extensions/tool-override.ts) for a complete example that overrides `read` with logging and access control.

**Rendering:** Built-in renderer inheritance is resolved per slot. Execution override and rendering override are independent. If your override omits `renderCall`, the built-in `renderCall` is used. If your override omits `renderResult`, the built-in `renderResult` is used. If your override omits both, the built-in renderer is used automatically (syntax highlighting, diffs, etc.). This lets you wrap built-in tools for logging or access control without reimplementing the UI.
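A minimal sketch of this wrap-without-rerendering pattern, using a stand-in `Tool` shape rather than the real tool definition type:

```typescript
// Stand-in tool shape for illustration only — not the real pi tool type.
type Tool = {
  name: string;
  execute: (path: string) => Promise<string>;
  renderCall?: unknown; // left undefined on the override → built-in renderCall is used
};

// Wrap execute for logging while leaving both render slots undefined,
// so the built-in UI (syntax highlighting, diffs, etc.) still applies.
function withLogging(builtin: Tool, log: (line: string) => void): Tool {
  return {
    ...builtin, // keep the same name so the override replaces the built-in
    async execute(path: string) {
      log(`read ${path}`);
      return builtin.execute(path);
    },
  };
}
```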

**Prompt metadata:** `promptSnippet` and `promptGuidelines` are not inherited from the built-in tool. If your override should keep those prompt instructions, define them on the override explicitly.

**Your implementation must match the exact result shape**, including the `details` type. The UI and session logic depend on these shapes for rendering and state tracking.

Built-in tool implementations:
- [read.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/read.ts) - `ReadToolDetails`
- [bash.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/bash.ts) - `BashToolDetails`
- [edit.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/edit.ts)
- [write.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/write.ts)
- [grep.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/grep.ts) - `GrepToolDetails`
- [find.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/find.ts) - `FindToolDetails`
- [ls.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/core/tools/ls.ts) - `LsToolDetails`

### Remote Execution

Built-in tools support pluggable operations for delegating to remote systems (SSH, containers, etc.):

```typescript
import { createReadTool, createBashTool, type ReadOperations } from "@mariozechner/pi-coding-agent";

// Create tool with custom operations
const remoteRead = createReadTool(cwd, {
  operations: {
    readFile: (path) => sshExec(remote, `cat ${path}`),
    access: (path) => sshExec(remote, `test -r ${path}`).then(() => {}),
  }
});

// Register, checking flag at execution time
pi.registerTool({
  ...remoteRead,
  async execute(id, params, signal, onUpdate, _ctx) {
    const ssh = getSshConfig();
    if (ssh) {
      const tool = createReadTool(cwd, { operations: createRemoteOps(ssh) });
      return tool.execute(id, params, signal, onUpdate);
    }
    return localRead.execute(id, params, signal, onUpdate);
  },
});
```

**Operations interfaces:** `ReadOperations`, `WriteOperations`, `EditOperations`, `BashOperations`, `LsOperations`, `GrepOperations`, `FindOperations`

For `user_bash`, extensions can reuse pi's local shell backend via `createLocalBashOperations()` instead of reimplementing local process spawning, shell resolution, and process-tree termination.

The bash tool also supports a spawn hook to adjust the command, cwd, or env before execution:

```typescript
import { createBashTool } from "@mariozechner/pi-coding-agent";

const bashTool = createBashTool(cwd, {
  spawnHook: ({ command, cwd, env }) => ({
    command: `source ~/.profile\n${command}`,
    cwd: `/mnt/sandbox${cwd}`,
    env: { ...env, CI: "1" },
  }),
});
```

See [examples/extensions/ssh.ts](../examples/extensions/ssh.ts) for a complete SSH example with `--ssh` flag.

### Output Truncation

**Tools MUST truncate their output** to avoid overwhelming the LLM context. Large outputs can cause:
- Context overflow errors (prompt too long)
- Compaction failures
- Degraded model performance

The built-in limit is **50KB** (~10k tokens) and **2000 lines**, whichever is hit first. Use the exported truncation utilities:

```typescript
import {
  truncateHead,      // Keep first N lines/bytes (good for file reads, search results)
  truncateTail,      // Keep last N lines/bytes (good for logs, command output)
  truncateLine,      // Truncate a single line to maxBytes with ellipsis
  formatSize,        // Human-readable size (e.g., "50KB", "1.5MB")
  DEFAULT_MAX_BYTES, // 50KB
  DEFAULT_MAX_LINES, // 2000
} from "@mariozechner/pi-coding-agent";

async execute(toolCallId, params, signal, onUpdate, ctx) {
  const output = await runCommand();

  // Apply truncation
  const truncation = truncateHead(output, {
    maxLines: DEFAULT_MAX_LINES,
    maxBytes: DEFAULT_MAX_BYTES,
  });

  let result = truncation.content;

  if (truncation.truncated) {
    // Write full output to temp file
    const tempFile = writeTempFile(output);

    // Inform the LLM where to find complete output
    result += `\n\n[Output truncated: ${truncation.outputLines} of ${truncation.totalLines} lines`;
    result += ` (${formatSize(truncation.outputBytes)} of ${formatSize(truncation.totalBytes)}).`;
    result += ` Full output saved to: ${tempFile}]`;
  }

  return { content: [{ type: "text", text: result }] };
}
```

**Key points:**
- Use `truncateHead` for content where the beginning matters (search results, file reads)
- Use `truncateTail` for content where the end matters (logs, command output)
- Always inform the LLM when output is truncated and where to find the full version
- Document the truncation limits in your tool's description

See [examples/extensions/truncated-tool.ts](../examples/extensions/truncated-tool.ts) for a complete example wrapping `rg` (ripgrep) with proper truncation.
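As a rough sketch of what head truncation does (simplified and illustrative only; real extensions should use the exported `truncateHead`):

```typescript
// Simplified head truncation: keep the first maxLines lines, then cap the
// result at maxBytes of UTF-8. Not the library implementation.
function truncateHeadSketch(
  output: string,
  { maxLines, maxBytes }: { maxLines: number; maxBytes: number },
) {
  const lines = output.split("\n");
  let kept = lines.slice(0, maxLines).join("\n");
  const encoder = new TextEncoder();
  if (encoder.encode(kept).length > maxBytes) {
    kept = new TextDecoder().decode(encoder.encode(kept).slice(0, maxBytes));
  }
  return {
    content: kept,
    truncated: kept.length < output.length,
    totalLines: lines.length,
    outputLines: Math.min(lines.length, maxLines),
  };
}
```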

### Multiple Tools

One extension can register multiple tools with shared state:

```typescript
export default function (pi: ExtensionAPI) {
  let connection = null;

  pi.registerTool({ name: "db_connect", ... });
  pi.registerTool({ name: "db_query", ... });
  pi.registerTool({ name: "db_close", ... });

  pi.on("session_shutdown", async () => {
    connection?.close();
  });
}
```

### Custom Rendering

Tools can provide `renderCall` and `renderResult` for custom TUI display. See [tui.md](tui.md) for the full component API and [tool-execution.ts](https://github.com/badlogic/pi-mono/blob/main/packages/coding-agent/src/modes/interactive/components/tool-execution.ts) for how tool rows are composed.

Tool output is wrapped in a `Box` that handles padding and background. A defined `renderCall` or `renderResult` must return a `Component`. If a slot renderer is not defined, `tool-execution.ts` uses fallback rendering for that slot.

`renderCall` and `renderResult` each receive a `context` object with:
- `args` - the current tool call arguments
- `state` - shared row-local state across `renderCall` and `renderResult`
- `lastComponent` - the previously returned component for that slot, if any
- `invalidate()` - request a rerender of this tool row
- `toolCallId`, `cwd`, `executionStarted`, `argsComplete`, `isPartial`, `expanded`, `showImages`, `isError`

Use `context.state` for cross-slot shared state. Keep slot-local caches on the returned component instance when you want to reuse and mutate the same component across renders.

#### renderCall

Renders the tool call or header:

```typescript
import { Text } from "@mariozechner/pi-tui";

renderCall(args, theme, context) {
  const text = (context.lastComponent as Text | undefined) ?? new Text("", 0, 0);
  let content = theme.fg("toolTitle", theme.bold("my_tool "));
  content += theme.fg("muted", args.action);
  if (args.text) {
    content += " " + theme.fg("dim", `"${args.text}"`);
  }
  text.setText(content);
  return text;
}
```

#### renderResult

Renders the tool result or output:

```typescript
renderResult(result, { expanded, isPartial }, theme, context) {
  if (isPartial) {
    return new Text(theme.fg("warning", "Processing..."), 0, 0);
  }

  if (result.details?.error) {
    return new Text(theme.fg("error", `Error: ${result.details.error}`), 0, 0);
  }

  let text = theme.fg("success", "✓ Done");
  if (expanded && result.details?.items) {
    for (const item of result.details.items) {
      text += "\n  " + theme.fg("dim", item);
    }
  }
  return new Text(text, 0, 0);
}
```

If a slot intentionally has no visible content, return an empty `Component` such as an empty `Container`.

#### Keybinding Hints

Use `keyHint()` to display keybinding hints that respect the active keybinding configuration:

```typescript
import { keyHint } from "@mariozechner/pi-coding-agent";

renderResult(result, { expanded }, theme, context) {
  let text = theme.fg("success", "✓ Done");
  if (!expanded) {
    text += ` (${keyHint("app.tools.expand", "to expand")})`;
  }
  return new Text(text, 0, 0);
}
```

Available functions:
- `keyHint(keybinding, description)` - Formats a configured keybinding id such as `"app.tools.expand"` or `"tui.select.confirm"`
- `keyText(keybinding)` - Returns the raw configured key text for a keybinding id
- `rawKeyHint(key, description)` - Format a raw key string

Use namespaced keybinding ids:
- Coding-agent ids use the `app.*` namespace, for example `app.tools.expand`, `app.editor.external`, `app.session.rename`
- Shared TUI ids use the `tui.*` namespace, for example `tui.select.confirm`, `tui.select.cancel`, `tui.input.tab`

For the exhaustive list of keybinding ids and defaults, see [keybindings.md](keybindings.md). `keybindings.json` uses those same namespaced ids.

Custom editors and `ctx.ui.custom()` components receive `keybindings: KeybindingsManager` as an injected argument. They should use that injected manager directly instead of calling `getKeybindings()` or `setKeybindings()`.

#### Best Practices

- Use `Text` with padding `(0, 0)`. The Box handles padding.
- Use `\n` for multi-line content.
- Handle `isPartial` for streaming progress.
- Support `expanded` for detail on demand.
- Keep default view compact.
- Read `context.args` in `renderResult` instead of copying args into `context.state`.
- Use `context.state` only for data that must be shared across call and result slots.
- Reuse `context.lastComponent` when the same component instance can be updated in place.

#### Fallback

If a slot renderer is not defined or throws:
- `renderCall`: Shows the tool name
- `renderResult`: Shows raw text from `content`

## Custom UI

Extensions can interact with users via `ctx.ui` methods and customize how messages/tools render.

**For custom components, see [tui.md](tui.md)** which has copy-paste patterns for:
- Selection dialogs (SelectList)
- Async operations with cancel (BorderedLoader)
- Settings toggles (SettingsList)
- Status indicators (setStatus)
- Working message during streaming (setWorkingMessage)
- Widgets above/below editor (setWidget)
- Custom footers (setFooter)

### Dialogs

```typescript
// Select from options
const choice = await ctx.ui.select("Pick one:", ["A", "B", "C"]);

// Confirm dialog
const ok = await ctx.ui.confirm("Delete?", "This cannot be undone");

// Text input
const name = await ctx.ui.input("Name:", "placeholder");

// Multi-line editor
const text = await ctx.ui.editor("Edit:", "prefilled text");

// Notification (non-blocking)
ctx.ui.notify("Done!", "info");  // "info" | "warning" | "error"
```

#### Timed Dialogs with Countdown

Dialogs support a `timeout` option that auto-dismisses with a live countdown display:

```typescript
// Dialog shows "Title (5s)" → "Title (4s)" → ... → auto-dismisses at 0
const confirmed = await ctx.ui.confirm(
  "Timed Confirmation",
  "This dialog will auto-cancel in 5 seconds. Confirm?",
  { timeout: 5000 }
);

if (confirmed) {
  // User confirmed
} else {
  // User cancelled or timed out
}
```

**Return values on timeout:**
- `select()` returns `undefined`
- `confirm()` returns `false`
- `input()` returns `undefined`

#### Manual Dismissal with AbortSignal

For more control (e.g., to distinguish timeout from user cancel), use `AbortSignal`:

```typescript
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 5000);

const confirmed = await ctx.ui.confirm(
  "Timed Confirmation",
  "This dialog will auto-cancel in 5 seconds. Confirm?",
  { signal: controller.signal }
);

clearTimeout(timeoutId);

if (confirmed) {
  // User confirmed
} else if (controller.signal.aborted) {
  // Dialog timed out
} else {
  // User cancelled (pressed Escape or selected "No")
}
```

See [examples/extensions/timed-confirm.ts](../examples/extensions/timed-confirm.ts) for complete examples.

### Widgets, Status, and Footer

```typescript
// Status in footer (persistent until cleared)
ctx.ui.setStatus("my-ext", "Processing...");
ctx.ui.setStatus("my-ext", undefined);  // Clear

// Working message (shown during streaming)
ctx.ui.setWorkingMessage("Thinking deeply...");
ctx.ui.setWorkingMessage();  // Restore default

// Widget above editor (default)
ctx.ui.setWidget("my-widget", ["Line 1", "Line 2"]);
// Widget below editor
ctx.ui.setWidget("my-widget", ["Line 1", "Line 2"], { placement: "belowEditor" });
ctx.ui.setWidget("my-widget", (tui, theme) => new Text(theme.fg("accent", "Custom"), 0, 0));
ctx.ui.setWidget("my-widget", undefined);  // Clear

// Custom footer (replaces built-in footer entirely)
ctx.ui.setFooter((tui, theme) => ({
  render(width) { return [theme.fg("dim", "Custom footer")]; },
  invalidate() {},
}));
ctx.ui.setFooter(undefined);  // Restore built-in footer

// Terminal title
ctx.ui.setTitle("pi - my-project");

// Editor text
ctx.ui.setEditorText("Prefill text");
const current = ctx.ui.getEditorText();

// Paste into editor (triggers paste handling, including collapse for large content)
ctx.ui.pasteToEditor("pasted content");

// Tool output expansion
const wasExpanded = ctx.ui.getToolsExpanded();
ctx.ui.setToolsExpanded(true);
ctx.ui.setToolsExpanded(wasExpanded);

// Custom editor (vim mode, emacs mode, etc.)
ctx.ui.setEditorComponent((tui, theme, keybindings) => new VimEditor(tui, theme, keybindings));
ctx.ui.setEditorComponent(undefined);  // Restore default editor

// Theme management (see themes.md for creating themes)
const themes = ctx.ui.getAllThemes();  // [{ name: "dark", path: "/..." | undefined }, ...]
const lightTheme = ctx.ui.getTheme("light");  // Load without switching
const result = ctx.ui.setTheme("light");  // Switch by name
if (!result.success) {
  ctx.ui.notify(`Failed: ${result.error}`, "error");
}
ctx.ui.setTheme(lightTheme!);  // Or switch by Theme object
ctx.ui.theme.fg("accent", "styled text");  // Access current theme
```

### Custom Components

For complex UI, use `ctx.ui.custom()`. This temporarily replaces the editor with your component until `done()` is called:

```typescript
import { Text, Component } from "@mariozechner/pi-tui";

const result = await ctx.ui.custom<boolean>((tui, theme, keybindings, done) => {
  const text = new Text("Press Enter to confirm, Escape to cancel", 1, 1);

  text.onKey = (key) => {
    if (key === "return") done(true);
    if (key === "escape") done(false);
    return true;
  };

  return text;
});

if (result) {
  // User pressed Enter
}
```

The callback receives:
- `tui` - TUI instance (for screen dimensions, focus management)
- `theme` - Current theme for styling
- `keybindings` - App keybinding manager (for checking shortcuts)
- `done(value)` - Call to close component and return value

See [tui.md](tui.md) for the full component API.

#### Overlay Mode (Experimental)

Pass `{ overlay: true }` to render the component as a floating modal on top of existing content, without clearing the screen:

```typescript
const result = await ctx.ui.custom<string | null>(
  (tui, theme, keybindings, done) => new MyOverlayComponent({ onClose: done }),
  { overlay: true }
);
```

For advanced positioning (anchors, margins, percentages, responsive visibility), pass `overlayOptions`. Use `onHandle` to control visibility programmatically:

```typescript
const result = await ctx.ui.custom<string | null>(
  (tui, theme, keybindings, done) => new MyOverlayComponent({ onClose: done }),
  {
    overlay: true,
    overlayOptions: { anchor: "top-right", width: "50%", margin: 2 },
    onHandle: (handle) => { /* handle.setHidden(true/false) */ }
  }
);
```

See [tui.md](tui.md) for the full `OverlayOptions` API and [overlay-qa-tests.ts](../examples/extensions/overlay-qa-tests.ts) for examples.

### Custom Editor

Replace the main input editor with a custom implementation (vim mode, emacs mode, etc.):

```typescript
import { CustomEditor, type ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { matchesKey } from "@mariozechner/pi-tui";

class VimEditor extends CustomEditor {
  private mode: "normal" | "insert" = "insert";

  handleInput(data: string): void {
    if (matchesKey(data, "escape") && this.mode === "insert") {
      this.mode = "normal";
      return;
    }
    if (this.mode === "normal" && data === "i") {
      this.mode = "insert";
      return;
    }
    super.handleInput(data);  // App keybindings + text editing
  }
}

export default function (pi: ExtensionAPI) {
  pi.on("session_start", (_event, ctx) => {
    ctx.ui.setEditorComponent((_tui, theme, keybindings) =>
      new VimEditor(theme, keybindings)
    );
  });
}
```

**Key points:**
- Extend `CustomEditor` (not base `Editor`) to get app keybindings (escape to abort, ctrl+d, model switching)
- Call `super.handleInput(data)` for keys you don't handle
- Factory receives `theme` and `keybindings` from the app
- Pass `undefined` to restore default: `ctx.ui.setEditorComponent(undefined)`

See [tui.md](tui.md) Pattern 7 for a complete example with mode indicator.

### Message Rendering

Register a custom renderer for messages with your `customType`:

```typescript
import { Text } from "@mariozechner/pi-tui";

pi.registerMessageRenderer("my-extension", (message, options, theme) => {
  const { expanded } = options;
  let text = theme.fg("accent", `[${message.customType}] `);
  text += message.content;

  if (expanded && message.details) {
    text += "\n" + theme.fg("dim", JSON.stringify(message.details, null, 2));
  }

  return new Text(text, 0, 0);
});
```

Messages are sent via `pi.sendMessage()`:

```typescript
pi.sendMessage({
  customType: "my-extension",  // Matches registerMessageRenderer
  content: "Status update",
  display: true,               // Show in TUI
  details: { ... },            // Available in renderer
});
```

### Theme Colors

All render functions receive a `theme` object. See [themes.md](themes.md) for creating custom themes and the full color palette.

```typescript
// Foreground colors
theme.fg("toolTitle", text)   // Tool names
theme.fg("accent", text)      // Highlights
theme.fg("success", text)     // Success (green)
theme.fg("error", text)       // Errors (red)
theme.fg("warning", text)     // Warnings (yellow)
theme.fg("muted", text)       // Secondary text
theme.fg("dim", text)         // Tertiary text

// Text styles
theme.bold(text)
theme.italic(text)
theme.strikethrough(text)
```

For syntax highlighting in custom tool renderers:

```typescript
import { highlightCode, getLanguageFromPath } from "@mariozechner/pi-coding-agent";

// Highlight code with explicit language
const highlighted = highlightCode("const x = 1;", "typescript", theme);

// Auto-detect language from file path
const lang = getLanguageFromPath("/path/to/file.rs");  // "rust"
const highlighted = highlightCode(code, lang, theme);
```

## Error Handling

- Extension errors are logged, agent continues
- `tool_call` errors block the tool (fail-safe)
- Tool `execute` errors must be signaled by throwing; the thrown error is caught, reported to the LLM with `isError: true`, and execution continues

## Mode Behavior

| Mode | UI Methods | Notes |
|------|-----------|-------|
| Interactive | Full TUI | Normal operation |
| RPC (`--mode rpc`) | JSON protocol | Host handles UI, see [rpc.md](rpc.md) |
| JSON (`--mode json`) | No-op | Event stream to stdout, see [json.md](json.md) |
| Print (`-p`) | No-op | Extensions run but can't prompt |

In non-interactive modes, check `ctx.hasUI` before using UI methods.
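A minimal sketch of that guard, using a hypothetical `Ctx` stand-in for the real context type:

```typescript
// Illustrative only: Ctx is a minimal stand-in, not the real ExtensionContext.
type Ctx = {
  hasUI: boolean;
  ui?: { notify(message: string, level: "info" | "warning" | "error"): void };
};

function notifySafely(ctx: Ctx, message: string): "notified" | "logged" {
  if (ctx.hasUI && ctx.ui) {
    ctx.ui.notify(message, "info"); // interactive mode: show a TUI notification
    return "notified";
  }
  console.error(message); // headless modes (json, print): fall back to stderr
  return "logged";
}
```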

## Examples Reference

All examples in [examples/extensions/](../examples/extensions/).

| Example | Description | Key APIs |
|---------|-------------|----------|
| **Tools** |||
| `hello.ts` | Minimal tool registration | `registerTool` |
| `question.ts` | Tool with user interaction | `registerTool`, `ui.select` |
| `questionnaire.ts` | Multi-step wizard tool | `registerTool`, `ui.custom` |
| `todo.ts` | Stateful tool with persistence | `registerTool`, `appendEntry`, `renderResult`, session events |
| `dynamic-tools.ts` | Register tools after startup and during commands | `registerTool`, `session_start`, `registerCommand` |
| `truncated-tool.ts` | Output truncation example | `registerTool`, `truncateHead` |
| `tool-override.ts` | Override built-in read tool | `registerTool` (same name as built-in) |
| **Commands** |||
| `pirate.ts` | Modify system prompt per-turn | `registerCommand`, `before_agent_start` |
| `summarize.ts` | Conversation summary command | `registerCommand`, `ui.custom` |
| `handoff.ts` | Cross-provider model handoff | `registerCommand`, `ui.editor`, `ui.custom` |
| `qna.ts` | Q&A with custom UI | `registerCommand`, `ui.custom`, `setEditorText` |
| `send-user-message.ts` | Inject user messages | `registerCommand`, `sendUserMessage` |
| `reload-runtime.ts` | Reload command and LLM tool handoff | `registerCommand`, `ctx.reload()`, `sendUserMessage` |
| `shutdown-command.ts` | Graceful shutdown command | `registerCommand`, `shutdown()` |
| **Events & Gates** |||
| `permission-gate.ts` | Block dangerous commands | `on("tool_call")`, `ui.confirm` |
| `protected-paths.ts` | Block writes to specific paths | `on("tool_call")` |
| `confirm-destructive.ts` | Confirm session changes | `on("session_before_switch")`, `on("session_before_fork")` |
| `dirty-repo-guard.ts` | Warn on dirty git repo | `on("session_before_*")`, `exec` |
| `input-transform.ts` | Transform user input | `on("input")` |
| `model-status.ts` | React to model changes | `on("model_select")`, `setStatus` |
| `provider-payload.ts` | Inspect or patch provider payloads | `on("before_provider_request")` |
| `system-prompt-header.ts` | Display system prompt info | `on("agent_start")`, `getSystemPrompt` |
| `claude-rules.ts` | Load rules from files | `on("session_start")`, `on("before_agent_start")` |
| `file-trigger.ts` | File watcher triggers messages | `sendMessage` |
| **Compaction & Sessions** |||
| `custom-compaction.ts` | Custom compaction summary | `on("session_before_compact")` |
| `trigger-compact.ts` | Trigger compaction manually | `compact()` |
| `git-checkpoint.ts` | Git stash on turns | `on("turn_start")`, `on("session_before_fork")`, `exec` |
| `auto-commit-on-exit.ts` | Commit on shutdown | `on("session_shutdown")`, `exec` |
| **UI Components** |||
| `status-line.ts` | Footer status indicator | `setStatus`, session events |
| `custom-footer.ts` | Replace footer entirely | `registerCommand`, `setFooter` |
| `custom-header.ts` | Replace startup header | `on("session_start")`, `setHeader` |
| `modal-editor.ts` | Vim-style modal editor | `setEditorComponent`, `CustomEditor` |
| `rainbow-editor.ts` | Custom editor styling | `setEditorComponent` |
| `widget-placement.ts` | Widget above/below editor | `setWidget` |
| `overlay-test.ts` | Overlay components | `ui.custom` with overlay options |
| `overlay-qa-tests.ts` | Comprehensive overlay tests | `ui.custom`, all overlay options |
| `notify.ts` | Simple notifications | `ui.notify` |
| `timed-confirm.ts` | Dialogs with timeout | `ui.confirm` with timeout/signal |
| `mac-system-theme.ts` | Auto-switch theme | `setTheme`, `exec` |
| **Complex Extensions** |||
| `plan-mode/` | Full plan mode implementation | All event types, `registerCommand`, `registerShortcut`, `registerFlag`, `setStatus`, `setWidget`, `sendMessage`, `setActiveTools` |
| `preset.ts` | Saveable presets (model, tools, thinking) | `registerCommand`, `registerShortcut`, `registerFlag`, `setModel`, `setActiveTools`, `setThinkingLevel`, `appendEntry` |
| `tools.ts` | Toggle tools on/off UI | `registerCommand`, `setActiveTools`, `SettingsList`, session events |
| **Remote & Sandbox** |||
| `ssh.ts` | SSH remote execution | `registerFlag`, `on("user_bash")`, `on("before_agent_start")`, tool operations |
| `interactive-shell.ts` | Persistent shell session | `on("user_bash")` |
| `sandbox/` | Sandboxed tool execution | Tool operations |
| `subagent/` | Spawn sub-agents | `registerTool`, `exec` |
| **Games** |||
| `snake.ts` | Snake game | `registerCommand`, `ui.custom`, keyboard handling |
| `space-invaders.ts` | Space Invaders game | `registerCommand`, `ui.custom` |
| `doom-overlay/` | Doom in overlay | `ui.custom` with overlay |
| **Providers** |||
| `custom-provider-anthropic/` | Custom Anthropic proxy | `registerProvider` |
| `custom-provider-gitlab-duo/` | GitLab Duo integration | `registerProvider` with OAuth |
| **Messages & Communication** |||
| `message-renderer.ts` | Custom message rendering | `registerMessageRenderer`, `sendMessage` |
| `event-bus.ts` | Inter-extension events | `pi.events` |
| **Session Metadata** |||
| `session-name.ts` | Name sessions for selector | `setSessionName`, `getSessionName` |
| `bookmark.ts` | Bookmark entries for /tree | `setLabel` |
| **Misc** |||
| `antigravity-image-gen.ts` | Image generation tool | `registerTool`, Google Antigravity |
| `inline-bash.ts` | Inline bash in tool calls | `on("tool_call")` |
| `bash-spawn-hook.ts` | Adjust bash command, cwd, and env before execution | `createBashTool`, `spawnHook` |
| `with-deps/` | Extension with npm dependencies | Package structure with `package.json` |
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/opt/homebrew/Cellar/pi-coding-agent/0.65.2/libexec/lib/node_modules/@mariozechner/pi-coding-agent/examples/extensions/README.md"
}
```

## Assistant

**Result** (success):
```
# Extension Examples

Example extensions for pi-coding-agent.

## Usage

```bash
# Load an extension with --extension flag
pi --extension examples/extensions/permission-gate.ts

# Or copy to extensions directory for auto-discovery
cp permission-gate.ts ~/.pi/agent/extensions/
```

## Examples

### Lifecycle & Safety

| Extension | Description |
|-----------|-------------|
| `permission-gate.ts` | Prompts for confirmation before dangerous bash commands (rm -rf, sudo, etc.) |
| `protected-paths.ts` | Blocks writes to protected paths (.env, .git/, node_modules/) |
| `confirm-destructive.ts` | Confirms before destructive session actions (clear, switch, fork) |
| `dirty-repo-guard.ts` | Prevents session changes with uncommitted git changes |
| `sandbox/` | OS-level sandboxing using `@anthropic-ai/sandbox-runtime` with per-project config |

### Custom Tools

| Extension | Description |
|-----------|-------------|
| `todo.ts` | Todo list tool + `/todos` command with custom rendering and state persistence |
| `hello.ts` | Minimal custom tool example |
| `question.ts` | Demonstrates `ctx.ui.select()` for asking the user questions with custom UI |
| `questionnaire.ts` | Multi-question input with tab bar navigation between questions |
| `tool-override.ts` | Override built-in tools (e.g., add logging/access control to `read`) |
| `dynamic-tools.ts` | Register tools after startup (`session_start`) and at runtime via command, with prompt snippets and tool-specific prompt guidelines |
| `built-in-tool-renderer.ts` | Custom compact rendering for built-in tools (read, bash, edit, write) while keeping original behavior |
| `minimal-mode.ts` | Override built-in tool rendering for minimal display (only tool calls, no output in collapsed mode) |
| `truncated-tool.ts` | Wraps ripgrep with proper output truncation (50KB/2000 lines) |
| `antigravity-image-gen.ts` | Generate images via Google Antigravity with optional save-to-disk modes |
| `ssh.ts` | Delegate all tools to a remote machine via SSH using pluggable operations |
| `subagent/` | Delegate tasks to specialized subagents with isolated context windows |

### Commands & UI

| Extension | Description |
|-----------|-------------|
| `preset.ts` | Named presets for model, thinking level, tools, and instructions via `--preset` flag and `/preset` command |
| `plan-mode/` | Claude Code-style plan mode for read-only exploration with `/plan` command and step tracking |
| `tools.ts` | Interactive `/tools` command to enable/disable tools with session persistence |
| `handoff.ts` | Transfer context to a new focused session via `/handoff <goal>` |
| `qna.ts` | Extracts questions from last response into editor via `ctx.ui.setEditorText()` |
| `status-line.ts` | Shows turn progress in footer via `ctx.ui.setStatus()` with themed colors |
| `widget-placement.ts` | Shows widgets above and below the editor via `ctx.ui.setWidget()` placement |
| `hidden-thinking-label.ts` | Customizes the collapsed thinking label via `ctx.ui.setHiddenThinkingLabel()` |
| `model-status.ts` | Shows model changes in status bar via `model_select` hook |
| `snake.ts` | Snake game with custom UI, keyboard handling, and session persistence |
| `send-user-message.ts` | Demonstrates `pi.sendUserMessage()` for sending user messages from extensions |
| `timed-confirm.ts` | Demonstrates AbortSignal for auto-dismissing `ctx.ui.confirm()` and `ctx.ui.select()` dialogs |
| `rpc-demo.ts` | Exercises all RPC-supported extension UI methods; pair with [`examples/rpc-extension-ui.ts`](../rpc-extension-ui.ts) |
| `modal-editor.ts` | Custom vim-like modal editor via `ctx.ui.setEditorComponent()` |
| `rainbow-editor.ts` | Animated rainbow text effect via custom editor |
| `notify.ts` | Desktop notifications via OSC 777 when agent finishes (Ghostty, iTerm2, WezTerm) |
| `titlebar-spinner.ts` | Braille spinner animation in terminal title while the agent is working |
| `summarize.ts` | Summarize conversation with GPT-5.2 and show in transient UI |
| `custom-footer.ts` | Custom footer with git branch and token stats via `ctx.ui.setFooter()` |
| `custom-header.ts` | Custom header via `ctx.ui.setHeader()` |
| `overlay-test.ts` | Test overlay compositing with inline text inputs and edge cases |
| `overlay-qa-tests.ts` | Comprehensive overlay QA tests: anchors, margins, stacking, overflow, animation |
| `doom-overlay/` | DOOM game running as an overlay at 35 FPS (demonstrates real-time game rendering) |
| `shutdown-command.ts` | Adds `/quit` command demonstrating `ctx.shutdown()` |
| `reload-runtime.ts` | Adds `/reload-runtime` and `reload_runtime` tool showing safe reload flow |
| `interactive-shell.ts` | Run interactive commands (vim, htop) with full terminal via `user_bash` hook |
| `inline-bash.ts` | Expands `!{command}` patterns in prompts via `input` event transformation |

### Git Integration

| Extension | Description |
|-----------|-------------|
| `git-checkpoint.ts` | Creates git stash checkpoints at each turn for code restoration on fork |
| `auto-commit-on-exit.ts` | Auto-commits on exit using last assistant message for commit message |

### System Prompt & Compaction

| Extension | Description |
|-----------|-------------|
| `pirate.ts` | Demonstrates `systemPromptAppend` to dynamically modify system prompt |
| `claude-rules.ts` | Scans `.claude/rules/` folder and lists rules in system prompt |
| `custom-compaction.ts` | Custom compaction that summarizes entire conversation |
| `trigger-compact.ts` | Triggers compaction when context usage exceeds 100k tokens and adds `/trigger-compact` command |

### System Integration

| Extension | Description |
|-----------|-------------|
| `mac-system-theme.ts` | Syncs pi theme with macOS dark/light mode |

### Resources

| Extension | Description |
|-----------|-------------|
| `dynamic-resources/` | Loads skills, prompts, and themes using `resources_discover` |

### Messages & Communication

| Extension | Description |
|-----------|-------------|
| `message-renderer.ts` | Custom message rendering with colors and expandable details via `registerMessageRenderer` |
| `event-bus.ts` | Inter-extension communication via `pi.events` |

### Session Metadata

| Extension | Description |
|-----------|-------------|
| `session-name.ts` | Name sessions for the session selector via `setSessionName` |
| `bookmark.ts` | Bookmark entries with labels for `/tree` navigation via `setLabel` |

### Custom Providers

| Extension | Description |
|-----------|-------------|
| `custom-provider-anthropic/` | Custom Anthropic provider with OAuth support and custom streaming implementation |
| `custom-provider-gitlab-duo/` | GitLab Duo provider using pi-ai's built-in Anthropic/OpenAI streaming via proxy |
| `custom-provider-qwen-cli/` | Qwen CLI provider with OAuth device flow and OpenAI-compatible models |

### External Dependencies

| Extension | Description |
|-----------|-------------|
| `with-deps/` | Extension with its own package.json and dependencies (demonstrates jiti module resolution) |
| `file-trigger.ts` | Watches a trigger file and injects contents into conversation |

## Writing Extensions

See [docs/extensions.md](../../docs/extensions.md) for full documentation.

```typescript
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Type } from "@sinclair/typebox";

export default function (pi: ExtensionAPI) {
  // Subscribe to lifecycle events
  pi.on("tool_call", async (event, ctx) => {
    if (event.toolName === "bash" && event.input.command?.includes("rm -rf")) {
      const ok = await ctx.ui.confirm("Dangerous!", "Allow rm -rf?");
      if (!ok) return { block: true, reason: "Blocked by user" };
    }
  });

  // Register custom tools
  pi.registerTool({
    name: "greet",
    label: "Greeting",
    description: "Generate a greeting",
    parameters: Type.Object({
      name: Type.String({ description: "Name to greet" }),
    }),
    async execute(toolCallId, params, onUpdate, ctx, signal) {
      return {
        content: [{ type: "text", text: `Hello, ${params.name}!` }],
        details: {},
      };
    },
  });

  // Register commands
  pi.registerCommand("hello", {
    description: "Say hello",
    handler: async (args, ctx) => {
      ctx.ui.notify("Hello!", "info");
    },
  });
}
```

## Key Patterns

**Use StringEnum for string parameters** (required for Google API compatibility):
```typescript
import { StringEnum } from "@mariozechner/pi-ai";

// Good
action: StringEnum(["list", "add"] as const)

// Bad - doesn't work with Google
action: Type.Union([Type.Literal("list"), Type.Literal("add")])
```

**State persistence via details:**
```typescript
// Store state in tool result details for proper forking support
return {
  content: [{ type: "text", text: "Done" }],
  details: { todos: [...todos], nextId },  // Persisted in session
};

// Reconstruct on session events
pi.on("session_start", async (_event, ctx) => {
  for (const entry of ctx.sessionManager.getBranch()) {
    if (entry.type === "message" && entry.message.toolName === "my_tool") {
      const details = entry.message.details;
      // Reconstruct state from details
    }
  }
});
```
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev && pnpm dlx gh-axi repo create pi-manim-video --public --description \"Pi extension for Manim video production workflow\" --clone",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
created: ok
repo: pi-manim-video
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "ls -la ~/dev/pi-manim-video && git -C ~/dev/pi-manim-video remote -v"
}
```

## Assistant

**Result** (success):
```
total 0
drwxr-xr-x@  3 mukund  staff    96 Apr 11 02:42 .
drwxr-xr-x@ 61 mukund  staff  1952 Apr 11 02:42 ..
drwxr-xr-x@  9 mukund  staff   288 Apr 11 02:42 .git
origin	[REDACTED]:maujim/pi-manim-video.git (fetch)
origin	[REDACTED]:maujim/pi-manim-video.git (push)
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && mkdir -p extensions/manim-video skills/manim-video && cp -R /Users/mukund/src/tries/2026-03-28-hermes-agent/skills/creative/manim-video/* skills/manim-video/"
}
```

## Assistant

**Result** (success):
```
(no output)
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/scripts/setup.sh"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 120,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/README.md"
}
```

## Assistant

**Result** (success):
```
#!/usr/bin/env bash
set -euo pipefail
G="\033[0;32m"; R="\033[0;31m"; N="\033[0m"
ok() { echo -e "  ${G}+${N} $1"; }
fail() { echo -e "  ${R}x${N} $1"; }
echo ""; echo "Manim Video Skill — Setup Check"; echo ""
errors=0
command -v python3 &>/dev/null && ok "Python $(python3 --version 2>&1 | awk '{print $2}')" || { fail "Python 3 not found"; errors=$((errors+1)); }
python3 -c "import manim" 2>/dev/null && ok "Manim $(manim --version 2>&1 | head -1)" || { fail "Manim not installed: pip install manim"; errors=$((errors+1)); }
command -v pdflatex &>/dev/null && ok "LaTeX (pdflatex)" || { fail "LaTeX not found (macOS: brew install --cask mactex-no-gui)"; errors=$((errors+1)); }
command -v ffmpeg &>/dev/null && ok "ffmpeg" || { fail "ffmpeg not found"; errors=$((errors+1)); }
echo ""
[ $errors -eq 0 ] && echo -e "${G}All prerequisites satisfied.${N}" || echo -e "${R}$errors prerequisite(s) missing.${N}"
echo ""
```
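The setup script above follows one pattern throughout: probe each executable on PATH, print a pass/fail line, and tally failures. As an editorial aside, that pattern can be sketched in stdlib Python; the command list mirrors setup.sh, but the helper names here are hypothetical and this is not part of the skill itself:

```python
import shutil
import subprocess

def check_prerequisites(commands):
    """Return {command: found?} via PATH lookup, mirroring the
    `command -v` checks in setup.sh (illustrative sketch only)."""
    return {cmd: shutil.which(cmd) is not None for cmd in commands}

def manim_version():
    """Best-effort version probe, like `manim --version | head -1`.
    Returns None when manim is not on PATH."""
    exe = shutil.which("manim")
    if exe is None:
        return None
    out = subprocess.run([exe, "--version"], capture_output=True, text=True)
    return out.stdout.splitlines()[0] if out.stdout else None

if __name__ == "__main__":
    status = check_prerequisites(["python3", "manim", "pdflatex", "ffmpeg"])
    for cmd, found in status.items():
        print(f"  {'+' if found else 'x'} {cmd}")
```

The dict return makes the tally trivial (`sum(not v for v in status.values())`), which is the Python analogue of the script's `errors` counter.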

## Assistant

**Result** (success):
```
---
name: manim-video
description: "Production pipeline for mathematical and technical animations using Manim Community Edition. Creates 3Blue1Brown-style explainer videos, algorithm visualizations, equation derivations, architecture diagrams, and data stories. Use when users request: animated explanations, math animations, concept visualizations, algorithm walkthroughs, technical explainers, 3Blue1Brown style videos, or any programmatic animation with geometric/mathematical content."
version: 1.0.0
---

# Manim Video Production Pipeline

## Creative Standard

This is educational cinema. Every frame teaches. Every animation reveals structure.

**Before writing a single line of code**, articulate the narrative arc. What misconception does this correct? What is the "aha moment"? What visual story takes the viewer from confusion to understanding? The user's prompt is a starting point — interpret it with pedagogical ambition.

**Geometry before algebra.** Show the shape first, the equation second. Visual memory encodes faster than symbolic memory. When the viewer sees the geometric pattern before the formula, the equation feels earned.

**First-render excellence is non-negotiable.** The output must be visually clear and aesthetically cohesive without revision rounds. If something looks cluttered, poorly timed, or like "AI-generated slides," it is wrong.

**Opacity layering directs attention.** Never show everything at full brightness. Primary elements at 1.0, contextual elements at 0.4, structural elements (axes, grids) at 0.15. The brain processes visual salience in layers.

**Breathing room.** Every animation needs `self.wait()` after it. The viewer needs time to absorb what just appeared. Never rush from one animation to the next. A 2-second pause after a key reveal is never wasted.

**Cohesive visual language.** All scenes share a color palette, consistent typography sizing, matching animation speeds. A technically correct video where every scene uses random different colors is an aesthetic failure.

## Prerequisites

Run `scripts/setup.sh` to verify all dependencies. Requires: Python 3.10+, Manim Community Edition v0.20+ (`pip install manim`), LaTeX (`texlive-full` on Linux, `mactex` on macOS), and ffmpeg. Reference docs tested against Manim CE v0.20.1.

## Modes

| Mode | Input | Output | Reference |
|------|-------|--------|-----------|
| **Concept explainer** | Topic/concept | Animated explanation with geometric intuition | `references/scene-planning.md` |
| **Equation derivation** | Math expressions | Step-by-step animated proof | `references/equations.md` |
| **Algorithm visualization** | Algorithm description | Step-by-step execution with data structures | `references/graphs-and-data.md` |
| **Data story** | Data/metrics | Animated charts, comparisons, counters | `references/graphs-and-data.md` |
| **Architecture diagram** | System description | Components building up with connections | `references/mobjects.md` |
| **Paper explainer** | Research paper | Key findings and methods animated | `references/scene-planning.md` |
| **3D visualization** | 3D concept | Rotating surfaces, parametric curves, spatial geometry | `references/camera-and-3d.md` |

## Stack

Single Python script per project. No browser, no Node.js, no GPU required.

| Layer | Tool | Purpose |
|-------|------|---------|
| Core | Manim Community Edition | Scene rendering, animation engine |
| Math | LaTeX (texlive/MiKTeX) | Equation rendering via `MathTex` |
| Video I/O | ffmpeg | Scene stitching, format conversion, audio muxing |
| TTS | ElevenLabs / Qwen3-TTS (optional) | Narration voiceover |

## Pipeline

```
PLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW
```

1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script
2. **CODE** — Write `script.py` with one class per scene, each independently renderable
3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production
4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`
5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`
6. **REVIEW** — Render preview stills, verify against plan, adjust

## Project Structure

```
project-name/
  plan.md                # Narrative arc, scene breakdown
  script.py              # All scenes in one file
  concat.txt             # ffmpeg scene list
  final.mp4              # Stitched output
  media/                 # Auto-generated by Manim
    videos/script/480p15/
```

## Creative Direction

### Color Palettes

| Palette | Background | Primary | Secondary | Accent | Use case |
|---------|-----------|---------|-----------|--------|----------|
| **Classic 3B1B** | `#1C1C1C` | `#58C4DD` (BLUE) | `#83C167` (GREEN) | `#FFFF00` (YELLOW) | General math/CS |
| **Warm academic** | `#2D2B55` | `#FF6B6B` | `#FFD93D` | `#6BCB77` | Approachable |
| **Neon tech** | `#0A0A0A` | `#00F5FF` | `#FF00FF` | `#39FF14` | Systems, architecture |
| **Monochrome** | `#1A1A2E` | `#EAEAEA` | `#888888` | `#FFFFFF` | Minimalist |

### Animation Speed

| Context | run_time | self.wait() after |
|---------|----------|-------------------|
| Title/intro appear | 1.5s | 1.0s |
| Key equation reveal | 2.0s | 2.0s |
| Transform/morph | 1.5s | 1.5s |
| Supporting label | 0.8s | 0.5s |
| FadeOut cleanup | 0.5s | 0.3s |
| "Aha moment" reveal | 2.5s | 3.0s |

### Typography Scale

| Role | Font size | Usage |
|------|-----------|-------|
| Title | 48 | Scene titles, opening text |
| Heading | 36 | Section headers within a scene |
| Body | 30 | Explanatory text |
| Label | 24 | Annotations, axis labels |
| Caption | 20 | Subtitles, fine print |

### Fonts

**Use monospace fonts for all text.** Manim's Pango renderer produces broken kerning with proportional fonts at all sizes. See `references/visual-design.md` for full recommendations.

```python
MONO = "Menlo"  # define once at top of file

Text("Fourier Series", font_size=48, font=MONO, weight=BOLD)  # titles
Text("n=1: sin(x)", font_size=20, font=MONO)                  # labels
MathTex(r"\nabla L")                                            # math (uses LaTeX)
```


[145 more lines in file. Use offset=121 to continue.]
```
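The STITCH step in the quoted pipeline (ffmpeg concat of per-scene clips into `final.mp4`, via the `concat.txt` file in the project structure) can be sketched as follows. This is an illustrative sketch, not code from the skill; the scene filenames and helper names are hypothetical:

```python
from pathlib import Path

def write_concat_file(scene_clips, concat_path):
    """Write an ffmpeg concat-demuxer list: one `file '...'` line per clip,
    matching the concat.txt layout in the skill's project structure."""
    lines = [f"file '{clip}'" for clip in scene_clips]
    Path(concat_path).write_text("\n".join(lines) + "\n")
    return lines

def stitch_command(concat_path, output="final.mp4"):
    """ffmpeg invocation for lossless concat of same-codec scene clips.
    `-c copy` avoids re-encoding; `-safe 0` permits absolute paths."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", str(concat_path), "-c", "copy", output]
```

Because every scene is rendered by the same Manim invocation (same codec, resolution, and frame rate), the stream-copy concat is valid; clips from mixed quality flags (`-ql` vs `-qh`) would need re-encoding instead.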

## Assistant

**Result** (success):
```
# Manim Video Skill

Production pipeline for mathematical and technical animations using [Manim Community Edition](https://www.manim.community/).

## What it does

Creates 3Blue1Brown-style animated videos from text prompts. The agent handles the full pipeline: creative planning, Python code generation, rendering, scene stitching, and iterative refinement.

## Use cases

- **Concept explainers** — "Explain how neural networks learn"
- **Equation derivations** — "Animate the proof of the Pythagorean theorem"
- **Algorithm visualizations** — "Show how quicksort works step by step"
- **Data stories** — "Animate our before/after performance metrics"
- **Architecture diagrams** — "Show our microservice architecture building up"

## Prerequisites

Python 3.10+, Manim CE (`pip install manim`), LaTeX, ffmpeg.

```bash
bash skills/creative/manim-video/scripts/setup.sh
```
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "uv --version && uv run --no-project python --version && uv run --no-project manim --version | head -n 1",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
uv 0.11.3 (Homebrew 2026-04-01 aarch64-apple-darwin)
Python 3.13.5
error: Failed to spawn: `manim`
  Caused by: No such file or directory (os error 2)
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && rg -n \"skills/creative/manim-video|pip install manim|scripts/setup.sh|pip install\" skills/manim-video -S"
}
```

## Assistant

**Result** (success):
```
skills/manim-video/SKILL.md:27:Run `scripts/setup.sh` to verify all dependencies. Requires: Python 3.10+, Manim Community Edition v0.20+ (`pip install manim`), LaTeX (`texlive-full` on Linux, `mactex` on macOS), and ffmpeg. Reference docs tested against Manim CE v0.20.1.
skills/manim-video/README.md:19:Python 3.10+, Manim CE (`pip install manim`), LaTeX, ffmpeg.
skills/manim-video/README.md:22:bash skills/creative/manim-video/scripts/setup.sh
skills/manim-video/scripts/setup.sh:9:python3 -c "import manim" 2>/dev/null && ok "Manim $(manim --version 2>&1 | head -1)" || { fail "Manim not installed: pip install manim"; errors=$((errors+1)); }
skills/manim-video/references/rendering.md:141:pip install "manim-voiceover[elevenlabs]"
skills/manim-video/references/rendering.md:143:pip install "manim-voiceover[gtts]"    # Google TTS (free, lower quality)
skills/manim-video/references/rendering.md:144:pip install "manim-voiceover[azure]"   # Azure Cognitive Services
```

## Assistant

### Tool: write

**Args:**
```json
{
  "content": "{\n  \"name\": \"pi-manim-video\",\n  \"version\": \"0.1.0\",\n  \"description\": \"Pi extension + skill bundle for Manim video production workflows\",\n  \"keywords\": [\n    \"pi-package\"\n  ],\n  \"license\": \"MIT\",\n  \"pi\": {\n    \"extensions\": [\n      \"./extensions/manim-video/index.ts\"\n    ]\n  },\n  \"peerDependencies\": {\n    \"@mariozechner/pi-coding-agent\": \"*\"\n  }\n}\n",
  "path": "/Users/mukund/dev/pi-manim-video/package.json"
}
```

## Assistant

### Tool: write

**Args:**
```json
{
  "content": "import { dirname, join } from \"node:path\";\nimport { fileURLToPath } from \"node:url\";\nimport type { ExtensionAPI } from \"@mariozechner/pi-coding-agent\";\n\nconst baseDir = dirname(fileURLToPath(import.meta.url));\nconst skillPath = join(baseDir, \"..\", \"..\", \"skills\", \"manim-video\", \"SKILL.md\");\n\nexport default function (pi: ExtensionAPI) {\n  pi.on(\"resources_discover\", () => {\n    return {\n      skillPaths: [skillPath],\n    };\n  });\n\n  pi.registerCommand(\"manim\", {\n    description: \"Run the manim-video skill for animation planning/code/render workflows\",\n    handler: async (args, ctx) => {\n      const prompt = (args ?? \"\").trim();\n      const command = prompt.length > 0 ? `/skill:manim-video ${prompt}` : \"/skill:manim-video\";\n      pi.sendUserMessage(command);\n\n      if (ctx.hasUI) {\n        ctx.ui.notify(\"Queued manim-video skill.\", \"info\");\n      }\n    },\n  });\n}\n",
  "path": "/Users/mukund/dev/pi-manim-video/extensions/manim-video/index.ts"
}
```

## Assistant

### Tool: write

**Args:**
```json
{
  "content": "# pi-manim-video\n\nPi extension that ports the full `manim-video` skill (planning guides, references, and scripts) and adds a shortcut command: `/manim`.\n\n## Install\n\n```bash\npi install git:[REDACTED]:maujim/pi-manim-video\n```\n\n## Update\n\n```bash\npi update git:[REDACTED]:maujim/pi-manim-video\n```\n\n## Usage\n\nUse either:\n\n- `/manim <your request>`\n- `/skill:manim-video <your request>`\n\nExamples:\n\n- `/manim Explain gradient descent visually with geometry-first intuition`\n- `/manim Create a 4-scene algorithm animation for quicksort`\n\n## Included content\n\nThis package ports everything from the original skill directory:\n\n- `skills/manim-video/SKILL.md`\n- `skills/manim-video/README.md`\n- `skills/manim-video/references/*`\n- `skills/manim-video/scripts/setup.sh` (lightweight prerequisite checker)\n\n## Notes\n\n- The setup script only checks prerequisites and prints manual install guidance.\n- It does **not** install dependencies automatically.\n",
  "path": "/Users/mukund/dev/pi-manim-video/README.md"
}
```

## Assistant

**Result** (success):
```
Successfully wrote 349 bytes to /Users/mukund/dev/pi-manim-video/package.json
```

## Assistant

**Result** (success):
```
Successfully wrote 874 bytes to /Users/mukund/dev/pi-manim-video/extensions/manim-video/index.ts
```

## Assistant

**Result** (success):
```
Successfully wrote 952 bytes to /Users/mukund/dev/pi-manim-video/README.md
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "#!/usr/bin/env bash\nset -euo pipefail\n\nG=\"\\033[0;32m\"; R=\"\\033[0;31m\"; Y=\"\\033[1;33m\"; N=\"\\033[0m\"\nok() { echo -e \"  ${G}+${N} $1\"; }\nwarn() { echo -e \"  ${Y}!${N} $1\"; }\nfail() { echo -e \"  ${R}x${N} $1\"; }\n\necho \"\"\necho \"Manim Video Skill — Lightweight Prerequisite Check\"\necho \"\"\n\nerrors=0\n\nif command -v uv >/dev/null 2>&1; then\n  ok \"uv $(uv --version 2>/dev/null | awk '{print $2}')\"\nelse\n  fail \"uv not found\"\n  warn \"Install manually: https://docs.astral.sh/uv/getting-started/installation/\"\n  errors=$((errors + 1))\nfi\n\nif uv run --no-project python --version >/tmp/pi_manim_python_check.txt 2>&1; then\n  ok \"$(cat /tmp/pi_manim_python_check.txt | head -n 1)\"\nelse\n  fail \"Python via uv is unavailable\"\n  warn \"Install Python 3.10+ and make sure 'uv run --no-project python --version' succeeds\"\n  errors=$((errors + 1))\nfi\n\nif uv run --no-project manim --version >/tmp/pi_manim_version_check.txt 2>&1; then\n  ok \"Manim $(cat /tmp/pi_manim_version_check.txt | head -n 1)\"\nelse\n  fail \"Manim executable not found via uv\"\n  warn \"Install manually (pick one):\"\n  warn \"  - uv tool install manim\"\n  warn \"  - uv pip install manim\"\n  errors=$((errors + 1))\nfi\n\nif command -v pdflatex >/dev/null 2>&1; then\n  ok \"LaTeX (pdflatex)\"\nelse\n  fail \"LaTeX (pdflatex) not found\"\n  warn \"Install manually: texlive (Linux) or mactex/mactex-no-gui (macOS)\"\n  errors=$((errors + 1))\nfi\n\nif command -v ffmpeg >/dev/null 2>&1; then\n  ok \"ffmpeg\"\nelse\n  fail \"ffmpeg not found\"\n  warn \"Install manually: https://ffmpeg.org/download.html\"\n  errors=$((errors + 1))\nfi\n\necho \"\"\nif [ \"$errors\" -eq 0 ]; then\n  echo -e \"${G}All prerequisites satisfied.${N}\"\nelse\n  echo -e \"${R}${errors} prerequisite(s) missing.${N}\"\n  echo -e \"${Y}No packages were installed automatically.${N}\"\nfi\n\necho \"\"\nrm -f /tmp/pi_manim_python_check.txt /tmp/pi_manim_version_check.txt\n",
      "oldText": "#!/usr/bin/env bash\nset -euo pipefail\nG=\"\\033[0;32m\"; R=\"\\033[0;31m\"; N=\"\\033[0m\"\nok() { echo -e \"  ${G}+${N} $1\"; }\nfail() { echo -e \"  ${R}x${N} $1\"; }\necho \"\"; echo \"Manim Video Skill — Setup Check\"; echo \"\"\nerrors=0\ncommand -v python3 &>/dev/null && ok \"Python $(python3 --version 2>&1 | awk '{print $2}')\" || { fail \"Python 3 not found\"; errors=$((errors+1)); }\npython3 -c \"import manim\" 2>/dev/null && ok \"Manim $(manim --version 2>&1 | head -1)\" || { fail \"Manim not installed: pip install manim\"; errors=$((errors+1)); }\ncommand -v pdflatex &>/dev/null && ok \"LaTeX (pdflatex)\" || { fail \"LaTeX not found (macOS: brew install --cask mactex-no-gui)\"; errors=$((errors+1)); }\ncommand -v ffmpeg &>/dev/null && ok \"ffmpeg\" || { fail \"ffmpeg not found\"; errors=$((errors+1)); }\necho \"\"\n[ $errors -eq 0 ] && echo -e \"${G}All prerequisites satisfied.${N}\" || echo -e \"${R}$errors prerequisite(s) missing.${N}\"\necho \"\"\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/scripts/setup.sh"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/scripts/setup.sh.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "Run `scripts/setup.sh` to verify all dependencies. This lightweight checker uses `uv` to validate Python + Manim executables, then checks LaTeX (`pdflatex`) and ffmpeg. It never auto-installs anything; it only prints manual install guidance when missing. Reference docs tested against Manim CE v0.20.1.",
      "oldText": "Run `scripts/setup.sh` to verify all dependencies. Requires: Python 3.10+, Manim Community Edition v0.20+ (`pip install manim`), LaTeX (`texlive-full` on Linux, `mactex` on macOS), and ffmpeg. Reference docs tested against Manim CE v0.20.1."
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "Python 3.10+, uv, Manim CE executable, LaTeX, ffmpeg.\n\n```bash\nbash skills/manim-video/scripts/setup.sh\n```\n\nThe setup script is check-only and prints manual install guidance if something is missing.",
      "oldText": "Python 3.10+, Manim CE (`pip install manim`), LaTeX, ffmpeg.\n\n```bash\nbash skills/creative/manim-video/scripts/setup.sh\n```"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/README.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/README.md.
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && rg -n \"skills/creative/manim-video|pip install manim\" -S"
}
```

## Assistant

**Result** (success):
```
skills/manim-video/scripts/setup.sh:37:  warn "  - uv pip install manim"
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "stat -f '%Sp %N' ~/dev/pi-manim-video/skills/manim-video/scripts/setup.sh"
}
```

## Assistant

**Result** (success):
```
-rwxr-xr-x /Users/mukund/dev/pi-manim-video/skills/manim-video/scripts/setup.sh
```

## Assistant

### Tool: write

**Args:**
```json
{
  "content": "node_modules/\n.DS_Store\n",
  "path": "/Users/mukund/dev/pi-manim-video/.gitignore"
}
```

## Assistant

**Result** (success):
```
Successfully wrote 24 bytes to /Users/mukund/dev/pi-manim-video/.gitignore
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git status --short"
}
```

## Assistant

**Result** (success):
```
?? .gitignore
?? README.md
?? extensions/
?? package.json
?? skills/
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && find . -maxdepth 4 -type f | sort"
}
```

## Assistant

**Result** (success):
```
./.git/config
./.git/description
./.git/HEAD
./.git/hooks/applypatch-msg.sample
./.git/hooks/commit-msg.sample
./.git/hooks/fsmonitor-watchman.sample
./.git/hooks/post-update.sample
./.git/hooks/pre-applypatch.sample
./.git/hooks/pre-commit.sample
./.git/hooks/pre-merge-commit.sample
./.git/hooks/pre-push.sample
./.git/hooks/pre-rebase.sample
./.git/hooks/pre-receive.sample
./.git/hooks/prepare-commit-msg.sample
./.git/hooks/push-to-checkout.sample
./.git/hooks/update.sample
./.git/info/exclude
./.gitignore
./extensions/manim-video/index.ts
./package.json
./README.md
./skills/manim-video/README.md
./skills/manim-video/references/animation-design-thinking.md
./skills/manim-video/references/animations.md
./skills/manim-video/references/camera-and-3d.md
./skills/manim-video/references/decorations.md
./skills/manim-video/references/equations.md
./skills/manim-video/references/graphs-and-data.md
./skills/manim-video/references/mobjects.md
./skills/manim-video/references/paper-explainer.md
./skills/manim-video/references/production-quality.md
./skills/manim-video/references/rendering.md
./skills/manim-video/references/scene-planning.md
./skills/manim-video/references/troubleshooting.md
./skills/manim-video/references/updaters-and-trackers.md
./skills/manim-video/references/visual-design.md
./skills/manim-video/scripts/setup.sh
./skills/manim-video/SKILL.md
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git add . && git commit -m \"feat: port manim-video skill into pi extension with /manim command\" && git push -u origin HEAD",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
[master (root-commit) 9542d3b] feat: port manim-video skill into pi extension with /manim command
 21 files changed, 3201 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 README.md
 create mode 100644 extensions/manim-video/index.ts
 create mode 100644 package.json
 create mode 100644 skills/manim-video/README.md
 create mode 100644 skills/manim-video/SKILL.md
 create mode 100644 skills/manim-video/references/animation-design-thinking.md
 create mode 100644 skills/manim-video/references/animations.md
 create mode 100644 skills/manim-video/references/camera-and-3d.md
 create mode 100644 skills/manim-video/references/decorations.md
 create mode 100644 skills/manim-video/references/equations.md
 create mode 100644 skills/manim-video/references/graphs-and-data.md
 create mode 100644 skills/manim-video/references/mobjects.md
 create mode 100644 skills/manim-video/references/paper-explainer.md
 create mode 100644 skills/manim-video/references/production-quality.md
 create mode 100644 skills/manim-video/references/rendering.md
 create mode 100644 skills/manim-video/references/scene-planning.md
 create mode 100644 skills/manim-video/references/troubleshooting.md
 create mode 100644 skills/manim-video/references/updaters-and-trackers.md
 create mode 100644 skills/manim-video/references/visual-design.md
 create mode 100755 skills/manim-video/scripts/setup.sh
To github.com:maujim/pi-manim-video.git
 * [new branch]      HEAD -> master
branch 'master' set up to track 'origin/master'.
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pi install git:[REDACTED]:maujim/pi-manim-video",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
Installing git:[REDACTED]:maujim/pi-manim-video...
Cloning into '/Users/mukund/.pi/agent/git/github.com/maujim/pi-manim-video'...
npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead

added 258 packages, and audited 259 packages in 11s

34 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Installed git:[REDACTED]:maujim/pi-manim-video
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pi update git:[REDACTED]:maujim/pi-manim-video",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
Updating git:[REDACTED]:maujim/pi-manim-video...
Updated git:[REDACTED]:maujim/pi-manim-video
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pi list | rg -n \"pi-manim-video|[REDACTED]:maujim/pi-manim-video\" -n -S"
}
```

## Assistant

**Result** (success):
```
10:  git:[REDACTED]:maujim/pi-manim-video
11:    /Users/mukund/.pi/agent/git/github.com/maujim/pi-manim-video
```

## Assistant

Done — I fully ported `skills/creative/manim-video/` into a new pi package/extension repo and wired `/manim`.

### ✅ What I created

**Repo:** `~/dev/pi-manim-video`  
**Remote:** `[REDACTED]:maujim/pi-manim-video.git` (created via `pnpm dlx gh-axi repo create ... --clone`)  
**Pushed commit:** `9542d3b`

### ✅ What’s inside

- `extensions/manim-video/index.ts`
  - Registers `/manim`
  - Registers `resources_discover` to load bundled skill:
    - `skills/manim-video/SKILL.md`
- Full port of original skill directory:
  - `skills/manim-video/SKILL.md`
  - `skills/manim-video/README.md`
  - `skills/manim-video/references/*` (all files)
  - `skills/manim-video/scripts/setup.sh`

### ✅ Lightweight setup script changes

I replaced the old setup behavior with a lighter checker that:

- Uses `uv` to check for Python and the Manim executable
- Checks `pdflatex` and `ffmpeg`
- **Does not auto-install anything**
- Prints manual install guidance only
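
For concreteness, a minimal sketch of that check-only shape (the `check` helper here is hypothetical; the real `setup.sh` probes via `uv run --no-project python --version` and similar commands):

```shell
# Hypothetical check-only prerequisite probe in the spirit of setup.sh:
# report what is missing, never install anything.
check() {
  name="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok: $name"
  else
    echo "missing: $name (install manually)"
  fi
}

check "uv" uv --version
check "manim (via uv)" uv run --no-project manim --version
check "pdflatex" pdflatex --version
check "ffmpeg" ffmpeg -version
```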

### ✅ Installed + updated in pi

Executed successfully:

```bash
pi install git:[REDACTED]:maujim/pi-manim-video
pi update git:[REDACTED]:maujim/pi-manim-video
```

And confirmed via `pi list`.

---

If you want, I can now add a `/manim-init` helper command that scaffolds a fresh project folder (`plan.md`, `script.py`, `concat.txt`) in one shot.

## User

out of curiosity is it possible to make it so this skill is dispatched with a specific model from a specific provider??
Let's say in the future I train a model that is fine-tuned to be really good at generating videos like this.

## Assistant

Yes — absolutely possible.

In pi, skills themselves are just prompt content, but your **extension command** (`/manim`) can control model selection before dispatching `/skill:manim-video`.

### Practical pattern

In `/manim` handler:

1. Look up your target model (provider + model id)
2. Call `await pi.setModel(targetModel)`
3. Send `pi.sendUserMessage("/skill:manim-video ...")`

Optionally:
- Save previous model and restore it after the run (on `agent_end`), or
- Keep the session pinned to your video model until user changes it.

### Notes

- This is extension-level behavior, not SKILL.md-native behavior.
- Works great once your finetuned provider/model is added (via built-in model registry or `pi.registerProvider()`).
- You can also make this configurable (`/manim-model set provider/model-id`).

If you want, I can patch your `pi-manim-video` extension to do exactly this with a default + optional auto-restore toggle.
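
To make the pattern concrete, a minimal sketch (the `PiApi` shape, `getModel`, and `runManim` are hypothetical illustrations; only `setModel` and `sendUserMessage` mirror the calls described above):

```typescript
// Hypothetical sketch: pin a dedicated video model for /manim, dispatch
// the skill, then restore whatever model was active before.
interface ModelRef {
  provider: string;
  modelId: string;
}

// Pure helper: split "provider/model-id" into its parts.
export function parseModelRef(ref: string): ModelRef {
  const slash = ref.indexOf("/");
  if (slash <= 0 || slash === ref.length - 1) {
    throw new Error(`invalid model ref: ${ref}`);
  }
  return { provider: ref.slice(0, slash), modelId: ref.slice(slash + 1) };
}

// Assumed extension API surface; setModel/sendUserMessage follow the
// calls described above, getModel is an assumption needed for restore.
export interface PiApi {
  getModel(): Promise<string>;
  setModel(ref: string): Promise<void>;
  sendUserMessage(text: string): Promise<void>;
}

export async function runManim(
  pi: PiApi,
  request: string,
  videoModel: string,
): Promise<void> {
  parseModelRef(videoModel); // fail fast on a malformed ref
  const previous = await pi.getModel();
  await pi.setModel(videoModel);
  try {
    await pi.sendUserMessage(`/skill:manim-video ${request}`);
  } finally {
    await pi.setModel(previous); // restore even if dispatch throws
  }
}
```

The `finally` block is the auto-restore variant: the previous model comes back even if the skill dispatch throws.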

## User

can we add into this manim extension that it was created by porting from the hermes skill
and what commit this was done against?
and also link to the specific subdirs we used? This way we can update if the skill gets updated yk

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd /Users/mukund/src/tries/2026-03-28-hermes-agent && git rev-parse HEAD && git remote -v | head -n 2"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-manim-video/README.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 40,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

**Result** (success):
```
e902e55b26aab4658debab070fc1048b22517158
origin	https://github.com/NousResearch/hermes-agent.git (fetch)
origin	https://github.com/NousResearch/hermes-agent.git (push)
```

## Assistant

**Result** (success):
```
# pi-manim-video

Pi extension that ports the full `manim-video` skill (planning guides, references, and scripts) and adds a shortcut command: `/manim`.

## Install

```bash
pi install git:[REDACTED]:maujim/pi-manim-video
```

## Update

```bash
pi update git:[REDACTED]:maujim/pi-manim-video
```

## Usage

Use either:

- `/manim <your request>`
- `/skill:manim-video <your request>`

Examples:

- `/manim Explain gradient descent visually with geometry-first intuition`
- `/manim Create a 4-scene algorithm animation for quicksort`

## Included content

This package ports everything from the original skill directory:

- `skills/manim-video/SKILL.md`
- `skills/manim-video/README.md`
- `skills/manim-video/references/*`
- `skills/manim-video/scripts/setup.sh` (lightweight prerequisite checker)

## Notes

- The setup script only checks prerequisites and prints manual install guidance.
- It does **not** install dependencies automatically.
```

## Assistant

**Result** (success):
```
---
name: manim-video
description: "Production pipeline for mathematical and technical animations using Manim Community Edition. Creates 3Blue1Brown-style explainer videos, algorithm visualizations, equation derivations, architecture diagrams, and data stories. Use when users request: animated explanations, math animations, concept visualizations, algorithm walkthroughs, technical explainers, 3Blue1Brown style videos, or any programmatic animation with geometric/mathematical content."
version: 1.0.0
---

# Manim Video Production Pipeline

## Creative Standard

This is educational cinema. Every frame teaches. Every animation reveals structure.

**Before writing a single line of code**, articulate the narrative arc. What misconception does this correct? What is the "aha moment"? What visual story takes the viewer from confusion to understanding? The user's prompt is a starting point — interpret it with pedagogical ambition.

**Geometry before algebra.** Show the shape first, the equation second. Visual memory encodes faster than symbolic memory. When the viewer sees the geometric pattern before the formula, the equation feels earned.

**First-render excellence is non-negotiable.** The output must be visually clear and aesthetically cohesive without revision rounds. If something looks cluttered, poorly timed, or like "AI-generated slides," it is wrong.

**Opacity layering directs attention.** Never show everything at full brightness. Primary elements at 1.0, contextual elements at 0.4, structural elements (axes, grids) at 0.15. The brain processes visual salience in layers.

**Breathing room.** Every animation needs `self.wait()` after it. The viewer needs time to absorb what just appeared. Never rush from one animation to the next. A 2-second pause after a key reveal is never wasted.

**Cohesive visual language.** All scenes share a color palette, consistent typography sizing, matching animation speeds. A technically correct video where every scene uses random different colors is an aesthetic failure.

## Prerequisites

Run `scripts/setup.sh` to verify all dependencies. This lightweight checker uses `uv` to validate Python + Manim executables, then checks LaTeX (`pdflatex`) and ffmpeg. It never auto-installs anything; it only prints manual install guidance when missing. Reference docs tested against Manim CE v0.20.1.

## Modes

| Mode | Input | Output | Reference |
|------|-------|--------|-----------|
| **Concept explainer** | Topic/concept | Animated explanation with geometric intuition | `references/scene-planning.md` |
| **Equation derivation** | Math expressions | Step-by-step animated proof | `references/equations.md` |
| **Algorithm visualization** | Algorithm description | Step-by-step execution with data structures | `references/graphs-and-data.md` |
| **Data story** | Data/metrics | Animated charts, comparisons, counters | `references/graphs-and-data.md` |
| **Architecture diagram** | System description | Components building up with connections | `references/mobjects.md` |
| **Paper explainer** | Research paper | Key findings and methods animated | `references/scene-planning.md` |
| **3D visualization** | 3D concept | Rotating surfaces, parametric curves, spatial geometry | `references/camera-and-3d.md` |


[225 more lines in file. Use offset=41 to continue.]
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "## Upstream provenance (Hermes skill port)\n\nThis package was originally ported from the Hermes Agent skill:\n\n- Upstream repo: `https://github.com/NousResearch/hermes-agent`\n- Upstream commit used for port: `e902e55b26aab4658debab070fc1048b22517158`\n- Ported from directory: `skills/creative/manim-video/`\n\nDirect upstream links at that pinned commit:\n\n- Skill root: `https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video`\n- References: `https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/references`\n- Scripts: `https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/scripts`\n\nSee `UPSTREAM.md` for sync/update notes.\n\n## Notes\n\n- The setup script only checks prerequisites and prints manual install guidance.\n- It does **not** install dependencies automatically.\n",
      "oldText": "## Notes\n\n- The setup script only checks prerequisites and prints manual install guidance.\n- It does **not** install dependencies automatically.\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/README.md"
}
```

## Assistant

### Tool: write

**Args:**
```json
{
  "content": "# Upstream Source Tracking\n\nThis package was ported from Hermes Agent's `manim-video` skill.\n\n## Upstream snapshot\n\n- Repository: `https://github.com/NousResearch/hermes-agent`\n- Commit: `e902e55b26aab4658debab070fc1048b22517158`\n- Skill root used: `skills/creative/manim-video/`\n\nPinned links:\n\n- Skill root: https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video\n- `SKILL.md`: https://github.com/NousResearch/hermes-agent/blob/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/SKILL.md\n- `README.md`: https://github.com/NousResearch/hermes-agent/blob/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/README.md\n- `references/`: https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/references\n- `scripts/`: https://github.com/NousResearch/hermes-agent/tree/e902e55b26aab4658debab070fc1048b22517158/skills/creative/manim-video/scripts\n\n## Local mapping\n\n- Upstream `skills/creative/manim-video/SKILL.md` -> local `skills/manim-video/SKILL.md`\n- Upstream `skills/creative/manim-video/README.md` -> local `skills/manim-video/README.md`\n- Upstream `skills/creative/manim-video/references/*` -> local `skills/manim-video/references/*`\n- Upstream `skills/creative/manim-video/scripts/setup.sh` -> local `skills/manim-video/scripts/setup.sh` (adapted to lightweight `uv` checks + manual install guidance)\n\n## Updating from upstream later\n\n1. Compare upstream changes under `skills/creative/manim-video/` against the pinned commit.\n2. Copy updates into `skills/manim-video/` here.\n3. Re-apply local setup script policy (no auto-installs; check-only behavior).\n4. Update this file with the new upstream commit hash.\n5. Run `pi update git:[REDACTED]:maujim/pi-manim-video` after pushing.\n",
  "path": "/Users/mukund/dev/pi-manim-video/UPSTREAM.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/README.md.
```

## Assistant

**Result** (success):
```
Successfully wrote 1851 bytes to /Users/mukund/dev/pi-manim-video/UPSTREAM.md
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git add README.md UPSTREAM.md && git commit -m \"docs: add Hermes upstream provenance and pinned source commit\" && git push",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
[master 22cd475] docs: add Hermes upstream provenance and pinned source commit
 2 files changed, 48 insertions(+)
 create mode 100644 UPSTREAM.md
To github.com:maujim/pi-manim-video.git
   9542d3b..22cd475  master -> master
```

## Assistant

Yep — done ✅

I added explicit provenance tracking to `~/dev/pi-manim-video` and pushed it.

### Added

- **`README.md`** now includes:
  - that this was ported from Hermes
  - exact upstream repo + commit:
    - `e902e55b26aab4658debab070fc1048b22517158`
  - pinned links to:
    - skill root
    - `references/`
    - `scripts/`

- **`UPSTREAM.md`** (new file) with:
  - upstream snapshot metadata
  - local path mapping (upstream → this package)
  - update procedure for future syncs
  - note that `setup.sh` is intentionally adapted (lightweight, no auto-install)

### Pushed

- Commit: `22cd475`
- Repo: `[REDACTED]:maujim/pi-manim-video.git`

If you want, I can also add a tiny `/manim-upstream` command that prints this provenance info in-chat so you can quickly inspect source/commit without opening files.

## User

commit and push this. Does this skill ask the agent to always run commands inside a fresh uv env, or to use the user's env, or to at least do something like `uvx manim`

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git status --short && rg -n \"uv|uvx|manim -|scripts/setup.sh|pip install|python3|venv\" skills/manim-video/SKILL.md skills/manim-video/README.md skills/manim-video/references -S"
}
```

## Assistant

**Result** (success):
```
skills/manim-video/README.md:19:Python 3.10+, uv, Manim CE executable, LaTeX, ffmpeg.
skills/manim-video/README.md:22:bash skills/manim-video/scripts/setup.sh
skills/manim-video/SKILL.md:27:Run `scripts/setup.sh` to verify all dependencies. This lightweight checker uses `uv` to validate Python + Manim executables, then checks LaTeX (`pdflatex`) and ffmpeg. It never auto-installs anything; it only prints manual install guidance when missing. Reference docs tested against Manim CE v0.20.1.
skills/manim-video/SKILL.md:60:3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production
skills/manim-video/SKILL.md:169:manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft
skills/manim-video/SKILL.md:170:manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production
skills/manim-video/SKILL.md:186:manim -ql --format=png -s script.py Scene2_CoreConcept  # preview still
skills/manim-video/references/rendering.md:6:manim --version       # Manim CE
skills/manim-video/references/rendering.md:14:manim -ql script.py Scene1 Scene2    # draft (480p 15fps)
skills/manim-video/references/rendering.md:15:manim -qm script.py Scene1           # medium (720p 30fps)
skills/manim-video/references/rendering.md:16:manim -qh script.py Scene1           # production (1080p 60fps)
skills/manim-video/references/rendering.md:17:manim -ql --format=png -s script.py Scene1  # preview still (last frame)
skills/manim-video/references/rendering.md:18:manim -ql --format=gif script.py Scene1     # GIF output
skills/manim-video/references/rendering.md:81:manim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical
skills/manim-video/references/rendering.md:82:manim -ql --resolution 1080,1080 script.py Scene  # 1:1 square
skills/manim-video/references/rendering.md:131:Render individual sections: `manim --save_sections script.py LongVideo`
skills/manim-video/references/rendering.md:141:pip install "manim-voiceover[elevenlabs]"
skills/manim-video/references/rendering.md:143:pip install "manim-voiceover[gtts]"    # Google TTS (free, lower quality)
skills/manim-video/references/rendering.md:144:pip install "manim-voiceover[azure]"   # Azure Cognitive Services
skills/manim-video/references/production-quality.md:167:Before running `manim -qh`:
skills/manim-video/references/troubleshooting.md:113:**Stale output**: `manim -ql --disable_caching script.py Scene`
skills/manim-video/references/troubleshooting.md:131:1. Render a still: `manim -ql -s script.py Scene` -- instant layout check
```

## Assistant

Committed and pushed already ✅  
Latest pushed commit is:

- `22cd475` on `master`  
- repo: `[REDACTED]:maujim/pi-manim-video.git`

And right now: **no uncommitted changes**.

---

On your question: **currently this skill does NOT enforce fresh `uv` env / `uvx manim`.**

What it does today:

- setup checker uses `uv` **only to verify availability**
- skill instructions still render with plain commands like:
  - `manim -ql ...`
  - `manim -qh ...`
- references still include some `pip install ...` snippets for optional voiceover tooling

So execution is effectively in the **user’s existing env / PATH**, unless the model decides otherwise.

If you want, I can patch it so the skill consistently prefers one strategy, e.g.:

1. **Always `uvx manim ...`** (tool-style ephemeral execution), or  
2. **Always `uv run --no-project manim ...`**, or  
3. **Project-local venv workflow** with explicit `uv venv && uv pip install ...` instructions.
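
One way to encode that preference, as a sketch (the `manim_prefix` helper and its pyproject/uv.lock heuristic are hypothetical):

```shell
# Hypothetical: pick the Manim invocation based on whether the target
# directory looks like a mature uv project or a one-off render dir.
manim_prefix() {
  dir="$1"
  if [ -f "$dir/pyproject.toml" ] || [ -f "$dir/uv.lock" ]; then
    echo "uv run manim"   # mature codebase: project-local environment
  else
    echo "uvx manim"      # one-off render: ephemeral tool execution
  fi
}

# Usage: $(manim_prefix .) -ql script.py Scene1
```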

## User

prefer 1) when just running one-off render commands
prefer 2) and 3) in more mature codebases

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 80,
  "offset": 20,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 220,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md"
}
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && rg -n \"manim -|manim --|pip install \\\"manim-voiceover|uv run|uvx\" skills/manim-video -S"
}
```

## Assistant

**Result** (success):
```
**Breathing room.** Every animation needs `self.wait()` after it. The viewer needs time to absorb what just appeared. Never rush from one animation to the next. A 2-second pause after a key reveal is never wasted.

**Cohesive visual language.** All scenes share a color palette, consistent typography sizing, matching animation speeds. A technically correct video where every scene uses random different colors is an aesthetic failure.

## Prerequisites

Run `scripts/setup.sh` to verify all dependencies. This lightweight checker uses `uv` to validate Python + Manim executables, then checks LaTeX (`pdflatex`) and ffmpeg. It never auto-installs anything; it only prints manual install guidance when missing. Reference docs tested against Manim CE v0.20.1.

## Modes

| Mode | Input | Output | Reference |
|------|-------|--------|-----------|
| **Concept explainer** | Topic/concept | Animated explanation with geometric intuition | `references/scene-planning.md` |
| **Equation derivation** | Math expressions | Step-by-step animated proof | `references/equations.md` |
| **Algorithm visualization** | Algorithm description | Step-by-step execution with data structures | `references/graphs-and-data.md` |
| **Data story** | Data/metrics | Animated charts, comparisons, counters | `references/graphs-and-data.md` |
| **Architecture diagram** | System description | Components building up with connections | `references/mobjects.md` |
| **Paper explainer** | Research paper | Key findings and methods animated | `references/scene-planning.md` |
| **3D visualization** | 3D concept | Rotating surfaces, parametric curves, spatial geometry | `references/camera-and-3d.md` |

## Stack

Single Python script per project. No browser, no Node.js, no GPU required.

| Layer | Tool | Purpose |
|-------|------|---------|
| Core | Manim Community Edition | Scene rendering, animation engine |
| Math | LaTeX (texlive/MiKTeX) | Equation rendering via `MathTex` |
| Video I/O | ffmpeg | Scene stitching, format conversion, audio muxing |
| TTS | ElevenLabs / Qwen3-TTS (optional) | Narration voiceover |

## Pipeline

```
PLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW
```

1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script
2. **CODE** — Write `script.py` with one class per scene, each independently renderable
3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production
4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`
5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`
6. **REVIEW** — Render preview stills, verify against plan, adjust

## Project Structure

```
project-name/
  plan.md                # Narrative arc, scene breakdown
  script.py              # All scenes in one file
  concat.txt             # ffmpeg scene list
  final.mp4              # Stitched output
  media/                 # Auto-generated by Manim
    videos/script/480p15/
```

## Creative Direction

### Color Palettes

| Palette | Background | Primary | Secondary | Accent | Use case |
|---------|-----------|---------|-----------|--------|----------|
| **Classic 3B1B** | `#1C1C1C` | `#58C4DD` (BLUE) | `#83C167` (GREEN) | `#FFFF00` (YELLOW) | General math/CS |
| **Warm academic** | `#2D2B55` | `#FF6B6B` | `#FFD93D` | `#6BCB77` | Approachable |
| **Neon tech** | `#0A0A0A` | `#00F5FF` | `#FF00FF` | `#39FF14` | Systems, architecture |
| **Monochrome** | `#1A1A2E` | `#EAEAEA` | `#888888` | `#FFFFFF` | Minimalist |

### Animation Speed

| Context | run_time | self.wait() after |
|---------|----------|-------------------|
| Title/intro appear | 1.5s | 1.0s |
| Key equation reveal | 2.0s | 2.0s |
| Transform/morph | 1.5s | 1.5s |
| Supporting label | 0.8s | 0.5s |
| FadeOut cleanup | 0.5s | 0.3s |
| "Aha moment" reveal | 2.5s | 3.0s |

### Typography Scale

[166 more lines in file. Use offset=100 to continue.]
```

## Assistant

**Result** (success):
```
# Rendering Reference

## Prerequisites

```bash
manim --version       # Manim CE
pdflatex --version    # LaTeX
ffmpeg -version       # ffmpeg
```

## CLI Reference

```bash
manim -ql script.py Scene1 Scene2    # draft (480p 15fps)
manim -qm script.py Scene1           # medium (720p 30fps)
manim -qh script.py Scene1           # production (1080p 60fps)
manim -ql --format=png -s script.py Scene1  # preview still (last frame)
manim -ql --format=gif script.py Scene1     # GIF output
```

## Quality Presets

| Flag | Resolution | FPS | Use case |
|------|-----------|-----|----------|
| `-ql` | 854x480 | 15 | Draft iteration (layout, timing) |
| `-qm` | 1280x720 | 30 | Preview (use for text-heavy scenes) |
| `-qh` | 1920x1080 | 60 | Production |

**Text rendering quality:** `-ql` (480p15) produces noticeably poor text kerning and readability. For scenes with significant text, preview stills at `-qm` to catch issues invisible at 480p. Use `-ql` only for testing layout and animation timing.

## Output Structure

```
media/videos/script/480p15/Scene1_Intro.mp4
media/images/script/Scene1_Intro.png  (from -s flag)
```

## Stitching with ffmpeg

```bash
cat > concat.txt << 'EOF'
file 'media/videos/script/480p15/Scene1_Intro.mp4'
file 'media/videos/script/480p15/Scene2_Core.mp4'
EOF
ffmpeg -y -f concat -safe 0 -i concat.txt -c copy final.mp4
```

## Add Voiceover

```bash
# Mux narration
ffmpeg -y -i final.mp4 -i narration.mp3 -c:v copy -c:a aac -b:a 192k -shortest final_narrated.mp4

# Concat per-scene audio first
cat > audio_concat.txt << 'EOF'
file 'audio/scene1.mp3'
file 'audio/scene2.mp3'
EOF
ffmpeg -y -f concat -safe 0 -i audio_concat.txt -c copy full_narration.mp3
```

## Add Background Music

```bash
ffmpeg -y -i final.mp4 -i music.mp3 \
  -filter_complex "[1:a]volume=0.15[bg];[0:a][bg]amix=inputs=2:duration=shortest" \
  -c:v copy final_with_music.mp4
```

## GIF Export

```bash
ffmpeg -y -i scene.mp4 \
  -vf "fps=15,scale=640:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  output.gif
```

## Aspect Ratios

```bash
manim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical
manim -ql --resolution 1080,1080 script.py Scene  # 1:1 square
```

## Render Workflow

1. Draft render all scenes at `-ql`
2. Preview stills at key moments (`-s`)
3. Fix and re-render only broken scenes
4. Stitch with ffmpeg
5. Review stitched output
6. Production render at `-qh`
7. Re-stitch + add audio

## manim.cfg — Project Configuration

Create `manim.cfg` in the project directory for per-project defaults:

```ini
[CLI]
quality = low_quality
preview = True
media_dir = ./media

[renderer]
background_color = #0D1117

[tex]
tex_template_file = custom_template.tex
```

This eliminates repetitive CLI flags and `self.camera.background_color` in every scene.

## Sections — Chapter Markers

Mark sections within a scene for organized output:

```python
class LongVideo(Scene):
    def construct(self):
        self.next_section("Introduction")
        # ... intro content ...

        self.next_section("Main Concept")
        # ... main content ...

        self.next_section("Conclusion")
        # ... closing ...
```

Render individual sections: `manim --save_sections script.py LongVideo`
This outputs separate video files per section — useful for long videos where you want to re-render only one part.

## manim-voiceover Plugin (Recommended for Narrated Videos)

The official `manim-voiceover` plugin integrates TTS directly into scene code, auto-syncing animation duration to voiceover length. This is significantly cleaner than the manual ffmpeg muxing approach above.

### Installation

```bash
pip install "manim-voiceover[elevenlabs]"
# Or for free/local TTS:
pip install "manim-voiceover[gtts]"    # Google TTS (free, lower quality)
pip install "manim-voiceover[azure]"   # Azure Cognitive Services
```

### Usage

```python
from manim import *
from manim_voiceover import VoiceoverScene
from manim_voiceover.services.elevenlabs import ElevenLabsService

class NarratedScene(VoiceoverScene):
    def construct(self):
        self.set_speech_service(ElevenLabsService(
            voice_name="Alice",
            model_id="eleven_multilingual_v2"
        ))

        # Voiceover auto-controls scene duration
        with self.voiceover(text="Here is a circle being drawn.") as tracker:
            self.play(Create(Circle()), run_time=tracker.duration)

        with self.voiceover(text="Now let's transform it into a square.") as tracker:
            self.play(Transform(circle, Square()), run_time=tracker.duration)
```

### Key Features

- `tracker.duration` — total voiceover duration in seconds
- `tracker.time_until_bookmark("mark1")` — sync specific animations to specific words
- Auto-generates subtitle `.srt` files
- Caches audio locally — re-renders don't re-generate TTS
- Works with: ElevenLabs, Azure, Google TTS, pyttsx3 (offline), and custom services

### Bookmarks for Precise Sync

```python
with self.voiceover(text='This is a <bookmark mark="circle"/>circle.') as tracker:
    self.wait_until_bookmark("circle")
    self.play(Create(Circle()), run_time=tracker.time_until_bookmark("circle", limit=1))
```

This is the recommended approach for any video with narration. The manual ffmpeg muxing workflow above is still useful for adding background music or post-production audio mixing.
```

## Assistant

**Result** (success):
```
skills/manim-video/references/troubleshooting.md:113:**Stale output**: `manim -ql --disable_caching script.py Scene`
skills/manim-video/references/troubleshooting.md:131:1. Render a still: `manim -ql -s script.py Scene` -- instant layout check
skills/manim-video/SKILL.md:60:3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production
skills/manim-video/SKILL.md:169:manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft
skills/manim-video/SKILL.md:170:manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production
skills/manim-video/SKILL.md:186:manim -ql --format=png -s script.py Scene2_CoreConcept  # preview still
skills/manim-video/scripts/setup.sh:23:if uv run --no-project python --version >/tmp/pi_manim_python_check.txt 2>&1; then
skills/manim-video/scripts/setup.sh:27:  warn "Install Python 3.10+ and make sure 'uv run --no-project python --version' succeeds"
skills/manim-video/scripts/setup.sh:31:if uv run --no-project manim --version >/tmp/pi_manim_version_check.txt 2>&1; then
skills/manim-video/references/production-quality.md:167:Before running `manim -qh`:
skills/manim-video/references/rendering.md:6:manim --version       # Manim CE
skills/manim-video/references/rendering.md:14:manim -ql script.py Scene1 Scene2    # draft (480p 15fps)
skills/manim-video/references/rendering.md:15:manim -qm script.py Scene1           # medium (720p 30fps)
skills/manim-video/references/rendering.md:16:manim -qh script.py Scene1           # production (1080p 60fps)
skills/manim-video/references/rendering.md:17:manim -ql --format=png -s script.py Scene1  # preview still (last frame)
skills/manim-video/references/rendering.md:18:manim -ql --format=gif script.py Scene1     # GIF output
skills/manim-video/references/rendering.md:81:manim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical
skills/manim-video/references/rendering.md:82:manim -ql --resolution 1080,1080 script.py Scene  # 1:1 square
skills/manim-video/references/rendering.md:131:Render individual sections: `manim --save_sections script.py LongVideo`
skills/manim-video/references/rendering.md:141:pip install "manim-voiceover[elevenlabs]"
skills/manim-video/references/rendering.md:143:pip install "manim-voiceover[gtts]"    # Google TTS (free, lower quality)
skills/manim-video/references/rendering.md:144:pip install "manim-voiceover[azure]"   # Azure Cognitive Services
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "## Execution Mode Policy\n\nUse this command policy when invoking Manim:\n\n- **One-off renders / quick tests** → prefer `uvx manim ...`\n- **Mature project with stable dependencies** → prefer `uv run --no-project manim ...`\n- **Mature codebase needing reproducibility** → prefer project-local env (`uv venv` + pinned deps) and run through that env (`uv run ...`)\n\nDefault to user environment assumptions unless the user asks otherwise, but prefer the policies above.\n\n## Pipeline\n\n```\nPLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW\n```\n\n1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script\n2. **CODE** — Write `script.py` with one class per scene, each independently renderable\n3. **RENDER** — Quick draft: `uvx manim -ql script.py Scene1 Scene2 ...`; mature/reproducible projects: `uv run --no-project manim ...` (or project `uv` env)\n4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`\n5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`\n6. **REVIEW** — Render preview stills, verify against plan, adjust\n",
      "oldText": "## Pipeline\n\n```\nPLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW\n```\n\n1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script\n2. **CODE** — Write `script.py` with one class per scene, each independently renderable\n3. **RENDER** — `manim -ql script.py Scene1 Scene2 ...` for draft, `-qh` for production\n4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`\n5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`\n6. **REVIEW** — Render preview stills, verify against plan, adjust\n"
    },
    {
      "newText": "```bash\nuvx manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft / one-off\nuv run --no-project manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production / mature\n```\n",
      "oldText": "```bash\nmanim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft\nmanim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production\n```\n"
    },
    {
      "newText": "```bash\nuvx manim -ql --format=png -s script.py Scene2_CoreConcept  # preview still\n```\n",
      "oldText": "```bash\nmanim -ql --format=png -s script.py Scene2_CoreConcept  # preview still\n```\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 3 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "## Prerequisites\n\n```bash\nuvx manim --version                # one-off / ephemeral invocation\nuv run --no-project manim --version  # mature workflow invocation\npdflatex --version                 # LaTeX\nffmpeg -version                    # ffmpeg\n```\n",
      "oldText": "## Prerequisites\n\n```bash\nmanim --version       # Manim CE\npdflatex --version    # LaTeX\nffmpeg -version       # ffmpeg\n```\n"
    },
    {
      "newText": "## CLI Reference\n\n```bash\n# One-off renders (preferred for quick experiments)\nuvx manim -ql script.py Scene1 Scene2                 # draft (480p 15fps)\nuvx manim -ql --format=png -s script.py Scene1        # preview still (last frame)\nuvx manim -ql --format=gif script.py Scene1           # GIF output\n\n# Mature project workflows (preferred for stable codebases)\nuv run --no-project manim -qm script.py Scene1        # medium (720p 30fps)\nuv run --no-project manim -qh script.py Scene1        # production (1080p 60fps)\n```\n",
      "oldText": "## CLI Reference\n\n```bash\nmanim -ql script.py Scene1 Scene2    # draft (480p 15fps)\nmanim -qm script.py Scene1           # medium (720p 30fps)\nmanim -qh script.py Scene1           # production (1080p 60fps)\nmanim -ql --format=png -s script.py Scene1  # preview still (last frame)\nmanim -ql --format=gif script.py Scene1     # GIF output\n```\n"
    },
    {
      "newText": "```bash\nuvx manim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical\nuvx manim -ql --resolution 1080,1080 script.py Scene  # 1:1 square\n```\n",
      "oldText": "```bash\nmanim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical\nmanim -ql --resolution 1080,1080 script.py Scene  # 1:1 square\n```\n"
    },
    {
      "newText": "Render individual sections: `uv run --no-project manim --save_sections script.py LongVideo`\n",
      "oldText": "Render individual sections: `manim --save_sections script.py LongVideo`\n"
    },
    {
      "newText": "### Installation\n\n```bash\n# one-off / current env\nuv pip install \"manim-voiceover[elevenlabs]\"\n\n# Or for free/local TTS:\nuv pip install \"manim-voiceover[gtts]\"    # Google TTS (free, lower quality)\nuv pip install \"manim-voiceover[azure]\"   # Azure Cognitive Services\n```\n",
      "oldText": "### Installation\n\n```bash\npip install \"manim-voiceover[elevenlabs]\"\n# Or for free/local TTS:\npip install \"manim-voiceover[gtts]\"    # Google TTS (free, lower quality)\npip install \"manim-voiceover[azure]\"   # Azure Cognitive Services\n```\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 5 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "**Stale output**: `uvx manim -ql --disable_caching script.py Scene`",
      "oldText": "**Stale output**: `manim -ql --disable_caching script.py Scene`"
    },
    {
      "newText": "1. Render a still: `uvx manim -ql -s script.py Scene` -- instant layout check",
      "oldText": "1. Render a still: `manim -ql -s script.py Scene` -- instant layout check"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/troubleshooting.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 2 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/references/troubleshooting.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "Before running `uv run --no-project manim -qh` (or your project-local `uv` env equivalent):",
      "oldText": "Before running `manim -qh`:"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/production-quality.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/references/production-quality.md.
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git status --short && git add skills/manim-video/SKILL.md skills/manim-video/references/rendering.md skills/manim-video/references/troubleshooting.md skills/manim-video/references/production-quality.md && git commit -m \"docs: prefer uvx for one-offs and uv run/project env for mature workflows\" && git push",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
 M skills/manim-video/SKILL.md
 M skills/manim-video/references/production-quality.md
 M skills/manim-video/references/rendering.md
 M skills/manim-video/references/troubleshooting.md
[master 798b1c8] docs: prefer uvx for one-offs and uv run/project env for mature workflows
 4 files changed, 37 insertions(+), 21 deletions(-)
To github.com:maujim/pi-manim-video.git
   22cd475..798b1c8  master -> master
```

## Assistant

Done — committed and pushed ✅

- **Commit:** `798b1c8`
- **Message:** `docs: prefer uvx for one-offs and uv run/project env for mature workflows`
- **Repo:** `[REDACTED]:maujim/pi-manim-video.git`

### Answer to your question (now)
Previously the skill didn’t enforce this clearly; now the docs explicitly guide:

- **One-off renders:** prefer `uvx manim ...`
- **Mature codebases:** prefer `uv run --no-project manim ...`
- **Reproducible mature projects:** prefer project-local `uv` env (venv + pinned deps)

I updated:
- `skills/manim-video/SKILL.md`
- `skills/manim-video/references/rendering.md`
- `skills/manim-video/references/troubleshooting.md`
- `skills/manim-video/references/production-quality.md`

If you want, I can also add a strict `/manim-mode oneoff|mature|repro` command that injects the chosen policy into the session automatically.
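For reference, the policy can be sketched as a tiny shell helper that maps a mode to the preferred invocation prefix (the name `pick_manim_cmd` is hypothetical and purely illustrative, not part of pi or the skill):

```shell
# Hypothetical helper encoding the execution mode policy.
pick_manim_cmd() {
  case "$1" in
    oneoff) echo "uvx manim" ;;                  # ephemeral env resolved by uvx
    mature) echo "uv run --no-project manim" ;;  # stable deps, skip project resolution
    repro)  echo "uv run manim" ;;               # project-local uv venv with pinned deps
    *) echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

pick_manim_cmd oneoff   # → uvx manim
```

`$(pick_manim_cmd mature) -qh script.py Scene1` would then expand to the preferred production invocation.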

## User

commit and push then update Pi

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git status --short && git log -1 --oneline"
}
```

## Assistant

**Result** (success):
```
798b1c8 docs: prefer uvx for one-offs and uv run/project env for mature workflows
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pi update git:[REDACTED]:maujim/pi-manim-video",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
Updating git:[REDACTED]:maujim/pi-manim-video...
From github.com:maujim/pi-manim-video
   9542d3b..798b1c8  master     -> origin/master
HEAD is now at 798b1c8 docs: prefer uvx for one-offs and uv run/project env for mature workflows
Removing node_modules/
Removing package-lock.json
npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead

added 258 packages, and audited 259 packages in 8s

34 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Updated git:[REDACTED]:maujim/pi-manim-video
```

## Assistant

Done ✅

- Repo was already committed/pushed at:
  - `798b1c8 docs: prefer uvx for one-offs and uv run/project env for mature workflows`
- Ran:
  - `pi update git:[REDACTED]:maujim/pi-manim-video`
- Pi updated successfully to latest remote commit.

## User

ok one more update. when the model is running the render command, teach it to pipe stdout and stderr to a file or to stream it elsewhere. I don't want all the context of the stdout video render to fill up ctx but i also wanna have errors and success to be read to keep the workflows going.

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 190,
  "offset": 45,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "limit": 220,
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md"
}
```

## Assistant

### Tool: read

**Args:**
```json
{
  "path": "/Users/mukund/dev/pi-manim-video/extensions/manim-video/index.ts"
}
```

## Assistant

**Result** (success):
```
| Layer | Tool | Purpose |
|-------|------|---------|
| Core | Manim Community Edition | Scene rendering, animation engine |
| Math | LaTeX (texlive/MiKTeX) | Equation rendering via `MathTex` |
| Video I/O | ffmpeg | Scene stitching, format conversion, audio muxing |
| TTS | ElevenLabs / Qwen3-TTS (optional) | Narration voiceover |

## Execution Mode Policy

Use this command policy when invoking Manim:

- **One-off renders / quick tests** → prefer `uvx manim ...`
- **Mature project with stable dependencies** → prefer `uv run --no-project manim ...`
- **Mature codebase needing reproducibility** → prefer project-local env (`uv venv` + pinned deps) and run through that env (`uv run ...`)

Default to user environment assumptions unless the user asks otherwise, but prefer the policies above.

## Pipeline

```
PLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW
```

1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script
2. **CODE** — Write `script.py` with one class per scene, each independently renderable
3. **RENDER** — Quick draft: `uvx manim -ql script.py Scene1 Scene2 ...`; mature/reproducible projects: `uv run --no-project manim ...` (or project `uv` env)
4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`
5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`
6. **REVIEW** — Render preview stills, verify against plan, adjust

## Project Structure

```
project-name/
  plan.md                # Narrative arc, scene breakdown
  script.py              # All scenes in one file
  concat.txt             # ffmpeg scene list
  final.mp4              # Stitched output
  media/                 # Auto-generated by Manim
    videos/script/480p15/
```

## Creative Direction

### Color Palettes

| Palette | Background | Primary | Secondary | Accent | Use case |
|---------|-----------|---------|-----------|--------|----------|
| **Classic 3B1B** | `#1C1C1C` | `#58C4DD` (BLUE) | `#83C167` (GREEN) | `#FFFF00` (YELLOW) | General math/CS |
| **Warm academic** | `#2D2B55` | `#FF6B6B` | `#FFD93D` | `#6BCB77` | Approachable |
| **Neon tech** | `#0A0A0A` | `#00F5FF` | `#FF00FF` | `#39FF14` | Systems, architecture |
| **Monochrome** | `#1A1A2E` | `#EAEAEA` | `#888888` | `#FFFFFF` | Minimalist |

### Animation Speed

| Context | run_time | self.wait() after |
|---------|----------|-------------------|
| Title/intro appear | 1.5s | 1.0s |
| Key equation reveal | 2.0s | 2.0s |
| Transform/morph | 1.5s | 1.5s |
| Supporting label | 0.8s | 0.5s |
| FadeOut cleanup | 0.5s | 0.3s |
| "Aha moment" reveal | 2.5s | 3.0s |

### Typography Scale

| Role | Font size | Usage |
|------|-----------|-------|
| Title | 48 | Scene titles, opening text |
| Heading | 36 | Section headers within a scene |
| Body | 30 | Explanatory text |
| Label | 24 | Annotations, axis labels |
| Caption | 20 | Subtitles, fine print |

### Fonts

**Use monospace fonts for all text.** Manim's Pango renderer produces broken kerning with proportional fonts at all sizes. See `references/visual-design.md` for full recommendations.

```python
MONO = "Menlo"  # define once at top of file

Text("Fourier Series", font_size=48, font=MONO, weight=BOLD)  # titles
Text("n=1: sin(x)", font_size=20, font=MONO)                  # labels
MathTex(r"\nabla L")                                            # math (uses LaTeX)
```

Minimum `font_size=18` for readability.

### Per-Scene Variation

Never use identical config for all scenes. For each scene:
- **Different dominant color** from the palette
- **Different layout** — don't always center everything
- **Different animation entry** — vary between Write, FadeIn, GrowFromCenter, Create
- **Different visual weight** — some scenes dense, others sparse

## Workflow

### Step 1: Plan (plan.md)

Before any code, write `plan.md`. See `references/scene-planning.md` for the comprehensive template.

### Step 2: Code (script.py)

One class per scene. Every scene is independently renderable.

```python
from manim import *

BG = "#1C1C1C"
PRIMARY = "#58C4DD"
SECONDARY = "#83C167"
ACCENT = "#FFFF00"
MONO = "Menlo"

class Scene1_Introduction(Scene):
    def construct(self):
        self.camera.background_color = BG
        title = Text("Why Does This Work?", font_size=48, color=PRIMARY, weight=BOLD, font=MONO)
        self.add_subcaption("Why does this work?", duration=2)
        self.play(Write(title), run_time=1.5)
        self.wait(1.0)
        self.play(FadeOut(title), run_time=0.5)
```

Key patterns:
- **Subtitles** on every animation: `self.add_subcaption("text", duration=N)` or `subcaption="text"` on `self.play()`
- **Shared color constants** at file top for cross-scene consistency
- **`self.camera.background_color`** set in every scene
- **Clean exits** — FadeOut all mobjects at scene end: `self.play(FadeOut(Group(*self.mobjects)))`

### Step 3: Render

```bash
uvx manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft / one-off
uv run --no-project manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production / mature
```

### Step 4: Stitch

```bash
cat > concat.txt << 'EOF'
file 'media/videos/script/480p15/Scene1_Introduction.mp4'
file 'media/videos/script/480p15/Scene2_CoreConcept.mp4'
EOF
ffmpeg -y -f concat -safe 0 -i concat.txt -c copy final.mp4
```

### Step 5: Review

```bash
uvx manim -ql --format=png -s script.py Scene2_CoreConcept  # preview still
```

## Critical Implementation Notes

### Raw Strings for LaTeX
```python
# WRONG: MathTex("\frac{1}{2}")
# RIGHT:
MathTex(r"\frac{1}{2}")
```

### buff >= 0.5 for Edge Text
```python
label.to_edge(DOWN, buff=0.5)  # never < 0.5
```

### FadeOut Before Replacing Text
```python
self.play(ReplacementTransform(note1, note2))  # not Write(note2) on top
```

### Never Animate Non-Added Mobjects
```python
self.play(Create(circle))  # must add first
self.play(circle.animate.set_color(RED))  # then animate
```

## Performance Targets

| Quality | Resolution | FPS | Speed |
|---------|-----------|-----|-------|
| `-ql` (draft) | 854x480 | 15 | 5-15s/scene |
| `-qm` (medium) | 1280x720 | 30 | 15-60s/scene |
| `-qh` (production) | 1920x1080 | 60 | 30-120s/scene |

Always iterate at `-ql`. Only render `-qh` for final output.

## References

[41 more lines in file. Use offset=235 to continue.]
```

## Assistant

**Result** (success):
```
# Rendering Reference

## Prerequisites

```bash
uvx manim --version                # one-off / ephemeral invocation
uv run --no-project manim --version  # mature workflow invocation
pdflatex --version                 # LaTeX
ffmpeg -version                    # ffmpeg
```

## CLI Reference

```bash
# One-off renders (preferred for quick experiments)
uvx manim -ql script.py Scene1 Scene2                 # draft (480p 15fps)
uvx manim -ql --format=png -s script.py Scene1        # preview still (last frame)
uvx manim -ql --format=gif script.py Scene1           # GIF output

# Mature project workflows (preferred for stable codebases)
uv run --no-project manim -qm script.py Scene1        # medium (720p 30fps)
uv run --no-project manim -qh script.py Scene1        # production (1080p 60fps)
```

## Quality Presets

| Flag | Resolution | FPS | Use case |
|------|-----------|-----|----------|
| `-ql` | 854x480 | 15 | Draft iteration (layout, timing) |
| `-qm` | 1280x720 | 30 | Preview (use for text-heavy scenes) |
| `-qh` | 1920x1080 | 60 | Production |

**Text rendering quality:** `-ql` (480p15) produces noticeably poor text kerning and readability. For scenes with significant text, preview stills at `-qm` to catch issues invisible at 480p. Use `-ql` only for testing layout and animation timing.

## Output Structure

```
media/videos/script/480p15/Scene1_Intro.mp4
media/images/script/Scene1_Intro.png  (from -s flag)
```

## Stitching with ffmpeg

```bash
cat > concat.txt << 'EOF'
file 'media/videos/script/480p15/Scene1_Intro.mp4'
file 'media/videos/script/480p15/Scene2_Core.mp4'
EOF
ffmpeg -y -f concat -safe 0 -i concat.txt -c copy final.mp4
```

## Add Voiceover

```bash
# Mux narration
ffmpeg -y -i final.mp4 -i narration.mp3 -c:v copy -c:a aac -b:a 192k -shortest final_narrated.mp4

# Concat per-scene audio first
cat > audio_concat.txt << 'EOF'
file 'audio/scene1.mp3'
file 'audio/scene2.mp3'
EOF
ffmpeg -y -f concat -safe 0 -i audio_concat.txt -c copy full_narration.mp3
```

## Add Background Music

```bash
ffmpeg -y -i final.mp4 -i music.mp3 \
  -filter_complex "[1:a]volume=0.15[bg];[0:a][bg]amix=inputs=2:duration=shortest" \
  -c:v copy final_with_music.mp4
```

## GIF Export

```bash
ffmpeg -y -i scene.mp4 \
  -vf "fps=15,scale=640:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  output.gif
```

## Aspect Ratios

```bash
uvx manim -ql --resolution 1080,1920 script.py Scene  # 9:16 vertical
uvx manim -ql --resolution 1080,1080 script.py Scene  # 1:1 square
```

## Render Workflow

1. Draft render all scenes at `-ql`
2. Preview stills at key moments (`-s`)
3. Fix and re-render only broken scenes
4. Stitch with ffmpeg
5. Review stitched output
6. Production render at `-qh`
7. Re-stitch + add audio

## manim.cfg — Project Configuration

Create `manim.cfg` in the project directory for per-project defaults:

```ini
[CLI]
quality = low_quality
preview = True
media_dir = ./media

[renderer]
background_color = #0D1117

[tex]
tex_template_file = custom_template.tex
```

This eliminates repetitive CLI flags and `self.camera.background_color` in every scene.

## Sections — Chapter Markers

Mark sections within a scene for organized output:

```python
class LongVideo(Scene):
    def construct(self):
        self.next_section("Introduction")
        # ... intro content ...

        self.next_section("Main Concept")
        # ... main content ...

        self.next_section("Conclusion")
        # ... closing ...
```

Render individual sections: `uv run --no-project manim --save_sections script.py LongVideo`
This outputs separate video files per section — useful for long videos where you want to re-render only one part.

## manim-voiceover Plugin (Recommended for Narrated Videos)

The official `manim-voiceover` plugin integrates TTS directly into scene code, auto-syncing animation duration to voiceover length. This is significantly cleaner than the manual ffmpeg muxing approach above.

### Installation

```bash
# one-off / current env
uv pip install "manim-voiceover[elevenlabs]"

# Or for free/local TTS:
uv pip install "manim-voiceover[gtts]"    # Google TTS (free, lower quality)
uv pip install "manim-voiceover[azure]"   # Azure Cognitive Services
```

### Usage

```python
from manim import *
from manim_voiceover import VoiceoverScene
from manim_voiceover.services.elevenlabs import ElevenLabsService

class NarratedScene(VoiceoverScene):
    def construct(self):
        self.set_speech_service(ElevenLabsService(
            voice_name="Alice",
            model_id="eleven_multilingual_v2"
        ))

        # Voiceover auto-controls scene duration
        circle = Circle()
        with self.voiceover(text="Here is a circle being drawn.") as tracker:
            self.play(Create(circle), run_time=tracker.duration)

        with self.voiceover(text="Now let's transform it into a square.") as tracker:
            self.play(Transform(circle, Square()), run_time=tracker.duration)
```

### Key Features

- `tracker.duration` — total voiceover duration in seconds
- `tracker.time_until_bookmark("mark1")` — sync specific animations to specific words
- Auto-generates subtitle `.srt` files
- Caches audio locally — re-renders don't re-generate TTS
- Works with: ElevenLabs, Azure, Google TTS, pyttsx3 (offline), and custom services

### Bookmarks for Precise Sync

```python
with self.voiceover(text='This is a <bookmark mark="circle"/>circle.') as tracker:
    self.wait_until_bookmark("circle")
    self.play(Create(Circle()), run_time=tracker.time_until_bookmark("circle", limit=1))
```

This is the recommended approach for any video with narration. The manual ffmpeg muxing workflow above is still useful for adding background music or post-production audio mixing.
```

## Assistant

**Result** (success):
```
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";

const baseDir = dirname(fileURLToPath(import.meta.url));
const skillPath = join(baseDir, "..", "..", "skills", "manim-video", "SKILL.md");

export default function (pi: ExtensionAPI) {
  pi.on("resources_discover", () => {
    return {
      skillPaths: [skillPath],
    };
  });

  pi.registerCommand("manim", {
    description: "Run the manim-video skill for animation planning/code/render workflows",
    handler: async (args, ctx) => {
      const prompt = (args ?? "").trim();
      const command = prompt.length > 0 ? `/skill:manim-video ${prompt}` : "/skill:manim-video";
      pi.sendUserMessage(command);

      if (ctx.hasUI) {
        ctx.ui.notify("Queued manim-video skill.", "info");
      }
    },
  });
}
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "## Render Logging Policy (LLM context safety)\n\nWhen running Manim renders from the agent:\n\n- Always redirect **both stdout and stderr** to a log file (`> \"$LOG\" 2>&1`) by default.\n- Avoid dumping full render logs into chat context.\n- After render, report only: status (`RENDER_OK` / `RENDER_FAIL`), exit code, output video path, and log path.\n- On failure, share only a bounded tail (`tail -n 120 \"$LOG\"`).\n- If live streaming is explicitly needed, use `2>&1 | tee \"$LOG\"` and still summarize output afterward.\n\n## Pipeline\n\n```\nPLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW\n```\n\n1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script\n2. **CODE** — Write `script.py` with one class per scene, each independently renderable\n3. **RENDER** — Use `uvx` for one-offs, `uv run --no-project` (or project `uv` env) for mature projects, and always run with log redirection\n4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`\n5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`\n6. **REVIEW** — Render preview stills, verify against plan, adjust\n",
      "oldText": "## Pipeline\n\n```\nPLAN --> CODE --> RENDER --> STITCH --> AUDIO (optional) --> REVIEW\n```\n\n1. **PLAN** — Write `plan.md` with narrative arc, scene list, visual elements, color palette, voiceover script\n2. **CODE** — Write `script.py` with one class per scene, each independently renderable\n3. **RENDER** — Quick draft: `uvx manim -ql script.py Scene1 Scene2 ...`; mature/reproducible projects: `uv run --no-project manim ...` (or project `uv` env)\n4. **STITCH** — ffmpeg concat of scene clips into `final.mp4`\n5. **AUDIO** (optional) — Add voiceover and/or background music via ffmpeg. See `references/rendering.md`\n6. **REVIEW** — Render preview stills, verify against plan, adjust\n"
    },
    {
      "newText": "### Step 3: Render\n\n```bash\nmkdir -p logs\n\n# draft / one-off\nLOG=\"logs/render-$(date +%Y%m%d-%H%M%S)-draft.log\"\nif uvx manim -ql script.py Scene1_Introduction Scene2_CoreConcept >\"$LOG\" 2>&1; then\n  echo \"RENDER_OK log=$LOG\"\n  tail -n 20 \"$LOG\"\nelse\n  code=$?\n  echo \"RENDER_FAIL exit=$code log=$LOG\"\n  tail -n 120 \"$LOG\"\n  exit $code\nfi\n\n# production / mature\nLOG=\"logs/render-$(date +%Y%m%d-%H%M%S)-prod.log\"\nif uv run --no-project manim -qh script.py Scene1_Introduction Scene2_CoreConcept >\"$LOG\" 2>&1; then\n  echo \"RENDER_OK log=$LOG\"\n  tail -n 20 \"$LOG\"\nelse\n  code=$?\n  echo \"RENDER_FAIL exit=$code log=$LOG\"\n  tail -n 120 \"$LOG\"\n  exit $code\nfi\n```\n",
      "oldText": "### Step 3: Render\n\n```bash\nuvx manim -ql script.py Scene1_Introduction Scene2_CoreConcept  # draft / one-off\nuv run --no-project manim -qh script.py Scene1_Introduction Scene2_CoreConcept  # production / mature\n```\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 2 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/SKILL.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "## CLI Reference\n\n```bash\n# One-off renders (preferred for quick experiments)\nuvx manim -ql script.py Scene1 Scene2                 # draft (480p 15fps)\nuvx manim -ql --format=png -s script.py Scene1        # preview still (last frame)\nuvx manim -ql --format=gif script.py Scene1           # GIF output\n\n# Mature project workflows (preferred for stable codebases)\nuv run --no-project manim -qm script.py Scene1        # medium (720p 30fps)\nuv run --no-project manim -qh script.py Scene1        # production (1080p 60fps)\n```\n\n## LLM-safe Render Logging (recommended)\n\nWhen driving renders from an agent, avoid flooding context with full render logs.\n\n```bash\nmkdir -p logs\nLOG=\"logs/render-$(date +%Y%m%d-%H%M%S).log\"\n\nif uvx manim -ql script.py Scene1 >\"$LOG\" 2>&1; then\n  echo \"RENDER_OK log=$LOG\"\n  tail -n 20 \"$LOG\"\nelse\n  code=$?\n  echo \"RENDER_FAIL exit=$code log=$LOG\"\n  tail -n 120 \"$LOG\"\n  exit $code\nfi\n```\n\nFor mature workflows, replace `uvx manim` with `uv run --no-project manim` (or project-local `uv run ...`).\n\nIf you need live monitoring, stream + save:\n\n```bash\nuvx manim -ql script.py Scene1 2>&1 | tee \"$LOG\"\n```\n\nStill summarize with status + log path instead of pasting the full log.\n",
      "oldText": "## CLI Reference\n\n```bash\n# One-off renders (preferred for quick experiments)\nuvx manim -ql script.py Scene1 Scene2                 # draft (480p 15fps)\nuvx manim -ql --format=png -s script.py Scene1        # preview still (last frame)\nuvx manim -ql --format=gif script.py Scene1           # GIF output\n\n# Mature project workflows (preferred for stable codebases)\nuv run --no-project manim -qm script.py Scene1        # medium (720p 30fps)\nuv run --no-project manim -qh script.py Scene1        # production (1080p 60fps)\n```\n"
    },
    {
      "newText": "## Render Workflow\n\n1. Draft render all scenes at `-ql` with stdout/stderr redirected to logs\n2. Preview stills at key moments (`-s`)\n3. Fix and re-render only broken scenes\n4. Stitch with ffmpeg\n5. Review stitched output\n6. Production render at `-qh` with log redirection\n7. Re-stitch + add audio\n",
      "oldText": "## Render Workflow\n\n1. Draft render all scenes at `-ql`\n2. Preview stills at key moments (`-s`)\n3. Fix and re-render only broken scenes\n4. Stitch with ffmpeg\n5. Review stitched output\n6. Production render at `-qh`\n7. Re-stitch + add audio\n"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 2 block(s) in /Users/mukund/dev/pi-manim-video/skills/manim-video/references/rendering.md.
```

## Assistant

### Tool: edit

**Args:**
```json
{
  "edits": [
    {
      "newText": "  pi.registerCommand(\"manim\", {\n    description: \"Run the manim-video skill for animation planning/code/render workflows\",\n    handler: async (args, ctx) => {\n      const prompt = (args ?? \"\").trim();\n      const renderPolicy =\n        \"Render policy: redirect stdout/stderr to a logfile by default, report status + exit code + artifact/log paths, and only include a bounded tail on failure.\";\n      const mergedPrompt = prompt.length > 0 ? `${prompt} ${renderPolicy}` : renderPolicy;\n      const command = `/skill:manim-video ${mergedPrompt}`;\n      pi.sendUserMessage(command);\n\n      if (ctx.hasUI) {\n        ctx.ui.notify(\"Queued manim-video skill.\", \"info\");\n      }\n    },\n  });\n}",
      "oldText": "  pi.registerCommand(\"manim\", {\n    description: \"Run the manim-video skill for animation planning/code/render workflows\",\n    handler: async (args, ctx) => {\n      const prompt = (args ?? \"\").trim();\n      const command = prompt.length > 0 ? `/skill:manim-video ${prompt}` : \"/skill:manim-video\";\n      pi.sendUserMessage(command);\n\n      if (ctx.hasUI) {\n        ctx.ui.notify(\"Queued manim-video skill.\", \"info\");\n      }\n    },\n  });\n}"
    }
  ],
  "path": "/Users/mukund/dev/pi-manim-video/extensions/manim-video/index.ts"
}
```

## Assistant

**Result** (success):
```
Successfully replaced 1 block(s) in /Users/mukund/dev/pi-manim-video/extensions/manim-video/index.ts.
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git status --short && rg -n \"Render policy|LLM-safe Render Logging|redirect stdout/stderr|tail -n 120|tee\" skills/manim-video extensions/manim-video -S"
}
```

## Assistant

**Result** (success):
```
 M extensions/manim-video/index.ts
 M skills/manim-video/SKILL.md
 M skills/manim-video/references/rendering.md
extensions/manim-video/index.ts:20:        "Render policy: redirect stdout/stderr to a logfile by default, report status + exit code + artifact/log paths, and only include a bounded tail on failure.";
skills/manim-video/SKILL.md:69:- On failure, share only a bounded tail (`tail -n 120 "$LOG"`).
skills/manim-video/SKILL.md:70:- If live streaming is explicitly needed, use `2>&1 | tee "$LOG"` and still summarize output afterward.
skills/manim-video/SKILL.md:199:  tail -n 120 "$LOG"
skills/manim-video/SKILL.md:211:  tail -n 120 "$LOG"
skills/manim-video/references/updaters-and-trackers.md:11:## ValueTracker: an invisible steering wheel
skills/manim-video/references/rendering.md:25:## LLM-safe Render Logging (recommended)
skills/manim-video/references/rendering.md:39:  tail -n 120 "$LOG"
skills/manim-video/references/rendering.md:49:uvx manim -ql script.py Scene1 2>&1 | tee "$LOG"
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "cd ~/dev/pi-manim-video && git add extensions/manim-video/index.ts skills/manim-video/SKILL.md skills/manim-video/references/rendering.md && git commit -m \"feat: enforce render log redirection to keep Manim output out of context\" && git push",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
[master 0389ed9] feat: enforce render log redirection to keep Manim output out of context
 3 files changed, 71 insertions(+), 6 deletions(-)
To github.com:maujim/pi-manim-video.git
   798b1c8..0389ed9  master -> master
```

## Assistant

### Tool: bash

**Args:**
```json
{
  "command": "pi update git:[REDACTED]:maujim/pi-manim-video",
  "timeout": 120
}
```

## Assistant

**Result** (success):
```
Updating git:[REDACTED]:maujim/pi-manim-video...
From github.com:maujim/pi-manim-video
   798b1c8..0389ed9  master     -> origin/master
HEAD is now at 0389ed9 feat: enforce render log redirection to keep Manim output out of context
Removing node_modules/
Removing package-lock.json
npm warn deprecated node-domexception@1.0.0: Use your platform's native DOMException instead

added 258 packages, and audited 259 packages in 8s

34 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities
Updated git:[REDACTED]:maujim/pi-manim-video
```

## Assistant

Done ✅ — updated, committed, pushed, and refreshed in Pi.

### What changed

I taught the extension/skill to keep render logs out of LLM context by default:

1. **`skills/manim-video/SKILL.md`**
   - Added **Render Logging Policy**:
     - always redirect `stdout+stderr` to a log file
     - report only status, exit code, and artifact/log paths
     - on failure, include only a bounded tail (`tail -n 120`)
     - use `tee` only when live streaming is explicitly needed

2. **`skills/manim-video/references/rendering.md`**
   - Added **LLM-safe Render Logging** section with concrete command templates (`uvx` + log redirection, failure handling, optional `tee` streaming).

3. **`extensions/manim-video/index.ts`**
   - `/manim` now appends a short render-policy instruction every time it dispatches `/skill:manim-video`, to reinforce this behavior operationally.
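The logging pattern the three changes above enforce can be sketched as a small wrapper function. This is a minimal illustration, not the exact code committed: `render_with_log` is a hypothetical helper name, and the `uvx manim` invocation in the usage comment is taken from the skill's CLI reference.

```shell
#!/usr/bin/env bash
# Sketch of the render logging policy: run a command with stdout+stderr
# redirected to a log file, report status + log path, and surface only a
# bounded tail of the log (short on success, longer on failure).
render_with_log() {
  local log="$1"; shift      # $1 = logfile, remaining args = render command
  if "$@" >"$log" 2>&1; then
    echo "RENDER_OK log=$log"
    tail -n 20 "$log"        # success: brief tail only
  else
    local code=$?
    echo "RENDER_FAIL exit=$code log=$log"
    tail -n 120 "$log"       # failure: bounded tail, never the full log
    return "$code"
  fi
}

mkdir -p logs
# Real usage would be e.g.:
#   render_with_log "logs/render-$(date +%Y%m%d-%H%M%S).log" uvx manim -ql script.py Scene1
render_with_log logs/demo.log echo "simulated render output"
```

Keeping the full log on disk while emitting only a status line and a bounded tail is what keeps Manim's verbose render output out of the agent's context window.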

---

### Git + Pi update

- **Commit:** `0389ed9`
- **Message:** `feat: enforce render log redirection to keep Manim output out of context`
- **Pushed:** `origin/master`
- Ran:
  - `pi update git:[REDACTED]:maujim/pi-manim-video`
- Update completed successfully.
