OpenCode Architecture Analysis
TL;DR: OpenCode exists in two flavors: an archived Go version with Bubble Tea TUI, and an active TypeScript rewrite with client/server split and 75+ LLM providers. Neither has multi-agent team coordination. The architecture is clean, the provider system is comprehensive, but if you want agents that actually work together you'll need to build it yourself on top of the existing SDK and plugin system.
OpenCode Architecture Deep-Dive
Status: COMPLETE
CRITICAL FINDING: Two Different Codebases
| | opencode-ai/opencode (archived) | anomalyco/opencode (active) |
|---|---|---|
| Language | Go | TypeScript (50.7%) + MDX (45.1%) |
| Status | Archived Sep 2025 | Active (v1.2.5, Feb 2026) |
| Successor | charmbracelet/crush | This IS the continuation |
| Structure | cmd/ + internal/ (canonical Go) | packages/ monorepo |
| Stars | N/A (archived) | 105k |
| Runtime | Go binary | Bun (JS runtime) |
Implication: The Go codebase is the archived original (opencode-ai/opencode). The active fork rewrote everything in TypeScript/Bun. Anyone wanting a Go-based starting point would need the archived version or charmbracelet/crush (the original author's Go successor).
1. Original Go Codebase (opencode-ai/opencode)
Repository Structure
opencode-ai/opencode/
├── main.go # Entry point
├── cmd/ # CLI commands
├── internal/ # Core application logic
├── scripts/ # Utility scripts
├── go.mod / go.sum # Go dependencies
├── sqlc.yaml # SQL code generation (type-safe DB)
├── .opencode.json # Config example
├── opencode-schema.json # JSON Schema for config
├── .goreleaser.yml # Cross-platform release config
└── install # Install script
Key Tech Stack
- Bubble Tea for TUI (Elm Architecture: Model-Update-View)
- SQLite for persistence (sessions, messages, file history)
- sqlc for type-safe SQL generation
- goreleaser for cross-platform builds
- LSP integration for code intelligence
- MCP servers for external tools
2. Active TypeScript Codebase (anomalyco/opencode)
Monorepo Structure
anomalyco/opencode/
├── packages/
│ ├── opencode/ # CORE - server + TUI + agent + tools
│ ├── sdk/ # @opencode-ai/sdk - client/server primitives
│ ├── plugin/ # @opencode-ai/plugin - extensibility
│ ├── app/ # Application shell
│ ├── console/ # Console interface
│ ├── desktop/ # Tauri desktop app
│ ├── web/ # Web client
│ ├── ui/ # Shared UI components
│ ├── containers/ # Container support
│ ├── enterprise/ # Enterprise features
│ ├── identity/ # Auth/identity
│ ├── slack/ # Slack integration
│ ├── function/ # Utility functions
│ ├── util/ # Shared utilities
│ ├── script/ # Build scripts
│ ├── docs/ # Documentation (MDX)
│ └── extensions/zed/ # Zed editor extension
├── sdks/vscode/ # VS Code extension
├── infra/ # Infrastructure
├── nix/ # Nix packaging
└── specs/ # Specifications
Core Package Source (packages/opencode/src/) - 37 modules:
acp/ agent/ auth/ bun/ bus/
cli/ command/ config/ control/ env/
file/ flag/ format/ global/ id/
ide/ installation/ lsp/ mcp/ patch/
permission/ plugin/ project/ provider/ pty/
question/ scheduler/ server/ session/ share/
shell/ skill/ snapshot/ storage/ tool/
util/ worktree/
Key Technologies
| Tech | Purpose |
|---|---|
| Bun | JS runtime + package manager |
| Hono | HTTP server framework |
| SolidJS | Reactive UI (TUI, web, desktop) |
| @opentui/solid | Terminal rendering (60 FPS) |
| Tauri | Desktop app (Rust) |
| AI SDK (Vercel) | LLM provider abstraction |
| Drizzle ORM | Database access |
| Nix | Reproducible builds |
3. Client/Server Architecture
Server Layer (packages/opencode/src/server/)
- HTTP API built with Hono framework
- OpenAPI 3.1 specification (80+ endpoints)
- Real-time updates via Server-Sent Events (SSE)
- Handles: session orchestration, provider management, tool execution
- Default: `127.0.0.1:4096`; OpenAPI spec viewable at `/doc`
- mDNS discovery for local network access
- Basic auth via `OPENCODE_SERVER_PASSWORD` env var
API Categories
| Category | Endpoints |
|---|---|
| Global | Health checks, SSE event streaming |
| Sessions | CRUD, fork, share, abort, init, permission mgmt |
| Messages | Send/receive prompts, async variants |
| Files | Search, browse, read workspace content |
| Providers | Auth methods, OAuth flows |
| Tools | LSP, formatters, MCP server integration |
| TUI Control | Remote prompt manipulation, dialog control |
| Projects & VCS | Project info, version control details |
Session API Routes (core)
GET /session # List sessions (filter by dir, search, timestamp)
POST /session # Create new session
GET /session/:id # Get session details
PATCH /session/:id # Update (title, archive)
DELETE /session/:id # Delete session + data
POST /session/:id/fork # Fork at specific message
POST /session/:id/abort # Stop AI processing
POST /session/:id/share # Generate share link
POST /session/:id/message # Send message, stream AI response
POST /session/:id/command # Execute AI command
POST /session/:id/shell # Run shell command
GET /session/:id/message # Get all messages
GET /session/:id/message/:msgID # Get specific message
GET /session/status # All session statuses
Client Layer
- TUI: Direct RPC to embedded server (no HTTP overhead)
- Web: Browser app via SDK over HTTP
- Desktop: Tauri wrapping web interface
- IDE: VS Code/Cursor plugins via SDK
SDK (@opencode-ai/sdk)
// Full server + client
const { client } = await createOpencode({ port: 4096 })
// Client-only (connect to running server)
const client = createOpencodeClient({ baseUrl: "http://localhost:4096" })
// Event subscriptions (real-time via SSE)
const events = await client.event.subscribe()
for await (const event of events.stream) {
console.log(event.type, event.properties)
}
// Structured output with JSON Schema validation
const result = await client.session.prompt({
body: { parts: [{ type: "text", text: "..." }],
format: { type: "json_schema", schema: {...} } }
})
Data Flow
User Input → Client (TUI/Web/Desktop)
→ POST /session/:id/message (via SDK)
→ Server creates message, broadcasts SSE update
→ Plugins process via hooks
→ Provider manager creates AI model instance
→ Agent loop begins streaming to LLM
→ Tool calls → permission checks → execution
→ Results stream back to AI for continued reasoning
→ Final response → SSE → all connected clients
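The SSE leg of this flow is plain `text/event-stream` framing, which the SDK's `client.event.subscribe()` wraps for you. As an illustrative sketch (not OpenCode's actual code), a minimal parser for that framing looks like:

```typescript
// Minimal SSE frame parser: feed it raw chunks from the response body,
// get back the complete events plus any trailing partial frame.
type SseEvent = { event: string; data: string };

function parseSseChunk(buffer: string): { events: SseEvent[]; rest: string } {
  const events: SseEvent[] = [];
  // Frames are separated by a blank line; keep any trailing partial frame.
  const frames = buffer.split("\n\n");
  const rest = frames.pop() ?? "";
  for (const frame of frames) {
    let event = "message";
    const data: string[] = [];
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length) events.push({ event, data: data.join("\n") });
  }
  return { events, rest };
}

const { events, rest } = parseSseChunk(
  'event: message.updated\ndata: {"id":"msg_1"}\n\nevent: session.idle\nda'
);
```

The incomplete `session.idle` frame is held back as `rest` until the next chunk arrives, which is why clients can consume the stream chunk-by-chunk without losing events.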
4. Agentic Execution Loop
Core intelligence in packages/opencode/src/session/llm.ts → SessionPrompt.loop():
Loop Phases:
- Context Assembly: Load session history, resolve tools, check token limits
- Compaction (if needed): Summarize older messages to fit context window
- Streaming Request: `streamText()` from the Vercel AI SDK
- Response Processing:
- Stream reasoning/thinking (when model supports it)
- Parse tool calls from AI output
- Execute permission checks against agent rules
- Run approved tools, capture results
- Inject results back into conversation
- Iteration: Continue until AI signals completion (no more tool calls)
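The compaction phase can be sketched as a threshold gate over estimated history size. The 0.9 threshold and the chars/4 token estimate below are assumptions for illustration, not OpenCode's actual accounting (the Go version, for comparison, triggers at 95% of the context window):

```typescript
// Illustrative compaction gate for phase 2 of the loop.
interface Msg { role: "user" | "assistant" | "summary"; text: string }

// Naive token estimate (~4 chars per token) -- an assumption for the sketch.
const estimateTokens = (msgs: Msg[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.text.length / 4), 0);

function maybeCompact(history: Msg[], contextWindow: number): Msg[] {
  if (estimateTokens(history) < contextWindow * 0.9) return history;
  // Summarize everything but the most recent exchanges, then keep the
  // summary as the new head of the conversation.
  const keep = history.slice(-4);
  const summarized = history.slice(0, -4);
  const summary: Msg = {
    role: "summary",
    text: `Summary of ${summarized.length} earlier messages`,
  };
  return [summary, ...keep];
}

const history: Msg[] = Array.from({ length: 20 }, (_, i) => ({
  role: i % 2 ? "assistant" : "user",
  text: "x".repeat(400), // ~100 estimated tokens each
}));
const compacted = maybeCompact(history, 1000); // ~2000 est. tokens > 900
```

In the real loop the summary comes from a dedicated summarization call to the LLM, not a placeholder string.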
streamText() Call Details
streamText({
// Tools: all registered tools minus "invalid"
activeTools: Object.keys(tools).filter(x => x !== "invalid"),
// Middleware: transform prompts per-provider
middleware: [{
async transformParams(args) {
if (args.type === "stream") {
args.params.prompt = ProviderTransform.message(...)
}
return args.params
}
}],
// Error recovery: case-insensitive tool name matching
experimental_repairToolCall: handler,
// LiteLLM workaround: inject _noop tool when needed
})
Agent Types
- Build Agent: Full filesystem access, auto-execute tools
- Plan Agent: Denies file edits, requires permission for bash
- Custom Agents: Configurable via `opencode.json` or `.opencode/agents/`
Agent Source Structure
packages/opencode/src/agent/
├── agent.ts # Agent definition and execution
├── prompt/ # System prompts per agent type
└── generate.txt # Generation instructions
Subagents
- General: Full tool access, spawned for multi-step tasks
- Explore: Read-only, for fast codebase exploration
- Invoked via `@` mentions or automatically by the main agent
- No team coordination: subagents are isolated workers with no messaging
5. Provider System
Architecture (packages/opencode/src/provider/)
provider/
├── provider.ts # Main provider implementation + registry
├── models.ts # Model definitions and capabilities
├── auth.ts # Provider authentication
├── error.ts # Error handling
├── transform.ts # Message format transformation per provider
└── sdk/copilot/ # GitHub Copilot SDK integration
How 75+ Providers Work
- Vercel AI SDK provides the abstraction layer (`streamText()`, `generateText()`)
- Models.dev metadata service supplies model capabilities, pricing, and config
- `Provider` class hierarchy transforms model-specific message formats
- Each provider handles its own streaming implementation
- `transform.ts` handles provider-specific message format differences
Authentication Methods
- Environment variables (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.)
- Config files (`~/.config/opencode/opencode.json`)
- JSON credential storage (`~/.local/share/opencode/auth.json`)
- OAuth flows (GitHub Copilot, GitLab Duo, OpenAI ChatGPT Plus)
- AWS credential chains (Bedrock)
Provider Categories
- Direct: Anthropic, OpenAI, Google, xAI, DeepSeek
- Cloud: AWS Bedrock, Azure OpenAI, Google Vertex AI
- Aggregators: OpenRouter, Cloudflare AI, Vercel AI Gateway
- Local: Ollama, LM Studio, llama.cpp
- First-party: OpenCode Zen (curated, pre-tested models)
6. Tool System
Source Structure (packages/opencode/src/tool/) - 44 files:
Core: File ops: Search:
├── tool.ts (interface) ├── read.ts ├── grep.ts
├── registry.ts (aggregator) ├── write.ts ├── glob.ts
├── truncation.ts ├── edit.ts ├── codesearch.ts
├── invalid.ts ├── multiedit.ts ├── ls.ts
│ ├── apply_patch.ts │
Execution: │ Web:
├── bash.ts Agent: ├── webfetch.ts
├── lsp.ts ├── task.ts ├── websearch.ts
├── plan.ts ├── todo.ts │
├── question.ts ├── batch.ts Other:
├── skill.ts │ ├── external-directory.ts
│ │
Each `.ts` tool file has a matching `.txt` with LLM-facing documentation
Tool Interface (Zod-based)
- `Tool.define()` registers tools with parameter schemas via Zod
- `.txt` files contain descriptions/documentation fed to the LLM
- `registry.ts` aggregates all tool sources into a unified registry
Tool Sources (4 types)
- Built-in: bash, read, write, edit, grep, glob, ls, task, etc.
- User Plugins: TypeScript/JavaScript in `.opencode/plugins/` or `~/.config/opencode/plugins/`
- MCP Servers: via stdio or HTTP protocols
- Skills: Markdown docs in `.opencode/skill/` directories
Permission System
- Per-agent rules: Build agent = auto-allow, Plan agent = deny writes
- Per-tool granularity: `ask`, `allow`, `deny`
- Glob patterns for bash command matching
- User consent prompts for sensitive operations
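Rule resolution with glob patterns for bash commands can be sketched as a first-match scan. The rule shapes below are assumptions for illustration, not OpenCode's actual config schema:

```typescript
// Sketch of per-tool permission resolution mirroring the ask/allow/deny
// model described above. Rule and agent shapes are illustrative.
type Action = "ask" | "allow" | "deny";
interface Rule { tool: string; pattern?: string; action: Action }

// Convert a simple glob ("git *") into an anchored RegExp.
const globToRegex = (glob: string) =>
  new RegExp(
    "^" +
      glob
        .split("*")
        .map(s => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
        .join(".*") +
      "$"
  );

function resolve(rules: Rule[], tool: string, command?: string): Action {
  for (const rule of rules) {
    if (rule.tool !== tool) continue;
    if (rule.pattern && command && !globToRegex(rule.pattern).test(command)) continue;
    return rule.action; // first matching rule wins
  }
  return "ask"; // default: prompt the user
}

// Hypothetical rule set for a Plan-style agent: no writes, safe git allowed.
const planAgentRules: Rule[] = [
  { tool: "write", action: "deny" },
  { tool: "bash", pattern: "git status*", action: "allow" },
  { tool: "bash", action: "ask" },
];
```

First-match-wins ordering means specific allow rules must precede the catch-all `ask` rule for the same tool.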
Tool Execution Flow
AI requests tool(name, params)
→ Registry resolves tool implementation
→ Zod validates parameters
→ Permission system checks agent rules + user consent
→ Tool executes
→ Result returned to AI for next iteration
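The flow above can be sketched as resolve, validate, then run. The real implementation validates with Zod schemas registered via `Tool.define()`; the hand-rolled validator here is a stand-in so the sketch stays dependency-free:

```typescript
// Minimal tool registry sketch: resolve a tool by name, validate its
// parameters, then execute. Shapes are illustrative, not OpenCode's types.
interface ToolDef<P> {
  name: string;
  validate: (input: unknown) => P; // throws on invalid params
  run: (params: P) => string;
}

const registry = new Map<string, ToolDef<any>>();
const register = (def: ToolDef<any>) => registry.set(def.name, def);

register({
  name: "echo",
  validate: (input) => {
    const p = input as { text?: unknown };
    if (typeof p?.text !== "string") throw new Error("echo: 'text' must be a string");
    return { text: p.text };
  },
  run: ({ text }) => text,
});

function execute(name: string, input: unknown): string {
  const tool = registry.get(name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  const params = tool.validate(input); // schema check before execution
  // (the permission check against agent rules would sit here)
  return tool.run(params);
}
```

Validation failing before execution is what lets the loop feed a structured error straight back to the model instead of running a malformed call.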
7. TUI Architecture (TypeScript version)
Implementation
- Built with @opentui/solid (SolidJS for terminals)
- 60 FPS rendering using worker threads (offloaded rendering)
- Kitty keyboard protocol for advanced key handling
- Direct RPC to embedded server (no HTTP overhead)
- "Thin client" - delegates ALL business logic to server
State Management
Three provider layers:
- LocalProvider: UI preferences (theme, keybinds, agent/model selection)
  - Persisted to disk via `Bun.write()`
  - Model selection has a 6-level fallback chain
- SDKProvider: HTTP client + SSE event stream to server
- SyncProvider: reactive mirror of server state
  - Lifecycle: `loading → partial → complete`
  - Syncs: sessions, messages, parts, permissions, todos, providers, etc.
Route System
- `/` → Home (landing page with centered prompt)
- `/session/:id` → Active session (messages + prompt)
Session View Components
- Header: Session title, token usage, cost, version
- Main: Scrollable message viewport with inline diffs
- Sidebar (42 cols): MCP servers, LSP status, todos, modified files
- Footer: Current dir, permission count, server status
- Prompt: User input with autocomplete
Command System (~45 commands)
Distributed registration pattern:
- Components register commands → aggregated into flat list
- Filtered by enabled/hidden flags
- Accessible via keybinds, the command palette (`Ctrl+P`), and slash commands
| Category | ~Count | Examples |
|---|---|---|
| Session | 15 | share, rename, timeline, fork, undo/redo |
| Prompt | 6 | clear, submit, stash, editor |
| Agent | 10 | switch agent/model/variant, cycle |
| Provider | 2 | connect, authenticate |
| System | 12 | help, status, theme, console, exit |
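The distributed registration pattern can be sketched as components contributing command lists that the palette aggregates and filters. Command shapes and IDs below are illustrative, not the real TUI types:

```typescript
// Each component registers a command source; the palette flattens all
// sources and filters by the enabled/hidden flags described above.
interface Command {
  id: string;
  title: string;
  enabled?: () => boolean;
  hidden?: boolean;
}

let hasSession = false; // example app state that commands depend on

const sources: Array<() => Command[]> = [];
const registerCommands = (fn: () => Command[]) => sources.push(fn);

// The session view contributes session commands...
registerCommands(() => [
  { id: "session.share", title: "Share Session", enabled: () => hasSession },
  { id: "session.fork", title: "Fork Session", enabled: () => hasSession },
]);
// ...and the system layer contributes its own.
registerCommands(() => [
  { id: "system.exit", title: "Exit" },
  { id: "debug.console", title: "Console", hidden: true },
]);

const paletteCommands = (): string[] =>
  sources
    .flatMap(fn => fn())
    .filter(c => !c.hidden && (c.enabled?.() ?? true))
    .map(c => c.id);
```

Because `enabled` is a closure evaluated at render time, command availability tracks app state without any re-registration.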
Keybind System
- Leader key: `Ctrl+X` (activates a 2-second leader mode)
- Direct bindings: single-key combos
- Leader bindings: `Ctrl+X` → second key
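The leader-key dispatch amounts to a small state machine: `Ctrl+X` arms a mode that expires after 2 seconds, and the next key resolves a leader binding. Binding names below are illustrative:

```typescript
// Sketch of leader-key dispatch with a 2-second timeout.
const LEADER = "ctrl+x";
const LEADER_TIMEOUT_MS = 2000;

const directBindings: Record<string, string> = { "ctrl+p": "palette.open" };
const leaderBindings: Record<string, string> = { s: "session.share", n: "session.new" };

let leaderArmedAt: number | null = null;

function handleKey(key: string, now: number): string | null {
  if (leaderArmedAt !== null) {
    const expired = now - leaderArmedAt > LEADER_TIMEOUT_MS;
    leaderArmedAt = null; // leader mode is single-shot either way
    if (!expired) return leaderBindings[key] ?? null;
    // Leader expired: fall through and treat this as a normal key.
  }
  if (key === LEADER) {
    leaderArmedAt = now;
    return null; // armed, waiting for the second key
  }
  return directBindings[key] ?? null;
}
```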
Additional UI Features
- Clipboard: OSC 52 → platform utils → clipboardy (fallback chain)
- Toast notifications: Top-right, auto-dismiss, one at a time
- Scroll: Sticky scroll, message-boundary jumping, viewport tracking
- Dialogs: modal stack with backdrop, `Escape` to close
Note on Go TUI
The Go version uses Bubble Tea (Elm Architecture). Different framework, similar UX patterns. The `appModel` struct implements `tea.Model` with `Init()`/`Update()`/`View()`.
8. MCP Integration (packages/opencode/src/mcp/)
- Stdio Protocol: Tools communicate via stdin/stdout (local processes)
- HTTP Protocol: Tools via HTTP endpoints (remote/hosted)
- MCP servers advertise tools with parameter schemas
- Tool registry integrates MCP tools alongside built-in tools
- Runtime management via REST API endpoints
- Configured in `opencode.json` under the `mcp` key
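An illustrative `opencode.json` fragment showing one stdio server and one HTTP server (field names are approximate; check the published config schema for the exact keys):

```json
{
  "mcp": {
    "local-tools": {
      "type": "local",
      "command": ["bun", "x", "my-mcp-server"]
    },
    "hosted-tools": {
      "type": "remote",
      "url": "https://example.com/mcp"
    }
  }
}
```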
9. Plugin System (@opencode-ai/plugin)
Capabilities
- Intercept tool execution (before/after hooks)
- Modify LLM parameters (temperature, system prompt, etc.)
- Add custom tools with Zod parameter schemas
- Subscribe to events: `message.updated`, `chat.params`, `tool.execute`
Plugin Locations
- `.opencode/plugins/` (project-level)
- `~/.config/opencode/plugins/` (global)
- npm packages specified in config
Hook System
Plugins register handlers for lifecycle events during request processing. Events flow through the hook chain, allowing cross-cutting concerns (logging, monitoring, parameter modification).
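The hook chain can be sketched as handlers registered per event name and invoked in order, each allowed to mutate a shared payload (e.g. LLM parameters). The event names follow the real plugin events; the runner itself is an illustrative sketch, not the plugin package's API:

```typescript
// Minimal hook-chain runner for the lifecycle events described above.
type Handler = (payload: Record<string, unknown>) => void;

const hooks = new Map<string, Handler[]>();

function on(event: string, handler: Handler): void {
  const list = hooks.get(event) ?? [];
  list.push(handler);
  hooks.set(event, list);
}

function emit(event: string, payload: Record<string, unknown>) {
  // Handlers run in registration order and may mutate the payload,
  // which is how cross-cutting concerns compose.
  for (const handler of hooks.get(event) ?? []) handler(payload);
  return payload;
}

// A "plugin" that lowers temperature, and one that logs tool calls.
on("chat.params", (p) => { p.temperature = 0.2; });
const toolLog: string[] = [];
on("tool.execute.before", (p) => { toolLog.push(String(p.tool)); });

const params = emit("chat.params", { temperature: 1.0 });
emit("tool.execute.before", { tool: "bash" });
```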
10. CLI Commands (packages/opencode/src/cli/cmd/)
Core: Integration: Management:
├── run.ts ├── github.ts ├── auth.ts
├── serve.ts ├── mcp.ts ├── db.ts
├── session.ts ├── acp.ts ├── stats.ts
├── agent.ts ├── pr.ts ├── upgrade.ts
├── models.ts ├── web.ts ├── uninstall.ts
├── generate.ts │ │
├── export.ts Subdirectories: │
├── import.ts ├── debug/ │
├── cmd.ts └── tui/ │
11. Storage Architecture
Global Data (~/.local/share/opencode/)
- `auth.json`: authentication tokens
- Logs, cached plugins
Project Data (~/.local/share/opencode/project/<slug>/storage/)
- SQLite database for sessions, messages
- Part data (text, tool results, files)
- Atomic reads/writes with filesystem-based locking
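The atomic-write half of that pattern is typically write-to-temp-then-rename, so readers never observe a half-written file. A sketch of the technique (paths and naming are illustrative, not OpenCode's storage layout):

```typescript
// Atomic write via temp file + rename in the same directory.
import { mkdtempSync, writeFileSync, renameSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function atomicWrite(path: string, data: string): void {
  const tmp = path + ".tmp-" + process.pid; // same dir => same filesystem
  writeFileSync(tmp, data);
  renameSync(tmp, path); // POSIX rename is atomic within one filesystem
}

const dir = mkdtempSync(join(tmpdir(), "opencode-demo-"));
const target = join(dir, "session.json");
atomicWrite(target, JSON.stringify({ id: "ses_1" }));
atomicWrite(target, JSON.stringify({ id: "ses_2" })); // overwrite is safe
const stored = JSON.parse(readFileSync(target, "utf8"));
```

The locking half (coordinating concurrent writers) is a separate concern layered on top, e.g. lock files or advisory locks.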
Git Worktrees
Project-specific sandbox environments for safe file modifications with rollback capability.
12. What's Missing: Multi-Agent / Team Coordination
OpenCode has no team/swarm coordination. Key gaps:
| Claude Code Has | OpenCode Has |
|---|---|
| Agent teams with shared task lists | Single-agent with isolated subagents |
| Inter-agent messaging (DMs, broadcast) | No agent communication |
| Lead-coordinator architecture | No hierarchy |
| Task dependency management | Simple todo list |
| Delegate mode (coordination-only) | N/A |
| Plan approval workflow | N/A |
| Quality gates via hooks | Plugin hooks (different scope) |
What Would Be Needed for Teams
- Team manager - orchestrates multiple agent instances
- Shared task store - SQLite table for task lists with dependencies
- Message bus: inter-agent communication (the existing `bus/` module is a local event emitter and would need extending for cross-session communication)
- Agent roles: lead, worker, reviewer, each with different tool permissions
- Coordination protocol - who works on what, blocking/unblocking, plan approval
- TUI integration - split panes or tabs showing team activity
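The shared task store (item 2) is the piece with the clearest shape: a task becomes claimable only once its dependencies are done. A dependency-aware sketch, using an in-memory `Map` where a real build would use a SQLite table:

```typescript
// Hypothetical shared task store with dependency gating -- a design
// sketch for the gap described above, not existing OpenCode code.
type Status = "pending" | "in_progress" | "done";
interface Task { id: string; deps: string[]; status: Status; owner?: string }

const tasks = new Map<string, Task>();
const addTask = (id: string, deps: string[] = []) =>
  tasks.set(id, { id, deps, status: "pending" });

// A task is ready when it is pending and every dependency is done.
const ready = () =>
  [...tasks.values()].filter(
    t => t.status === "pending" && t.deps.every(d => tasks.get(d)?.status === "done")
  );

function claim(agent: string): Task | null {
  const task = ready()[0] ?? null;
  if (task) { task.status = "in_progress"; task.owner = agent; }
  return task;
}

const complete = (id: string) => { tasks.get(id)!.status = "done"; };

addTask("design-api");
addTask("implement", ["design-api"]);
addTask("review", ["implement"]);
```

Blocking/unblocking then falls out of the `ready()` query; the coordinator's job reduces to assigning ready tasks to idle agents and broadcasting completion events over the bus.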
13. Architectural Comparison
| Decision | Go (archived) | TypeScript (active) | Notes |
|---|---|---|---|
| Language | Go | TypeScript | Go version is archived |
| TUI | Bubble Tea | @opentui/solid | Bubble Tea is more mature |
| Persistence | SQLite + sqlc | SQLite + Drizzle | Both use type-safe SQL generation |
| Provider SDK | Direct HTTP | Vercel AI SDK | AI SDK patterns are more portable |
| HTTP Server | N/A (monolithic) | Hono | Client/server split enables multi-client |
| Architecture | Monolithic | Client/Server | Client/server is better for teams |
| Agent teams | N/A | N/A | Neither has team support |
| Config | JSON | JSON/JSONC | JSON with schema validation |
Hypothetical Team-Enabled Architecture
┌─────────────────────────────────────────────┐
│ TUI (Bubble Tea) │
│ Sessions | Team Dashboard | Split Panes │
├─────────────────────────────────────────────┤
│ HTTP API (chi/net/http) │
│ OpenAPI 3.1 | SSE | Session Mgmt │
├─────────────┬───────────────────────────────┤
│ Agent Loop │ Team Coordinator │
│ (per agent) │ Task store | Message bus │
│ │ Role mgmt | Plan approval │
├─────────────┴───────────────────────────────┤
│ Provider Abstraction │
│ Anthropic | OpenAI | Google | Local │
├─────────────────────────────────────────────┤
│ Tool Registry + Permissions │
│ Built-in | MCP | Plugins │
├─────────────────────────────────────────────┤
│ SQLite (sqlc) │
│ Sessions | Messages | Tasks | Team State │
└─────────────────────────────────────────────┘
APPENDIX A: Go Source-Level Details
Actual Go interfaces and types from the archived opencode-ai/opencode.
Go internal/ Package Map
internal/
├── app/ # Application orchestrator (wires everything together)
├── completions/ # Shell completions
├── config/ # Configuration management
├── db/ # SQLite database layer (sqlc generated)
├── diff/ # Diff computation
├── fileutil/ # File utilities
├── format/ # Output formatting
├── history/ # File change tracking
├── llm/
│ ├── agent/ # Agent loop (THE CORE)
│ ├── models/ # Model registry + definitions
│ ├── prompt/ # System prompt construction
│ ├── provider/ # LLM provider abstractions
│ └── tools/ # Tool implementations
│ └── shell/ # Shell execution
├── logging/ # Structured logging
├── lsp/ # Language Server Protocol client
├── message/ # Message persistence + types
├── permission/ # Permission management (pub/sub)
├── pubsub/ # Generic pub/sub event system
├── session/ # Session lifecycle
├── tui/
│ ├── tui.go # Main TUI model (appModel)
│ ├── components/ # Reusable UI components
│ ├── image/ # Terminal image rendering
│ ├── layout/ # Layout helpers (PlaceOverlay)
│ ├── page/ # Page views (Chat, Logs)
│ ├── styles/ # Styling constants
│ ├── theme/ # Theme management
│ └── util/ # TUI utilities (CmdHandler, etc.)
└── version/ # Version info
Agent Loop Interface (internal/llm/agent/)
type AgentEventType string
const (
AgentEventTypeError AgentEventType = "error"
AgentEventTypeResponse AgentEventType = "response"
AgentEventTypeSummarize AgentEventType = "summarize"
)
type AgentEvent struct {
Type AgentEventType
Message *message.Message
Error error
SessionID string
Progress int
Done bool
}
type Service interface {
Model() models.Model
Run(ctx context.Context, sessionID string, content string,
attachments []message.Attachment) chan AgentEvent
Cancel()
IsSessionBusy(sessionID string) bool
IsBusy() bool
Update(model models.Model)
Summarize(ctx context.Context, sessionID string) chan AgentEvent
}
Agent loop cycle (processGeneration):
- Load message history via `messages.List(sessionID)`
- Launch async title generation for the first message
- Handle summary context (truncate from summary point)
- Create user message, persist to DB
- Agentic loop: call `streamAndHandleEvents()` repeatedly until `FinishReason != FinishReasonToolUse`
- Each iteration: append assistant message + tool results, re-send to LLM
streamAndHandleEvents handles these event types:
- `EventThinkingDelta` → reasoning content
- `EventContentDelta` → text content
- `EventToolUseStart/Delta/Stop` → tool call accumulation
- `EventComplete` → usage tracking, finish reason
- `EventError` → error handling
Concurrency: an `activeRequests sync.Map` maps `sessionID` → `context.CancelFunc`
Provider Interface (internal/llm/provider/)
type Provider interface {
SendMessages(ctx context.Context, messages []message.Message,
tools []tools.BaseTool) (*ProviderResponse, error)
StreamResponse(ctx context.Context, messages []message.Message,
tools []tools.BaseTool) <-chan ProviderEvent
Model() models.Model
}
Provider files: anthropic.go, openai.go, gemini.go, bedrock.go,
azure.go, copilot.go, vertexai.go
OpenAI-compatible providers reuse the client:
case models.ProviderGROQ:
clientOptions.openaiOptions = append(clientOptions.openaiOptions,
WithOpenAIBaseURL("https://api.groq.com/openai/v1"))
return &baseProvider[OpenAIClient]{...}
Generic provider wrapper:
type baseProvider[C ProviderClient] struct {
options providerClientOptions
client C
}
Model Registry (internal/llm/models/)
type Model struct {
ID, Name string
Provider ModelProvider
APIModel string
    CostPer1MIn         float64
    CostPer1MOut        float64
ContextWindow int64
DefaultMaxTokens int64
CanReason bool
SupportsAttachments bool
}
var SupportedModels = map[ModelID]Model{...}
func init() {
maps.Copy(SupportedModels, AnthropicModels)
maps.Copy(SupportedModels, OpenAIModels)
maps.Copy(SupportedModels, GeminiModels)
// ...9 provider model maps merged
}
Tool Interface (internal/llm/tools/)
type BaseTool interface {
Info() ToolInfo
Run(ctx context.Context, params ToolCall) (ToolResponse, error)
}
type ToolInfo struct {
Name string
Description string
Parameters map[string]any // JSON Schema
Required []string
}
type ToolCall struct {
ID string `json:"id"`
Name string `json:"name"`
Input string `json:"input"` // raw JSON
}
type ToolResponse struct {
Type toolResponseType `json:"type"` // "text" or "image"
Content string `json:"content"`
Metadata string `json:"metadata,omitempty"`
IsError bool `json:"is_error"`
}
Tool files: bash.go, edit.go, write.go, view.go, glob.go,
grep.go, ls.go, patch.go, fetch.go, diagnostics.go,
sourcegraph.go, file.go, shell/shell.go
TUI Model (internal/tui/tui.go)
type appModel struct {
width, height int
currentPage page.PageID
pages map[page.PageID]tea.Model
loadedPages map[page.PageID]bool
status core.StatusCmp
app *app.App
// Dialog flags
showPermissions, showSessionDialog, showCommandDialog,
showModelDialog, showInitDialog, showThemeDialog,
showFilepicker, showMultiArgumentsDialog, showHelp, showQuit bool
}
Agent ↔ TUI bridge: setupSubscriptions() creates goroutines subscribing
to pubsub.Broker channels. Events forwarded to chan tea.Msg for Bubble Tea.
Auto-compaction trigger:
if tokens >= int64(float64(contextWindow)*0.95) && config.Get().AutoCompact {
return a, util.CmdHandler(startCompactSessionMsg{})
}
Agent Types (Go)
| Agent | Purpose | Tools | Tokens |
|---|---|---|---|
| `AgentCoder` | Main coding | Full | Model default |
| `AgentTask` | Sub-task exec | Full | Separate context |
| `AgentTitle` | Title generation | None | 80 max |
| `AgentSummarizer` | Auto-compaction | None | Summarize |
Key Difference: Go has NO client/server split
The Go version is monolithic — TUI and agent run in-process, communicating
via pubsub.Broker channels. Non-interactive mode (--prompt) runs the
agent directly. A client/server split would need to be added on top.
APPENDIX B: TypeScript Plugin Hook Events
The TS version's plugin system has 30+ hook events. These map to the lifecycle points where you'd want extensibility:
Tool lifecycle:
tool.execute.before # Intercept before tool runs
tool.execute.after # Post-process tool results
Session lifecycle:
session.created # New session started
session.compacted # Context was compacted
session.idle # Agent finished working
session.error # Error occurred
Message lifecycle:
message.updated # Message content changed
message.part.updated # Message part changed
Permission lifecycle:
permission.asked # Permission prompt shown
permission.replied # User responded
File lifecycle:
file.edited # File was modified
file.watcher.updated # File system change detected
Shell:
shell.env # Inject environment variables
LSP:
lsp.client.diagnostics # Diagnostic results
lsp.updated # LSP state changed
TUI:
tui.prompt.append # Add to prompt
tui.command.execute # Command invoked
tui.toast.show # Show notification
Other:
command.executed # CLI command ran
experimental.session.compacting # Modify compaction prompt