# CLI Reference
## Running Experts

### perstack start

Interactive workbench for developing and testing Experts.

```
perstack start [expertKey] [query] [options]
```

Arguments:

- `[expertKey]`: Expert key (optional; prompts if not provided)
- `[query]`: Input query (optional; prompts if not provided)

Opens a text-based UI for iterating on Expert definitions. See Running Experts.
### perstack run

Headless execution for production and automation.

```
perstack run <expertKey> <query> [options]
```

Arguments:

- `<expertKey>`: Expert key (required). Examples: `my-expert`, `@org/my-expert`, `@org/my-expert@1.0.0`
- `<query>`: Input query (required)

Outputs JSON events to stdout.
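Because events arrive on stdout as JSON, automation can consume them programmatically. A minimal sketch, assuming one JSON object per line, each with a `type` field matching the event types listed under `perstack log` below; the exact stream format is an assumption, not a guarantee:

```python
import json

def parse_events(stdout_text: str) -> list[dict]:
    """Parse newline-delimited JSON events, skipping blank lines."""
    return [json.loads(line) for line in stdout_text.splitlines() if line.strip()]

# Illustrative stdout; not real perstack output.
sample = "\n".join([
    '{"type": "startRun", "stepNumber": 1}',
    '{"type": "callTools", "stepNumber": 2}',
    '{"type": "completeRun", "stepNumber": 3}',
])

events = parse_events(sample)
# Watch for terminal events such as completeRun or stopRunByError.
terminal = [e for e in events if e["type"] in ("completeRun", "stopRunByError")]
print(terminal[0]["type"])  # completeRun
```

In a real pipeline the `sample` string would be replaced by the captured stdout of `npx perstack run …`.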
## Shared Options

Both `start` and `run` accept the same options:
### Model and Provider

| Option | Description | Default |
|---|---|---|
| `--provider <provider>` | LLM provider | `anthropic` |
| `--model <model>` | Model name | `claude-sonnet-4-5` |

Providers: `anthropic`, `google`, `openai`, `deepseek`, `ollama`, `azure-openai`, `amazon-bedrock`, `google-vertex`
### Execution Control

| Option | Description | Default |
|---|---|---|
| `--max-steps <n>` | Maximum total steps across all Runs in a Job | 100 |
| `--max-retries <n>` | Max retry attempts per generation | 5 |
| `--timeout <ms>` | Timeout per generation (ms) | 300000 |
### Reasoning

| Option | Description | Default |
|---|---|---|
| `--reasoning-budget <budget>` | Reasoning budget for native LLM reasoning (`minimal`, `low`, `medium`, `high`, or a token count) | - |
### Configuration

| Option | Description | Default |
|---|---|---|
| `--config <path>` | Path to `perstack.toml` | Auto-discover from cwd |
| `--env-path <path...>` | Environment file paths | `.env`, `.env.local` |
### Job and Run Management

| Option | Description |
|---|---|
| `--job-id <id>` | Custom Job ID for a new Job (default: auto-generated) |
| `--continue` | Continue the latest Job with a new Run |
| `--continue-job <id>` | Continue a specific Job with a new Run |
| `--resume-from <id>` | Resume from a specific checkpoint (requires `--continue-job`) |
Combining options:

```
# Continue latest Job from its latest checkpoint
--continue

# Continue specific Job from its latest checkpoint
--continue-job <jobId>

# Continue specific Job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>
```

Note: `--resume-from` requires `--continue-job` (the Job ID must be specified). You can only resume from the Coordinator Expert's checkpoints.
### Interactive

| Option | Description |
|---|---|
| `-i, --interactive-tool-call-result` | Treat the query as an interactive tool call result |

Use with `--continue` to respond to interactive tool calls from the Coordinator Expert.
### Output Filtering (run only)

| Option | Description |
|---|---|
| `--filter <types>` | Filter events by type (comma-separated, e.g., `completeRun,stopRunByError`) |
### Verbose Mode

| Option | Description |
|---|---|
| `--verbose` | Enable verbose logging |

The `--verbose` flag enables detailed logging for debugging purposes, showing additional runtime information in the output.
### Examples

```
# Basic execution (creates new Job)
npx perstack run my-expert "Review this code"

# With model options
npx perstack run my-expert "query" \
  --provider google \
  --model gemini-2.5-pro \
  --max-steps 100

# Continue Job with follow-up
npx perstack run my-expert "initial query"
npx perstack run my-expert "follow-up" --continue

# Continue specific Job from latest checkpoint
npx perstack run my-expert "continue" --continue-job job_abc123

# Continue specific Job from specific checkpoint
npx perstack run my-expert "retry with different approach" \
  --continue-job job_abc123 \
  --resume-from checkpoint_xyz

# Custom Job ID for new Job
npx perstack run my-expert "query" --job-id my-custom-job

# Respond to interactive tool call
npx perstack run my-expert "user response" --continue -i

# Custom config
npx perstack run my-expert "query" \
  --config ./configs/production.toml \
  --env-path .env.production

# Registry Experts
npx perstack run tic-tac-toe "Let's play!"
npx perstack run @org/expert@1.0.0 "query"
```

## Debugging and Inspection
### perstack log

View execution history and events for debugging.

```
perstack log [options]
```

Purpose:

Inspect job/run execution history and events for debugging. This command is designed for both human inspection and AI agent usage, making it easy to diagnose issues in Expert runs.
Default Behavior:

When called without options, shows a summary of the latest job with:

- A "(showing latest job)" indicator when no `--job` is specified
- A "Storage:" line showing where data is stored
- A maximum of 100 events (use `--take 0` for all)
Options:

| Option | Description |
|---|---|
| `--job <jobId>` | Show events for a specific job |
| `--run <runId>` | Show events for a specific run |
| `--checkpoint <id>` | Show checkpoint details |
| `--step <step>` | Filter by step number (e.g., `5`, `>5`, `1-10`) |
| `--type <type>` | Filter by event type |
| `--errors` | Show only error-related events |
| `--tools` | Show only tool call events |
| `--delegations` | Show only delegation events |
| `--filter <expression>` | Simple filter expression |
| `--json` | Output as JSON (machine-readable) |
| `--pretty` | Pretty-print JSON output |
| `--verbose` | Show full event details |
| `--take <n>` | Number of events to display (default: 100, 0 for all) |
| `--offset <n>` | Number of events to skip (default: 0) |
| `--context <n>` | Include N events before/after matches |
| `--messages` | Show message history for a checkpoint |
| `--summary` | Show summarized view |
Event Types:

| Event Type | Description |
|---|---|
| `startRun` | Run started |
| `callTools` | Tool calls made |
| `resolveToolResults` | Tool results received |
| `callDelegate` | Delegation to another expert |
| `stopRunByError` | Error occurred |
| `retry` | Generation retry |
| `completeRun` | Run completed |
| `continueToNextStep` | Step transition |
Filter Expression Syntax:

Simple conditions are supported:

```
# Exact match
--filter '.type == "completeRun"'

# Numeric comparison
--filter '.stepNumber > 5'
--filter '.stepNumber >= 5'
--filter '.stepNumber < 10'

# Array element matching
--filter '.toolCalls[].skillName == "base"'
```

Step Range Syntax:

```
--step 5       # Exact step number
--step ">5"    # Greater than 5
--step ">=5"   # Greater than or equal to 5
--step "1-10"  # Range (inclusive)
```

Examples:
```
# Show latest job summary
perstack log

# Show all events for a specific job
perstack log --job abc123

# Show events for a specific run
perstack log --run xyz789

# Show checkpoint details with messages
perstack log --checkpoint cp123 --messages

# Show only errors
perstack log --errors

# Show tool calls for steps 5-10
perstack log --tools --step "5-10"

# Filter by event type
perstack log --job abc123 --type callTools

# JSON output for automation
perstack log --job abc123 --json

# Error diagnosis with context
perstack log --errors --context 5

# Filter with expression
perstack log --filter '.toolCalls[].skillName == "base"'

# Summary view
perstack log --summary
```

Output Format:
Terminal output (default) shows a human-readable format with colors:

```
Job: abc123 (completed)
Expert: my-expert@1.0.0
Started: 2024-12-23 10:30:15
Steps: 12

Events:
─────────────────────────────────────────────
[Step 1] startRun                    10:30:15
  Expert: my-expert@1.0.0
  Query: "Analyze this code..."

[Step 2] callTools                   10:30:18
  Tools: read_file, write_file

[Step 3] resolveToolResults          10:30:22
  ✓ read_file: Success
  ✗ write_file: Permission denied
─────────────────────────────────────────────
```

JSON output (`--json`) for machine parsing:

```json
{
  "job": { "id": "abc123", "status": "completed" },
  "events": [
    { "type": "startRun", "stepNumber": 1 }
  ],
  "summary": { "totalEvents": 15, "errorCount": 0 }
}
```
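For scripting, the `--json` output can be consumed directly. A small sketch of a consumer, assuming only the shape shown in the sample above (`job`, `events`, `summary`); any other field names would be assumptions:

```python
import json

# JSON copied from the sample --json output; a real script would instead
# read the captured stdout of `perstack log --job abc123 --json`.
raw = """
{
  "job": { "id": "abc123", "status": "completed" },
  "events": [
    { "type": "startRun", "stepNumber": 1 }
  ],
  "summary": { "totalEvents": 15, "errorCount": 0 }
}
"""

log = json.loads(raw)
if log["summary"]["errorCount"] > 0:
    # Pull out only the error events for triage.
    errors = [e for e in log["events"] if e["type"] == "stopRunByError"]
    print(f"job {log['job']['id']}: {len(errors)} error events")
else:
    print(f"job {log['job']['id']}: {log['job']['status']}")
```

With the sample data this prints `job abc123: completed`.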
## Performance Optimization

### perstack install

Pre-collect tool definitions to enable instant LLM inference.

```
perstack install [options]
```

Purpose:
By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s of startup latency per skill. `perstack install` solves this by:

- Initializing all skills once and collecting their tool schemas
- Caching the schemas in a `perstack.lock` file
- Enabling the runtime to start LLM inference immediately using the cached schemas
- Deferring actual MCP connections until tools are called
Options:

| Option | Description | Default |
|---|---|---|
| `--config <path>` | Path to `perstack.toml` | Auto-discover from cwd |
| `--env-path <path...>` | Environment file paths | `.env`, `.env.local` |
Example:

```
# Generate lockfile for current project
perstack install

# Generate lockfile for specific config
perstack install --config ./configs/production.toml

# Re-generate after adding new skills
perstack install
```

Output:

Creates `perstack.lock` in the same directory as `perstack.toml`. This file contains:

- All expert definitions (including resolved delegates from the registry)
- All tool definitions for each expert's skills
When to run:

- After adding or modifying skills in `perstack.toml`
- After updating MCP server dependencies
- Before deploying to production, for faster startup

Note: The lockfile is optional. If it is not present, skills are initialized at runtime as usual.