CLI Reference

Interactive workbench for developing and testing Experts.

perstack start [expertKey] [query] [options]

Arguments:

  • [expertKey]: Expert key (optional — prompts if not provided)
  • [query]: Input query (optional — prompts if not provided)

Opens a text-based UI for iterating on Expert definitions. See Running Experts.

Headless execution for production and automation.

perstack run <expertKey> <query> [options]

Arguments:

  • <expertKey>: Expert key (required)
    • Examples: my-expert, @org/my-expert, @org/my-expert@1.0.0
  • <query>: Input query (required)

Outputs JSON events to stdout.
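Because run writes its events to stdout as JSON, the output can be consumed by downstream scripts. A minimal sketch, assuming newline-delimited events with a type field (the event shape here is illustrative, not the documented schema):

```shell
# A sample event line as `perstack run` might emit it on stdout
# (shape assumed for illustration)
event='{"type":"completeRun","stepNumber":12}'

# Keep only completion events, e.g. as the final gate in a CI script
echo "$event" | grep -q '"type":"completeRun"' && echo "run completed"
```

In practice you would pipe the real command's output into the filter instead of echoing a sample line.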

Both start and run accept the same options:

Model options:

  • --provider <provider>: LLM provider (default: anthropic)
  • --model <model>: Model name (default: claude-sonnet-4-5)

Providers: anthropic, google, openai, deepseek, ollama, azure-openai, amazon-bedrock, google-vertex

Execution limits:

  • --max-steps <n>: Maximum total steps across all Runs in a Job (default: 100)
  • --max-retries <n>: Max retry attempts per generation (default: 5)
  • --timeout <ms>: Timeout per generation in milliseconds (default: 300000)

Reasoning:

  • --reasoning-budget <budget>: Reasoning budget for native LLM reasoning (minimal, low, medium, high, or a token count)

Configuration:

  • --config <path>: Path to perstack.toml (default: auto-discover from cwd)
  • --env-path <path...>: Environment file paths (default: .env, .env.local)

Job management:

  • --job-id <id>: Custom Job ID for a new Job (default: auto-generated)
  • --continue: Continue the latest Job with a new Run
  • --continue-job <id>: Continue a specific Job with a new Run
  • --resume-from <id>: Resume from a specific checkpoint (requires --continue-job)

Combining options:

# Continue latest Job from its latest checkpoint
--continue
# Continue specific Job from its latest checkpoint
--continue-job <jobId>
# Continue specific Job from a specific checkpoint
--continue-job <jobId> --resume-from <checkpointId>

Note: --resume-from requires --continue-job (Job ID must be specified). You can only resume from the Coordinator Expert’s checkpoints.

  • -i, --interactive-tool-call-result: Treat the query as an interactive tool call result

Use with --continue to respond to interactive tool calls from the Coordinator Expert.

  • --filter <types>: Filter events by type (comma-separated, e.g., completeRun,stopRunByError)
  • --verbose: Enable verbose logging (see Verbose Mode)

The --verbose flag enables detailed logging for debugging purposes, showing additional runtime information in the output.

# Basic execution (creates a new Job)
npx perstack run my-expert "Review this code"

# With model options
npx perstack run my-expert "query" \
  --provider google \
  --model gemini-2.5-pro \
  --max-steps 100

# Continue a Job with a follow-up
npx perstack run my-expert "initial query"
npx perstack run my-expert "follow-up" --continue

# Continue a specific Job from its latest checkpoint
npx perstack run my-expert "continue" --continue-job job_abc123

# Continue a specific Job from a specific checkpoint
npx perstack run my-expert "retry with different approach" \
  --continue-job job_abc123 \
  --resume-from checkpoint_xyz

# Custom Job ID for a new Job
npx perstack run my-expert "query" --job-id my-custom-job

# Respond to an interactive tool call
npx perstack run my-expert "user response" --continue -i

# Custom config
npx perstack run my-expert "query" \
  --config ./configs/production.toml \
  --env-path .env.production

# Registry Experts
npx perstack run tic-tac-toe "Let's play!"
npx perstack run @org/expert@1.0.0 "query"

View execution history and events for debugging.

perstack log [options]

Purpose:

Inspect job/run execution history and events for debugging. This command is designed for both human inspection and AI agent usage, making it easy to diagnose issues in Expert runs.

Default Behavior:

When called without options, shows a summary of the latest job, including:

  • A "(showing latest job)" indicator when no --job is specified
  • A "Storage:" line showing where the data is stored
  • A maximum of 100 events (use --take 0 to show all)
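Together, --take and --offset page through long event logs. A sketch of the offset arithmetic for a given page (page and take are illustrative shell variables, not CLI options):

```shell
# Page through events 100 at a time; page numbers start at 1
page=3
take=100
offset=$(( (page - 1) * take ))

# The invocation for page 3 would be:
echo "perstack log --take $take --offset $offset"
```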

Options:

  • --job <jobId>: Show events for a specific job
  • --run <runId>: Show events for a specific run
  • --checkpoint <id>: Show checkpoint details
  • --step <step>: Filter by step number (e.g., 5, >5, 1-10)
  • --type <type>: Filter by event type
  • --errors: Show only error-related events
  • --tools: Show only tool call events
  • --delegations: Show only delegation events
  • --filter <expression>: Simple filter expression
  • --json: Output as JSON (machine-readable)
  • --pretty: Pretty-print JSON output
  • --verbose: Show full event details
  • --take <n>: Number of events to display (default: 100; 0 for all)
  • --offset <n>: Number of events to skip (default: 0)
  • --context <n>: Include N events before/after matches
  • --messages: Show message history for a checkpoint
  • --summary: Show a summarized view

Event Types:

  • startRun: Run started
  • callTools: Tool calls made
  • resolveToolResults: Tool results received
  • callDelegate: Delegation to another expert
  • stopRunByError: Error occurred
  • retry: Generation retry
  • completeRun: Run completed
  • continueToNextStep: Step transition
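In scripts that consume log output, the event type is a natural dispatch key. A minimal sketch with a hardcoded type (real values would come from parsed perstack log --json events):

```shell
# Sample event type, taken from the table above
event_type="stopRunByError"

# Route events by type
case "$event_type" in
  stopRunByError|retry) echo "needs attention" ;;
  completeRun)          echo "finished" ;;
  *)                    echo "in progress" ;;
esac
```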

Filter Expression Syntax:

Simple conditions are supported:

# Exact match
--filter '.type == "completeRun"'
# Numeric comparison
--filter '.stepNumber > 5'
--filter '.stepNumber >= 5'
--filter '.stepNumber < 10'
# Array element matching
--filter '.toolCalls[].skillName == "base"'

Step Range Syntax:

--step 5 # Exact step number
--step ">5" # Greater than 5
--step ">=5" # Greater than or equal to 5
--step "1-10" # Range (inclusive)

Examples:

# Show latest job summary
perstack log
# Show all events for a specific job
perstack log --job abc123
# Show events for a specific run
perstack log --run xyz789
# Show checkpoint details with messages
perstack log --checkpoint cp123 --messages
# Show only errors
perstack log --errors
# Show tool calls for steps 5-10
perstack log --tools --step "5-10"
# Filter by event type
perstack log --job abc123 --type callTools
# JSON output for automation
perstack log --job abc123 --json
# Error diagnosis with context
perstack log --errors --context 5
# Filter with expression
perstack log --filter '.toolCalls[].skillName == "base"'
# Summary view
perstack log --summary

Output Format:

Terminal output (default) shows human-readable format with colors:

Job: abc123 (completed)
Expert: my-expert@1.0.0
Started: 2024-12-23 10:30:15
Steps: 12

Events:
─────────────────────────────────────────────
[Step 1] startRun             10:30:15
  Expert: my-expert@1.0.0
  Query: "Analyze this code..."

[Step 2] callTools            10:30:18
  Tools: read_file, write_file

[Step 3] resolveToolResults   10:30:22
  ✓ read_file: Success
  ✗ write_file: Permission denied
─────────────────────────────────────────────

JSON output (--json) for machine parsing:

{
  "job": { "id": "abc123", "status": "completed" },
  "events": [
    { "type": "startRun", "stepNumber": 1 }
  ],
  "summary": {
    "totalEvents": 15,
    "errorCount": 0
  }
}
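The summary block makes automated health checks straightforward. A sketch that inspects errorCount, using python3 for JSON parsing; the JSON is a hardcoded sample, where a real script would substitute the output of perstack log --job <id> --json:

```shell
# Hardcoded sample; a real script would use: perstack log --job abc123 --json
json='{"job":{"id":"abc123","status":"completed"},"summary":{"totalEvents":15,"errorCount":0}}'

# Extract summary.errorCount with the Python stdlib json module
errors=$(echo "$json" | python3 -c 'import json, sys; print(json.load(sys.stdin)["summary"]["errorCount"])')
[ "$errors" -eq 0 ] && echo "no errors"
```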

Pre-collect tool definitions to enable instant LLM inference.

perstack install [options]

Purpose:

By default, Perstack initializes MCP skills at runtime to discover their tool definitions. This can add 500ms-6s startup latency per skill. perstack install solves this by:

  1. Initializing all skills once and collecting their tool schemas
  2. Caching the schemas in a perstack.lock file
  3. Enabling the runtime to start LLM inference immediately using cached schemas
  4. Deferring actual MCP connections until tools are called
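Since the lockfile is optional, a deploy script can check for it and fall back gracefully rather than fail. A minimal pre-flight sketch (run in a fresh temporary directory here so the outcome is deterministic):

```shell
# Check whether cached tool schemas are available alongside perstack.toml
dir=$(mktemp -d)
cd "$dir"

if [ -f perstack.lock ]; then
  msg="using cached tool schemas"
else
  msg="no lockfile; skills will initialize at runtime (run: perstack install)"
fi
echo "$msg"
```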

Options:

  • --config <path>: Path to perstack.toml (default: auto-discover from cwd)
  • --env-path <path...>: Environment file paths (default: .env, .env.local)

Example:

# Generate lockfile for current project
perstack install
# Generate lockfile for specific config
perstack install --config ./configs/production.toml
# Re-generate after adding new skills
perstack install

Output:

Creates perstack.lock in the same directory as perstack.toml. This file contains:

  • All expert definitions (including resolved delegates from registry)
  • All tool definitions for each expert’s skills

When to run:

  • After adding or modifying skills in perstack.toml
  • After updating MCP server dependencies
  • Before deploying to production for faster startup

Note: The lockfile is optional. If not present, skills are initialized at runtime as usual.