
Walkthrough

This walkthrough takes you from zero to production integration.

First, set your Anthropic API key:

export ANTHROPIC_API_KEY=sk-ant-...

Generate an Expert definition interactively:

npx create-expert "Create a fitness assistant that delegates to a pro trainer"

create-expert does more than scaffold a file — it:

  • generates Expert definitions in perstack.toml based on your description
  • tests them against real-world scenarios
  • analyzes execution history and output to evaluate the definitions
  • iterates on definitions until behavior stabilizes
  • reports capabilities and limitations

The result is a perstack.toml ready to use:

perstack.toml
[experts."fitness-assistant"]
description = "Manages fitness records and suggests training menus"
instruction = """
Conduct interview sessions and manage records in `./fitness-log.md`.
Collaborate with `pro-trainer` for professional training menus.
"""
delegates = ["pro-trainer"]

[experts."pro-trainer"]
description = "Suggests scientifically-backed training menus"
instruction = "Provide split routines and HIIT plans tailored to user history."

You can also write perstack.toml manually — create-expert is a convenient starting point, not a requirement.

npx perstack start fitness-assistant "Start today's session"

perstack start opens a text-based UI for developing and testing Experts. You get real-time feedback and can iterate on definitions without deploying anything.

npx perstack run fitness-assistant "Start today's session"

perstack run outputs JSON events to stdout — designed for automation and CI pipelines.
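Because the output is meant for machines, a pipeline can consume it line by line. The sketch below assumes each stdout line is one JSON object with a `type` field; the sample events are illustrative, not actual Perstack payloads:

```typescript
// Sketch: parse newline-delimited JSON events such as those emitted by
// `perstack run`. The event shape here is an assumption for illustration;
// only a `type` field is relied on.
const stdout = [
  '{"type":"stepStart","step":1}',
  '{"type":"toolCall","step":1,"name":"readFile"}',
  '{"type":"stepEnd","step":1}',
].join("\n")

const events = stdout
  .split("\n")
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line) as { type: string })

// Count tool calls, e.g. for a CI assertion
const toolCalls = events.filter((e) => e.type === "toolCall")
console.log(toolCalls.length) // 1
```

In CI, a script like this can fail the build when unexpected event types (for example, errors) appear in the stream.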

Aspect          What Perstack Does
State           Both Experts share the workspace (./fitness-log.md), not conversation history.
Collaboration   fitness-assistant delegates to pro-trainer autonomously.
Observability   Every step is visible as a structured event.
Isolation       Each Expert has its own context window. No prompt bloat.

After running an Expert, inspect what happened:

npx perstack log

By default, this shows a summary of the latest job — the Expert that ran, the steps it took, and any errors.

Key options for deeper inspection:

Option          Purpose
--errors        Show only error-related events
--tools         Show only tool call events
--step "5-10"   Filter by step range
--summary       Show summarized view
--json          Machine-readable output

This matters because debugging agents across model changes, requirement changes, and prompt iterations requires visibility into every decision the agent made. perstack log gives you that visibility without adding instrumentation code.
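The `--step` filter, for example, amounts to keeping only events whose step number falls inside a range. A minimal sketch of that semantics (the event shape and the inclusive-range behavior are assumptions for illustration, not taken from the CLI source):

```typescript
// Sketch: replicate the step-range filter of `perstack log --step "5-10"`.
// The event shape is hypothetical; the range is assumed inclusive.
interface LoggedEvent {
  step: number
  type: string
}

function filterByStepRange(events: LoggedEvent[], range: string): LoggedEvent[] {
  const [start, end] = range.split("-").map(Number)
  return events.filter((e) => e.step >= start && e.step <= end)
}

const sample: LoggedEvent[] = [
  { step: 4, type: "toolCall" },
  { step: 5, type: "stepStart" },
  { step: 10, type: "stepEnd" },
  { step: 11, type: "toolCall" },
]
console.log(filterByStepRange(sample, "5-10").length) // 2
```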

See CLI Reference for the full list of options.

npx perstack install

This creates a perstack.lock file that caches tool schemas for all MCP skills. Without the lockfile, Perstack initializes MCP skills at runtime to discover their tool definitions — which can add 500ms–6s startup latency per skill.

Workflow:

  1. Develop without a lockfile — MCP skills are resolved dynamically
  2. Run perstack install before deploying — tool schemas are cached
  3. Deploy with perstack.lock — the runtime starts LLM inference immediately

When to re-run: after adding or modifying skills in perstack.toml, or after updating MCP server dependencies.

The lockfile is optional. If not present, skills are initialized at runtime as usual.

The CLI is for prototyping. For production, integrate Experts into your application via the Execution API, sandbox providers, or runtime embedding.

The Execution API is the primary path for production integration. Your application starts jobs, streams events, and sends follow-up queries over HTTP.

Start a job:

curl -X POST https://api.perstack.ai/api/v1/jobs \
  -H "Authorization: Bearer $PERSTACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "applicationId": "your-app-id",
    "expertKey": "fitness-assistant",
    "query": "Start today'\''s session",
    "provider": "anthropic"
  }'

Stream events (SSE):

curl -N https://api.perstack.ai/api/v1/jobs/{jobId}/stream \
  -H "Authorization: Bearer $PERSTACK_API_KEY"

The stream emits Server-Sent Events: message events contain PerstackEvent payloads, error events signal failures, and complete events indicate the job finished.
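On the wire, an SSE stream is a sequence of text frames separated by blank lines, each carrying `event:` and `data:` fields. A minimal parser for frames like the ones described above (the frame contents here are illustrative, not actual Perstack payloads):

```typescript
// Sketch: parse Server-Sent Events frames into { event, data } pairs.
// Per the SSE format, a frame without an `event:` field defaults to "message".
function parseSse(raw: string): { event: string; data: string }[] {
  return raw
    .split("\n\n")
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = "message" // SSE default event type
      const dataLines: string[] = []
      for (const line of frame.split("\n")) {
        if (line.startsWith("event:")) event = line.slice(6).trim()
        else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim())
      }
      return { event, data: dataLines.join("\n") }
    })
}

const raw =
  'event: message\ndata: {"type":"stepStart"}\n\n' +
  "event: complete\ndata: {}\n\n"
console.log(parseSse(raw).length) // 2
```

In practice you would let an SSE client library (or the `@perstack/api-client` streaming shown below in this walkthrough) handle this framing; the sketch just shows what the stream contains.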

Continue a job:

curl -X POST https://api.perstack.ai/api/v1/jobs/{jobId}/continue \
  -H "Authorization: Bearer $PERSTACK_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Now create a weekly schedule"
  }'
The same flow is available through the TypeScript client:

import { createApiClient } from "@perstack/api-client"

const client = createApiClient({
  apiKey: process.env.PERSTACK_API_KEY,
})

// Start a job
const result = await client.jobs.start({
  applicationId: "your-app-id",
  expertKey: "fitness-assistant",
  query: "Start today's session",
  provider: "anthropic",
})

if (!result.ok) {
  // result.error.type: "http" | "network" | "timeout" | "validation" | "abort"
  console.error(result.error.message)
  process.exit(1)
}

const jobId = result.data.data.job.id

// Stream events
const stream = await client.jobs.stream(jobId)
if (stream.ok) {
  for await (const event of stream.data.events) {
    console.log(event.type, event)
  }
}

// Continue with a follow-up
await client.jobs.continue(jobId, {
  query: "Now create a weekly schedule",
})

Every method returns an ApiResult<T> — either { ok: true, data } or { ok: false, error }. Error types are: "http", "network", "timeout", "validation", and "abort".

Perstack’s isolation model maps naturally to container and serverless platforms:

  • Docker
  • AWS ECS
  • Google Cloud Run
  • Kubernetes
  • Cloudflare Workers

Each Expert runs in its own sandboxed environment. See Going to Production for the Docker setup pattern. Detailed guides for other providers are coming soon.

For tighter integration, embed the runtime directly in your TypeScript/JavaScript application:

import { run } from "@perstack/runtime"

const checkpoint = await run({
  setting: {
    model: "claude-sonnet-4-5-20250929",
    providerConfig: { providerName: "anthropic" },
    expertKey: "fitness-assistant",
    input: { text: "Start today's session" },
  },
})

You can also listen for events during execution:

import { run } from "@perstack/runtime"

const checkpoint = await run({
  setting: {
    model: "claude-sonnet-4-5-20250929",
    providerConfig: { providerName: "anthropic" },
    expertKey: "fitness-assistant",
    input: { text: "Start today's session" },
  },
  eventListener: (event) => {
    console.log(event.type, event)
  },
})

The CLI is for prototyping. The runtime API is for production. Both use the same perstack.toml.
