
The Orchestrator Pattern

Build an orchestrator agent that creates worker agents on demand, passes them tasks via webhooks, collects results through callback URLs, and tears them down when done.

This is the most powerful multi-agent pattern — fully automated agent lifecycle management.

Orchestrator → worker data flow:

  1. Create worker cage: the orchestrator uses the API to spin up an isolated worker.
     lobster create research-worker --size starter
  2. Configure task + callback: inject the task payload and callback URL as env vars.
     lobster env set research-worker TASK="Analyze Q3 data" CALLBACK_URL="https://gw.../hook/{orch}/results"
  3. Send task via webhook: POST task details to the worker's webhook URL.
     POST /hook/{workerId}/{token}/task → { taskId, payload, callbackUrl }
  4. Worker processes: the worker wakes and processes the task in its isolated environment.
  5. Callback with results: the worker POSTs results back to the orchestrator's callback URL.
     POST {callbackUrl} → { taskId, status: "completed", result: {...} }
  6. Cleanup: the orchestrator collects results and destroys the worker.
     lobster destroy research-worker

Total cost: the worker ran for 4 minutes = 4 credits of your plan allowance. Fully isolated execution. No shared state. Automatic cleanup.

How it works

The orchestrator is itself a LobsterCage cage. It uses the REST API or CLI to manage other cages programmatically. Workers are ephemeral — created for a specific task, destroyed when done.

The communication layer uses webhooks and callback URLs:

  1. The orchestrator creates a worker cage and retrieves its webhook URL
  2. It sends a task to the worker via an HTTP POST to the webhook URL
  3. The task payload includes a callback URL pointing back to the orchestrator’s own webhook
  4. The worker processes the task, then POSTs results to the callback URL
  5. The orchestrator receives the results and destroys the worker

Every cage gets a webhook URL automatically at creation. No additional setup required.

The complete data flow

Step 1: Orchestrator creates a worker

Using the CLI (available inside any cage):

lobster create research-worker --size starter

Or the REST API:

curl -X POST https://api.lobstercage.ai/v1/cages \
  -H "Authorization: Bearer $LOBSTERCAGE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "research-worker", "size": "starter"}'

Response:

{
  "id": "cage_a1b2c3",
  "name": "research-worker",
  "status": "pending",
  "webhookUrl": "https://gateway.lobstercage.ai/hook/cage_a1b2c3/tok_x9y8z7/"
}

The webhookUrl is how the orchestrator will send tasks to this worker.

Step 2: Configure the worker

Inject the task and callback URL as environment variables:

lobster env set research-worker \
  TASK="Analyze Q3 revenue data from the provided dataset" \
  CALLBACK_URL="https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results" \
  ORCHESTRATOR_ID="cage_orch01" \
  ANTHROPIC_API_KEY=sk-ant-...

The CALLBACK_URL points to the orchestrator’s own webhook URL with a /results path suffix so the orchestrator can distinguish callbacks from other inbound requests.
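One way to make use of that suffix is a small path router inside the orchestrator. This is a minimal sketch; the handler names are illustrative, not part of the LobsterCage API:

```javascript
// Route inbound webhook requests by path suffix so the orchestrator
// can tell worker callbacks apart from other traffic.
function routeWebhook(path) {
  if (path.endsWith('/results')) return 'handleResult'; // worker callback
  if (path.endsWith('/task')) return 'handleTask';      // new top-level task
  return 'handleUnknown';                               // anything else
}

console.log(routeWebhook('/hook/cage_orch01/tok_m4n5o6/results')); // "handleResult"
```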

Step 3: Start the worker and send a task

Start the cage, then POST the task payload:

lobster start research-worker

Once the worker is running, send the detailed task via webhook:

POST https://gateway.lobstercage.ai/hook/cage_a1b2c3/tok_x9y8z7/task
Content-Type: application/json

{
  "taskId": "task_001",
  "type": "research",
  "payload": {
    "query": "Analyze Q3 revenue trends",
    "dataUrl": "https://storage.example.com/q3-data.csv",
    "outputFormat": "summary_with_charts"
  },
  "callbackUrl": "https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results"
}
```

If the worker hasn’t finished booting yet, the gateway proxy buffers the request and delivers it once the worker is ready. The orchestrator gets an immediate 202 Accepted response.
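In orchestrator code, that means both 200 and 202 count as success when sending a task. A sketch, with the fetch implementation injected for testability (sendTask is a hypothetical helper, not an official client):

```javascript
// Send a task to a worker's webhook; treat immediate delivery (200) and
// gateway-buffered delivery (202 Accepted) both as success.
async function sendTask(taskUrl, task, fetchImpl = fetch) {
  const res = await fetchImpl(taskUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(task),
  });
  if (res.status !== 200 && res.status !== 202) {
    throw new Error(`task not accepted: ${res.status}`);
  }
  // 202 means the worker was still booting and the gateway buffered the task
  return res.status === 202 ? 'buffered' : 'delivered';
}
```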

Step 4: Worker processes the task

The worker cage boots, OpenClaw starts, and the agent receives the task via its webhook handler. It then:

  1. Reads the task payload
  2. Downloads the dataset from dataUrl
  3. Calls the AI provider to analyze the data
  4. Generates the requested output
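The worker side of the loop can be sketched like this. analyze() and post() stand in for the real AI call and the HTTP POST; they are injected for clarity and are not LobsterCage APIs:

```javascript
// Worker-side task handler: process the payload, then report back.
async function runWorkerTask(task, { analyze, post }) {
  const result = await analyze(task.payload); // download data, analyze, format output
  // POST results back to the orchestrator's callback URL
  await post(task.callbackUrl, {
    taskId: task.taskId,
    status: 'completed',
    result,
  });
}
```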

Step 5: Worker calls back with results

When processing is complete, the worker POSTs results to the callback URL:

POST https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results
Content-Type: application/json

{
  "taskId": "task_001",
  "status": "completed",
  "result": {
    "summary": "Q3 revenue increased 23% YoY, driven primarily by...",
    "highlights": [
      "Enterprise segment grew 45%",
      "Churn decreased to 2.1%",
      "Average deal size up 18%"
    ],
    "chartData": { ... }
  },
  "workerId": "cage_a1b2c3",
  "durationSeconds": 47
}
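Before trusting an inbound callback, the orchestrator should validate its shape. A minimal sketch based on the payload above; the "failed" status is an assumption for error reporting, not documented platform behavior:

```javascript
// Validate an incoming callback body before storing it.
function isValidCallback(body) {
  return (
    typeof body === 'object' && body !== null &&
    typeof body.taskId === 'string' &&
    (body.status === 'completed' || body.status === 'failed') &&
    // a completed task must actually carry a result
    (body.status !== 'completed' || body.result !== undefined)
  );
}
```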

If the orchestrator happens to be hibernated when the callback arrives, the gateway buffers the result and wakes the orchestrator — the same mechanism that handles all webhook delivery in LobsterCage.

Step 6: Orchestrator collects and cleans up

The orchestrator receives the callback, stores the result, and destroys the worker:

lobster destroy research-worker
Cage research-worker destroyed

Total cost for this task: 4 credits for 4 minutes of worker runtime. On a Builder plan ($49/month, 20,000 credits), that’s a tiny fraction of your monthly allowance.

Working example: parallel research

An orchestrator that decomposes a research question into subtasks, farms them out to workers in parallel, and synthesizes the results.

// orchestrator/index.js — runs inside the orchestrator cage
import { LobsterCageClient } from './lib/client.js'

const lc = new LobsterCageClient(process.env.LOBSTERCAGE_API_KEY)
const SELF_WEBHOOK = process.env.LOBSTERCAGE_WEBHOOK_BASE_URL

export async function handleTask(task) {
  // Decompose the research question into subtasks
  const subtasks = await decomposeQuestion(task.query)

  // Create a worker for each subtask
  const workers = await Promise.all(
    subtasks.map(async (subtask, i) => {
      const cage = await lc.createCage({
        name: `research-${task.taskId}-${i}`,
        size: 'starter',
      })

      await lc.setEnv(cage.id, {
        ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
        TASK: JSON.stringify(subtask),
        CALLBACK_URL: `${SELF_WEBHOOK}/results`,
      })

      await lc.startCage(cage.id)

      // Send detailed task via webhook
      await fetch(`${cage.webhookUrl}/task`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          taskId: `${task.taskId}-${i}`,
          parentTaskId: task.taskId,
          payload: subtask,
          callbackUrl: `${SELF_WEBHOOK}/results`,
        }),
      })

      return { id: cage.id, name: cage.name, subtask }
    })
  )

  // Track workers — results arrive via callback webhooks
  await saveWorkerManifest(task.taskId, workers)
}

export async function handleResult(result) {
  // Store the result
  await saveSubtaskResult(result.parentTaskId, result)

  // Check if all subtasks are done
  const manifest = await getWorkerManifest(result.parentTaskId)
  const results = await getAllResults(result.parentTaskId)

  if (results.length === manifest.workers.length) {
    // All done — synthesize and deliver
    const synthesis = await synthesizeResults(results)
    await deliverFinalResult(result.parentTaskId, synthesis)

    // Clean up all workers
    await Promise.all(
      manifest.workers.map(w => lc.destroyCage(w.id))
    )
  }
}

Scaling considerations

Isolation: Each worker cage is fully isolated. One worker crashing, hanging, or running malicious code cannot affect the orchestrator or other workers.

Parallelism: Create as many workers as your plan allows (3 on Starter, 10 on Builder, 25 on Team). Workers run concurrently — a 10-subtask research job with 10 parallel workers takes as long as the slowest subtask, not the sum.
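When a job has more subtasks than your plan's cage limit, one option is to run them in batches. A sketch, where maxWorkers would be 3/10/25 depending on plan and runOne stands in for the full create → send task → await callback → destroy cycle:

```javascript
// Run subtasks at most maxWorkers at a time, preserving input order.
async function runInBatches(subtasks, maxWorkers, runOne) {
  const results = [];
  for (let i = 0; i < subtasks.length; i += maxWorkers) {
    const batch = subtasks.slice(i, i + maxWorkers);
    // each batch runs fully in parallel; batches run sequentially
    results.push(...(await Promise.all(batch.map(runOne))));
  }
  return results;
}
```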

Variable sizing: Match worker size to the task. Use starter for lightweight research, standard for code execution, power for data processing. The orchestrator can choose dynamically.
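A dynamic size choice can be as simple as a lookup table. This mapping mirrors the guidance above but is an application-level choice, not a platform default:

```javascript
// Pick a worker size per task type; unknown types fall back to the cheapest.
function sizeForTask(type) {
  const sizes = { research: 'starter', code: 'standard', data: 'power' };
  return sizes[type] ?? 'starter';
}
```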

Cost control: Each worker only runs for the duration of its task. A 4-minute research task costs 4 credits. The orchestrator can set a timeout and destroy workers that run too long:

// Kill workers that take more than 10 minutes;
// clear this timer if the worker's callback arrives first
const timer = setTimeout(() => lc.destroyCage(worker.id), 10 * 60 * 1000)

Security model

  • No shared credentials: Each worker gets only the secrets it needs for its specific task. The orchestrator’s API keys are never exposed to workers.
  • Full cage isolation: Workers can’t access the orchestrator’s filesystem, network, or environment. Communication happens exclusively through webhooks.
  • Webhook token security: Each cage’s webhook URL includes a unique token. Workers can only send callbacks to URLs the orchestrator explicitly provides.
  • Scoped API keys: The orchestrator can use an API key scoped to cages:write — enough to create and manage workers, but not to access billing or account settings.

Cost analysis

All costs are in credits (1 credit = 1 minute of Starter compute). See billing for plan pricing.

  Scenario                         Workers   Avg runtime   Credits used
  Simple research                  1         3 min         3
  Parallel research (5 subtasks)   5         4 min each    20
  Data processing pipeline         3         10 min each   30
  Code review + test               2         8 min each    16
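The credit figures above follow from a simple product. A sketch of the arithmetic (1 credit = 1 minute of Starter compute; larger sizes would scale this, which this simple version ignores):

```javascript
// Credits consumed by a batch of identical workers.
function creditsUsed(workers, minutesEach) {
  return workers * minutesEach;
}

console.log(creditsUsed(5, 4)); // 20 (the parallel-research row)
```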

Plus the orchestrator’s own runtime — but since it’s mostly waiting for callbacks, it can hibernate between tasks too. A Builder plan (20,000 credits/month) can handle hundreds of orchestrated tasks.
