# The Orchestrator Pattern
Build an orchestrator agent that creates worker agents on demand, passes them tasks via webhooks, collects results through callback URLs, and tears them down when done.
This is the most powerful multi-agent pattern — fully automated agent lifecycle management.
## How it works
The orchestrator is itself a LobsterCage cage. It uses the REST API or CLI to manage other cages programmatically. Workers are ephemeral — created for a specific task, destroyed when done.
The communication layer uses webhooks and callback URLs:
- The orchestrator creates a worker cage and retrieves its webhook URL
- It sends a task to the worker via an HTTP POST to the webhook URL
- The task payload includes a callback URL pointing back to the orchestrator’s own webhook
- The worker processes the task, then POSTs results to the callback URL
- The orchestrator receives the results and destroys the worker
Every cage gets a webhook URL automatically at creation. No additional setup required.
## The complete data flow
### Step 1: Orchestrator creates a worker
Using the CLI (available inside any cage):
```bash
lobster create research-worker --size starter
```

Or the REST API:

```bash
curl -X POST https://api.lobstercage.ai/v1/cages \
  -H "Authorization: Bearer $LOBSTERCAGE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "research-worker", "size": "starter"}'
```

Response:

```json
{
  "id": "cage_a1b2c3",
  "name": "research-worker",
  "status": "pending",
  "webhookUrl": "https://gateway.lobstercage.ai/hook/cage_a1b2c3/tok_x9y8z7/"
}
```

The `webhookUrl` is how the orchestrator will send tasks to this worker.
### Step 2: Configure the worker
Inject the task and callback URL as environment variables:
```bash
lobster env set research-worker \
  TASK="Analyze Q3 revenue data from the provided dataset" \
  CALLBACK_URL="https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results" \
  ORCHESTRATOR_ID="cage_orch01" \
  ANTHROPIC_API_KEY=sk-ant-...
```

The `CALLBACK_URL` points to the orchestrator’s own webhook URL with a `/results` path suffix so the orchestrator can distinguish callbacks from other inbound requests.
### Step 3: Start the worker and send a task
Start the cage, then POST the task payload:
```bash
lobster start research-worker
```

Once the worker is running, send the detailed task via webhook:

```http
POST https://gateway.lobstercage.ai/hook/cage_a1b2c3/tok_x9y8z7/task
Content-Type: application/json

{
  "taskId": "task_001",
  "type": "research",
  "payload": {
    "query": "Analyze Q3 revenue trends",
    "dataUrl": "https://storage.example.com/q3-data.csv",
    "outputFormat": "summary_with_charts"
  },
  "callbackUrl": "https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results"
}
```

If the worker hasn’t finished booting yet, the gateway proxy buffers the request and delivers it once the worker is ready. The orchestrator gets an immediate 202 Accepted response.
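As a sketch of what this dispatch looks like from inside the orchestrator (assuming Node 18+ with global `fetch`; `dispatchTask` and `isDispatchAccepted` are illustrative helper names, not part of any LobsterCage SDK):

```javascript
// Sketch only: POST a task to a worker's webhook and treat any 2xx as
// accepted. A 202 means the gateway took the task even if the worker is
// still booting; the gateway will deliver it once the worker is ready.
function isDispatchAccepted(status) {
  return status >= 200 && status < 300
}

async function dispatchTask(webhookUrl, task) {
  const res = await fetch(`${webhookUrl}/task`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(task),
  })
  if (!isDispatchAccepted(res.status)) {
    throw new Error(`task dispatch failed: HTTP ${res.status}`)
  }
  return res.status
}
```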
### Step 4: Worker processes the task
The worker cage boots, OpenClaw starts, and receives the task via its webhook handler. It:
- Reads the task payload
- Downloads the dataset from `dataUrl`
- Calls the AI provider to analyze the data
- Generates the requested output
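The worker side can be sketched the same way. This assumes the worker exposes a handler that receives the parsed webhook body; the handler name, the `analyze` placeholder, and the `CAGE_ID` environment variable are our own illustrative choices, not OpenClaw API:

```javascript
// Sketch of a worker's task handler. `analyze` stands in for whatever
// AI-provider call the worker actually makes.
async function analyze(query, data) {
  // Placeholder: call your AI provider here.
  return { summary: `analysis of ${query} over ${data.length} bytes` }
}

// Envelope shape matches the Step 5 callback payload below.
function buildResult(task, output, workerId, startedAt) {
  return {
    taskId: task.taskId,
    status: 'completed',
    result: output,
    workerId,
    durationSeconds: Math.round((Date.now() - startedAt) / 1000),
  }
}

async function handleTask(task) {
  const startedAt = Date.now()
  const data = await fetch(task.payload.dataUrl).then(r => r.text()) // download dataset
  const output = await analyze(task.payload.query, data)             // placeholder AI call
  // POST the result envelope back to the orchestrator's callback URL.
  await fetch(task.callbackUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildResult(task, output, process.env.CAGE_ID, startedAt)),
  })
}
```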
### Step 5: Worker calls back with results
When processing is complete, the worker POSTs results to the callback URL:
```http
POST https://gateway.lobstercage.ai/hook/cage_orch01/tok_m4n5o6/results
Content-Type: application/json

{
  "taskId": "task_001",
  "status": "completed",
  "result": {
    "summary": "Q3 revenue increased 23% YoY, driven primarily by...",
    "highlights": [
      "Enterprise segment grew 45%",
      "Churn decreased to 2.1%",
      "Average deal size up 18%"
    ],
    "chartData": { ... }
  },
  "workerId": "cage_a1b2c3",
  "durationSeconds": 47
}
```

If the orchestrator happens to be hibernated when the callback arrives, the gateway buffers the result and wakes the orchestrator — the same mechanism that handles all webhook delivery in LobsterCage.
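Nothing above promises exactly-once delivery, so if redelivery is possible in your setup, it is cheap to make the `/results` handler idempotent. A sketch (the in-memory `Set` is for brevity; persist seen IDs in practice):

```javascript
// Sketch: treat a repeated taskId as a no-op so a duplicate callback
// delivery cannot double-count a result.
const seenTaskIds = new Set()

function acceptResult(result) {
  if (seenTaskIds.has(result.taskId)) return false // duplicate, ignore
  seenTaskIds.add(result.taskId)
  return true
}
```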
### Step 6: Orchestrator collects and cleans up
The orchestrator receives the callback, stores the result, and destroys the worker:
```bash
lobster destroy research-worker
```

```text
Cage research-worker destroyed
```

Total cost for this task: 4 credits for 4 minutes of worker runtime. On a Builder plan ($49/month, 20,000 credits), that’s a tiny fraction of your monthly allowance.
## Working example: parallel research
An orchestrator that decomposes a research question into subtasks, farms them out to workers in parallel, and synthesizes the results.
```javascript
// orchestrator/index.js — runs inside the orchestrator cage
import { LobsterCageClient } from './lib/client.js'

const lc = new LobsterCageClient(process.env.LOBSTERCAGE_API_KEY)
const SELF_WEBHOOK = process.env.LOBSTERCAGE_WEBHOOK_BASE_URL

export async function handleTask(task) {
  // Decompose the research question into subtasks
  const subtasks = await decomposeQuestion(task.query)

  // Create a worker for each subtask
  const workers = await Promise.all(
    subtasks.map(async (subtask, i) => {
      const cage = await lc.createCage({
        name: `research-${task.taskId}-${i}`,
        size: 'starter',
      })
      await lc.setEnv(cage.id, {
        ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY,
        TASK: JSON.stringify(subtask),
        CALLBACK_URL: `${SELF_WEBHOOK}/results`,
      })
      await lc.startCage(cage.id)

      // Send detailed task via webhook
      await fetch(`${cage.webhookUrl}/task`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          taskId: `${task.taskId}-${i}`,
          parentTaskId: task.taskId,
          payload: subtask,
          callbackUrl: `${SELF_WEBHOOK}/results`,
        }),
      })
      return { id: cage.id, name: cage.name, subtask }
    })
  )

  // Track workers — results arrive via callback webhooks
  await saveWorkerManifest(task.taskId, workers)
}

export async function handleResult(result) {
  // Store the result
  await saveSubtaskResult(result.parentTaskId, result)

  // Check if all subtasks are done
  const manifest = await getWorkerManifest(result.parentTaskId)
  const results = await getAllResults(result.parentTaskId)
  if (results.length === manifest.workers.length) {
    // All done — synthesize and deliver
    const synthesis = await synthesizeResults(results)
    await deliverFinalResult(result.parentTaskId, synthesis)

    // Clean up all workers
    await Promise.all(
      manifest.workers.map(w => lc.destroyCage(w.id))
    )
  }
}
```

## Scaling considerations
Isolation: Each worker cage is fully isolated. One worker crashing, hanging, or running malicious code cannot affect the orchestrator or other workers.
Parallelism: Create as many workers as your plan allows (3 on Starter, 10 on Builder, 25 on Team). Workers run concurrently — a 10-subtask research job with 10 parallel workers takes as long as the slowest subtask, not the sum.
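If a job produces more subtasks than your plan allows in-flight cages, one approach is to run them in batches. A sketch, where `runBatched` and `runWorker` are our own names and `runWorker` stands in for the create/start/dispatch sequence from the example above:

```javascript
// Sketch: cap in-flight workers at the plan's cage limit by running
// subtasks in fixed-size batches. Each batch runs concurrently; the next
// batch starts only after the previous one finishes.
async function runBatched(subtasks, limit, runWorker) {
  const results = []
  for (let i = 0; i < subtasks.length; i += limit) {
    const batch = subtasks.slice(i, i + limit)
    results.push(...(await Promise.all(batch.map(runWorker))))
  }
  return results
}
```

A stricter variant would refill slots as individual workers finish rather than waiting for whole batches, but batching keeps the cage count safely under the limit with very little code.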
Variable sizing: Match worker size to the task. Use starter for lightweight research, standard for code execution, power for data processing. The orchestrator can choose dynamically.
Cost control: Each worker only runs for the duration of its task. A 4-minute research task costs 4 credits. The orchestrator can set a timeout and destroy workers that run too long:
```javascript
// Kill workers that take more than 10 minutes
setTimeout(() => lc.destroyCage(worker.id), 10 * 60 * 1000)
```

## Security model
- No shared credentials: Each worker gets only the secrets it needs for its specific task. The orchestrator’s API keys are never exposed to workers.
- Full cage isolation: Workers can’t access the orchestrator’s filesystem, network, or environment. Communication happens exclusively through webhooks.
- Webhook token security: Each cage’s webhook URL includes a unique token. Workers can only send callbacks to URLs the orchestrator explicitly provides.
- Scoped API keys: The orchestrator can use an API key scoped to `cages:write` — enough to create and manage workers, but not to access billing or account settings.
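One practical consequence: before acting on a `/results` callback, the orchestrator can check that the claimed `workerId` is one it actually created. A sketch, reusing the manifest shape that `saveWorkerManifest` stores in the example above (`isKnownWorker` is our own helper name):

```javascript
// Sketch: reject callbacks from workers the orchestrator didn't create,
// by matching the claimed workerId against the stored worker manifest.
function isKnownWorker(manifest, result) {
  return manifest.workers.some(w => w.id === result.workerId)
}
```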
## Cost analysis
All costs are in credits (1 credit = 1 minute of Starter compute). See billing for plan pricing.
| Scenario | Workers | Avg runtime | Credits used |
|---|---|---|---|
| Simple research | 1 | 3 min | 3 |
| Parallel research (5 subtasks) | 5 | 4 min each | 20 |
| Data processing pipeline | 3 | 10 min each | 30 |
| Code review + test | 2 | 8 min each | 16 |
Plus the orchestrator’s own runtime — but since it’s mostly waiting for callbacks, it can hibernate between tasks too. A Builder plan (20,000 credits/month) can handle hundreds of orchestrated tasks.
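The worker-credit figures above are a simple multiplication at the documented Starter rate of 1 credit per worker-minute (other sizes presumably bill at higher per-minute rates, which this sketch ignores; `estimateStarterCredits` is our own name):

```javascript
// Back-of-envelope worker credits for a fan-out job on Starter-size cages:
// workers × average minutes per worker, at 1 credit per worker-minute.
function estimateStarterCredits(workers, avgMinutes) {
  return workers * avgMinutes
}
```

For the parallel-research row in the table, `estimateStarterCredits(5, 4)` gives 20 credits.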
## What’s next
- Single agent setup — if you haven’t deployed your first agent yet
- Multi-agent direct — the simpler pattern where you interact with each agent directly
- API reference — full REST API documentation for programmatic cage management
- Webhook concepts — deep dive on the gateway proxy, buffering, and wake-on-webhook