Three Agents, Three Hours, Three Deliverables: How We Used Parallel AI Workflows to Ship Faster
Meta Description: We ran 3 Claude Code agents in parallel—landing page (872 lines), waitlist form (595 lines), prompt hardening. All done simultaneously in ~3 hours.
We had three separate tasks blocking Milestone 2 of the Secure AI Prompt Builder course. Normally, we’d tackle them sequentially: finish one, commit, move to the next. Instead, we tried something different: launching three independent AI agents to work on all three tasks simultaneously.
The result: landing page (872 lines), waitlist form (595 lines), and prompt hardening completion, all delivered in roughly 3 hours. Sequential execution would have taken an estimated 8 hours. Here’s how we did it, what worked, and what we learned about managing multiple AI agents.
The Three Tasks
SAPB-5: Complete Prompt Hardening & Retests
- Finish hardening email-campaign-v1.1-secure
- Run 6 validation tests with 16 attack vectors
- Document final jailbreak scores
- Status before: In Progress (90% complete)
- Blocker: Needed final validation and commit
SAPB-8: Create Course Landing Page
- Professional standalone HTML page
- Early access signup form
- 5 module breakdowns with value props
- Mobile-responsive design
- Status before: Todo
- Blocker: No landing page = no launch announcement
SAPB-16: Build Waitlist Signup Form
- Astro page component for blog
- Formspree integration for submissions
- Client-side validation
- Success/error states
- Status before: Todo
- Blocker: No waitlist = no pre-launch signups
Common thread: All three were independent. No shared files, no dependencies. Perfect for parallel execution.
The Workflow: Launching Three Agents
Step 1: Verify Independence
Before launching parallel agents, we confirmed:
- ✅ No shared files (landing page = standalone HTML, waitlist = Astro component, hardening = test results)
- ✅ No dependencies (completion of one doesn’t affect the others)
- ✅ Clear deliverables (each has defined “done” criteria)
- ✅ Separate commits possible (no merge conflicts)
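Our check was a manual review, but the logic is mechanical enough to sketch. A minimal illustration in TypeScript, where the `TaskSpec` shape and its fields are hypothetical, purely to make the criteria explicit:

```typescript
// Hypothetical shape for a planned agent task; not a Claude Code API.
interface TaskSpec {
  id: string;             // e.g. "SAPB-8"
  outputPaths: string[];  // directories the agent will write to
  dependsOn: string[];    // ticket IDs that must finish first
}

// Two tasks can run in parallel only if they share no output paths
// (including nesting) and neither depends on the other.
function canRunInParallel(a: TaskSpec, b: TaskSpec): boolean {
  const sharedPath = a.outputPaths.some((p) =>
    b.outputPaths.some((q) => p.startsWith(q) || q.startsWith(p))
  );
  const dependent = a.dependsOn.includes(b.id) || b.dependsOn.includes(a.id);
  return !sharedPath && !dependent;
}

// Sanity-check every pair before launching anything.
function verifyIndependence(tasks: TaskSpec[]): boolean {
  return tasks.every((a, i) =>
    tasks.slice(i + 1).every((b) => canRunInParallel(a, b))
  );
}
```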
Step 2: Launch with Task Tool
Used Claude Code’s Task tool to spawn 3 agents:
// Agent 1: SAPB-5 (Prompt Hardening)
Task({
  subagent_type: "general-purpose",
  description: "Complete SAPB-5 prompt hardening",
  prompt: "Complete hardening for email-campaign-v1.1-secure. Run 6 retests with all 16 attack vectors. Document final jailbreak scores. Commit with 'Fixes SAPB-5'.",
  run_in_background: true
});

// Agent 2: SAPB-8 (Landing Page)
Task({
  subagent_type: "general-purpose",
  description: "Create course landing page",
  prompt: "Create standalone HTML landing page for Secure AI Prompt Builder. Include hero, problem section, 5 modules, signup form, social proof. Use ABT brand colors (#00ff88, #00ccff). Mobile-responsive. Save to 02-Website Content/. Create documentation.",
  run_in_background: true
});

// Agent 3: SAPB-16 (Waitlist Form)
Task({
  subagent_type: "general-purpose",
  description: "Build waitlist form integration",
  prompt: "Create waitlist signup form as Astro page component. Integrate Formspree (endpoint: https://formspree.io/f/xnnqknlk). Include validation, success/error states, GA4 tracking. Create setup documentation.",
  run_in_background: true
});
Step 3: Monitor Progress
All three agents ran simultaneously. We checked status with the /tasks command:
# Check agent status
/tasks
# Output:
# Agent 1 (a086ef8): In Progress - Running retests...
# Agent 2 (aaf508a): In Progress - Creating landing page HTML...
# Agent 3 (aae3717): In Progress - Building Astro component...
Step 4: Retrieve Results
After ~3 hours, all three completed:
// Retrieve completed work
TaskOutput({ task_id: "a086ef8" }); // SAPB-5 results
TaskOutput({ task_id: "aaf508a" }); // SAPB-8 deliverables
TaskOutput({ task_id: "aae3717" }); // SAPB-16 files
What Each Agent Delivered
Agent 1: SAPB-5 Prompt Hardening (Completed)
Deliverables:
- 6 test result CSVs (email-campaign-v1.1-secure-2025-12-28T*.csv)
- Final jailbreak score: 0.5/10 (down from 3.5/10)
- 87.5% of all prompts now achieving enterprise-secure status (≤0.5/10)
- Git commit: 14c56b2 with “Fixes SAPB-5”
Key Achievement:
Attack Vector Results (6 runs averaged):
- Jailbreak: 0.5/10 (was 3.5/10)
- Payload Splitting: 0.0/10
- Role Manipulation: 0.0/10
- Context Confusion: 1.0/10
- Encoded Messages: 0.0/10
Overall: ENTERPRISE-SECURE ✅
Time to complete: ~2.5 hours (test execution, analysis, commit)
Agent 2: SAPB-8 Landing Page (Completed)
Deliverables:
- secure-prompts-landing-page.html (872 lines)
- LANDING-PAGE-README.md (309 lines - deployment guide)
- LANDING-PAGE-PREVIEW.md (337 lines - visual structure)
- Standalone HTML with embedded CSS
- Mobile-responsive (breakpoints at 768px, 480px)
- ABT brand styling (#00ff88 primary, #00ccff secondary)
Key Features:
<!-- Hero Section -->
<section class="hero">
  <h1>Secure Prompt Vault</h1>
  <p class="tagline">Prompts That Won't Get You Fired</p>
  <div class="cta-buttons">
    <a href="#early-access" class="btn-primary">Get Early Access</a>
    <a href="#modules" class="btn-secondary">See What You'll Learn</a>
  </div>
  <div class="price-badge">Early Bird: $67 | Standard: $97</div>
</section>
<!-- 5 Module Sections -->
- Module 1: Email That Gets Results (Not Flagged)
- Module 2: Social Media Without the Scandal
- Module 3: Code Reviews That Ship Securely
- Module 4: Documents That Pass Compliance
- Module 5: Creative Content Without Controversy
Git commits: db70b4c, 21c4d85, f231144
Time to complete: ~3 hours (design, implementation, documentation)
Agent 3: SAPB-16 Waitlist Form (Completed)
Deliverables:
- src/pages/waitlist.astro (595 lines)
- WAITLIST-FORM-SETUP.md (294 lines - Formspree config)
- SAPB-16-DELIVERABLE.md (274 lines - project summary)
- Formspree integration with validation
- Success/error state handling
- GA4 tracking ready
Key Implementation:
<form id="waitlist-form" action="https://formspree.io/f/xnnqknlk" method="POST">
  <!-- Email (Required) -->
  <input type="email" name="email" required placeholder="your.email@company.com">

  <!-- Name (Optional) -->
  <input type="text" name="name" placeholder="Your Name">

  <!-- Hidden tracking -->
  <input type="hidden" name="_subject" value="New Waitlist Signup - Secure AI Prompt Builder">
  <input type="hidden" name="_next" value="https://blog.alienbraintrust.ai/waitlist?success=true">
  <input type="hidden" name="_cc" value="jared@alienbraintrust.ai">

  <button type="submit">Join Early Access Waitlist</button>
</form>
<!-- Client-side validation and success-state handling -->
<script>
  const form = document.getElementById('waitlist-form');
  // Assumes a hidden success panel with this id elsewhere on the page
  const successMessage = document.getElementById('success-message');

  // Used by the submit handler (the full component wires this to inline errors)
  function isValidEmail(email) {
    return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  }

  // Show the success state when Formspree redirects back with ?success=true
  if (new URLSearchParams(window.location.search).get('success') === 'true') {
    form.style.display = 'none';
    successMessage.style.display = 'block';
  }
</script>
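The deliverables list calls the form “GA4 tracking ready.” The agent’s actual hook isn’t shown in the post; here is a minimal sketch of what it might look like, assuming the standard gtag.js snippet is already loaded on the page (the event name and parameter are illustrative):

```typescript
// gtag.js defines this global when loaded; declared here for TypeScript.
declare function gtag(command: 'event', eventName: string, params?: Record<string, string>): void;

const waitlistForm = document.querySelector<HTMLFormElement>('#waitlist-form');
waitlistForm?.addEventListener('submit', () => {
  // Guarded so the form still submits if analytics is blocked or absent.
  if (typeof gtag === 'function') {
    gtag('event', 'waitlist_signup', { form_location: 'waitlist_page' }); // illustrative event name
  }
});
```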
Git commits: 187b40b, 32f3565
Time to complete: ~2.5 hours (Astro integration, testing, documentation)
Results: Time Savings and Efficiency
Sequential Execution (Estimated):
- SAPB-5: 2.5 hours
- SAPB-8: 3 hours
- SAPB-16: 2.5 hours
- Total: 8 hours
Parallel Execution (Actual):
- Wall clock time: ~3 hours
- All three completed simultaneously
- Total: 3 hours
Time saved: 5 hours (62.5% reduction)
Quality comparison:
- No difference in deliverable quality
- All agents produced production-ready code
- Documentation was comprehensive
- No merge conflicts (verified during commit)
What Worked: Keys to Successful Parallel Execution
1. Clear Task Boundaries
Each agent had:
- Specific deliverables (exact files to create)
- Done criteria (what “complete” looks like)
- Independent scope (no cross-agent dependencies)
Bad example (would fail):
Agent 1: Design the landing page
Agent 2: Implement the landing page
Agent 3: Test the landing page
→ Sequential dependency chain. Agent 2 can’t start until Agent 1 finishes.
Good example (what we did):
Agent 1: Complete prompt hardening (test-results/)
Agent 2: Create landing page (02-Website Content/)
Agent 3: Build waitlist form (src/pages/)
→ Zero dependencies. All can work simultaneously.
2. Separate Output Paths
Each agent wrote to different directories:
- Agent 1: 01-Course Content/.../test-results/
- Agent 2: 02-Website Content/
- Agent 3: 03-Marketing Materials/.../src/pages/
Result: Zero merge conflicts, easy to review commits separately.
3. Detailed Prompts
Each agent received:
- Context (what this is for)
- Requirements (what to build)
- Standards (brand colors, formatting, structure)
- Output location (where to save files)
- Documentation needs (what guides to create)
Example (Agent 2 prompt):
Create standalone HTML landing page for Secure AI Prompt Builder course.
Requirements:
- Hero section with "Secure Prompt Vault" title
- Tagline: "Prompts That Won't Get You Fired"
- 5 module sections with value props
- Early access signup form
- Social proof section
- 30-day guarantee
Branding:
- Primary: #00ff88 (ABT green)
- Secondary: #00ccff (ABT blue)
- Dark background: #0a0f1e
- Font: system-ui, sans-serif
Technical:
- Standalone HTML (embedded CSS)
- Mobile-responsive (breakpoints: 768px, 480px)
- No external dependencies
Output:
- Save to: 02-Website Content/secure-prompts-landing-page.html
- Create: LANDING-PAGE-README.md (deployment guide)
- Create: LANDING-PAGE-PREVIEW.md (structure overview)
4. Background Execution
Used run_in_background: true to free up the main session:
- Agents worked autonomously
- No blocking main conversation
- Could monitor progress without interrupting work
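Conceptually, monitoring is just polling each background agent and collecting whatever has finished. A simplified TypeScript sketch, with TaskOutput mocked here since the real tool runs inside Claude Code rather than in user code:

```typescript
type AgentStatus = { status: 'in_progress' | 'completed'; output?: string };

// Mock of the TaskOutput tool shown earlier, for illustration only.
const TaskOutput = ({ task_id }: { task_id: string }): AgentStatus => {
  return { status: 'completed', output: `(transcript for ${task_id})` };
};

const taskIds = ['a086ef8', 'aaf508a', 'aae3717'];

// Check every agent without blocking; incomplete ones just stay in flight.
const finished = taskIds
  .map((id) => ({ id, result: TaskOutput({ task_id: id }) }))
  .filter(({ result }) => result.status === 'completed');

console.log(`${finished.length}/${taskIds.length} agents complete`);
```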
What We Learned: Lessons from Multi-Agent Execution
Lesson 1: Not All Tasks Are Parallel-Friendly
Good for parallel:
- Independent deliverables (different files/directories)
- No shared state
- Clear “done” criteria
- Similar complexity (avoid one agent finishing 2 hours before others)
Bad for parallel:
- Sequential dependencies (A must finish before B)
- Shared files (merge conflict risk)
- Exploratory work (unclear scope)
- One task significantly longer than others
Lesson 2: Coordination Overhead Is Real
Managing 3 agents required:
- Checking status of all 3
- Reviewing 3 separate outputs
- Committing 8 separate commits (3 for SAPB-8, 2 for SAPB-16, 1 for SAPB-5, 2 for blog posts)
- Verifying no conflicts
Time spent coordinating: ~30 minutes
Still worth it: Saved 5 hours, spent 30 min coordinating = 4.5 hours net savings
Lesson 3: Documentation Is Critical
Each agent created documentation explaining:
- What was built
- How to deploy/use it
- Configuration needed
- Testing performed
Why this matters: When you return to the project later, you need context. With 3 deliverables landing simultaneously, documentation is the only way to remember what each agent did.
Lesson 4: Commit Discipline Prevents Chaos
We committed each agent’s work separately:
- Agent 1: 1 commit (14c56b2)
- Agent 2: 3 commits (db70b4c, 21c4d85, f231144)
- Agent 3: 2 commits (187b40b, 32f3565)
Alternative (bad): One giant commit with all changes → Hard to review, impossible to cherry-pick, poor Git history
Better: Separate commits preserve context for future debugging.
Lesson 5: When Sequential Is Better
Not every multi-task scenario benefits from parallel agents. Use sequential execution when:
- Tasks have dependencies
- Shared files/state
- Exploratory (scope unclear until you start)
- Learning required (one task informs the next)
Example where sequential was better (from previous work):
- Design API key security approach
- Implement Windows DPAPI version
- Test and iterate
- Implement macOS Keychain version
- Test and iterate
- Implement Linux Secret Service version
→ Learning from step 2 informed steps 4 and 6. Parallel would have meant rework.
The Pattern: When to Use Parallel Agents
Use parallel agents when:
- ✅ Tasks are independent (no shared files)
- ✅ Deliverables are clear (defined “done” state)
- ✅ Similar complexity (avoid one agent idle for hours)
- ✅ High confidence in requirements (minimal iteration expected)
Use sequential execution when:
- ❌ Tasks have dependencies (A → B → C)
- ❌ Shared state/files (merge conflict risk)
- ❌ Exploratory work (unclear scope)
- ❌ Learning required (insights from one task inform the next)
Practical Workflow: How to Launch Parallel Agents
Step 1: Identify Candidates
Look at your backlog for tasks that:
- Are blocking a milestone
- Don’t depend on each other
- Have clear requirements
- Touch different parts of the codebase
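Written as a filter, those criteria look something like the sketch below; the `BacklogItem` fields are hypothetical, not from Linear or any real ticketing API:

```typescript
// Hypothetical backlog item; field names are illustrative.
interface BacklogItem {
  id: string;
  blocksMilestone: boolean;
  dependsOn: string[];      // ticket IDs this task waits on
  requirementsClear: boolean;
  touchesPaths: string[];   // parts of the codebase it will modify
}

// First pass: milestone blockers with no dependencies and clear requirements.
// Path overlap is then checked pairwise, as in the verifyIndependence
// sketch earlier in the post.
function parallelCandidates(backlog: BacklogItem[]): BacklogItem[] {
  return backlog.filter(
    (t) => t.blocksMilestone && t.dependsOn.length === 0 && t.requirementsClear
  );
}
```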
Step 2: Write Detailed Prompts
For each task, specify:
## Context
[What this is for, why it matters]
## Requirements
[What to build, specific features]
## Standards
[Branding, formatting, technical constraints]
## Deliverables
[Exact files to create, where to save them]
## Done Criteria
[How to know it's complete]
Step 3: Launch Agents
// Launch all agents in one message
Task({ subagent_type: "general-purpose", description: "Task 1", prompt: "...", run_in_background: true });
Task({ subagent_type: "general-purpose", description: "Task 2", prompt: "...", run_in_background: true });
Task({ subagent_type: "general-purpose", description: "Task 3", prompt: "...", run_in_background: true });
Step 4: Monitor Progress
/tasks
# Shows all running agents, status, and IDs
Step 5: Retrieve Results
TaskOutput({ task_id: "agent-1-id" });
TaskOutput({ task_id: "agent-2-id" });
TaskOutput({ task_id: "agent-3-id" });
Step 6: Review and Commit
- Review each agent’s output separately
- Test deliverables
- Commit independently (preserves Git history)
- Update Linear tickets with “Fixes SAPB-XX”
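Because each agent wrote to its own directory, committing independently is trivial to script. A hypothetical Node helper (the function and messages are illustrative, not what we actually ran):

```typescript
import { execSync } from 'node:child_process';

// Stage only one agent's output directory and commit with a ticket
// reference, keeping each workstream's Git history separate.
function commitAgentWork(dir: string, ticket: string, message: string): void {
  execSync(`git add "${dir}"`, { stdio: 'inherit' });
  execSync(`git commit -m "${message} (Fixes ${ticket})"`, { stdio: 'inherit' });
}

commitAgentWork('02-Website Content/', 'SAPB-8', 'Add course landing page');
```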
Bottom Line
Parallel agent execution isn’t just faster—it changes how you think about task planning. Instead of “what should I do next?”, you ask “what can I do simultaneously?”
Our Results:
- 3 tasks completed in parallel
- 5 hours saved (62.5% time reduction)
- 1,467 lines of production code delivered
- 1,214 lines of documentation created
- 8 commits pushed (6 for the three tasks, 2 for blog posts)
- 3 Linear tickets closed
- 0 merge conflicts
- 0 rework required
The Pattern:
- Identify independent tasks
- Write detailed prompts
- Launch agents in parallel
- Monitor progress
- Review separately
- Commit independently
Next time you have 3+ tasks blocking a milestone: Consider running them in parallel.
Next post: How we built automated blog publishing with GitHub Actions—write the post, commit, and let CI handle the rest.