AI is a Two-Way Street: Why Fast Feedback Beats Perfect Prompts

by Jared Little · AI Learning

The Promise: AI can build features in 15 minutes instead of 4 weeks.

The Reality: AI can build features in 15 minutes if you validate constantly and give feedback quickly.

The Lesson I Learned: AI development isn’t about giving perfect prompts. It’s about rapid iteration and catching issues early.

Here’s what happened when I tried to rely on AI without validating…


The Auto-Publish Incident

What I Asked For: “Create a GitHub Actions workflow that auto-publishes blog posts daily at 9 AM EST.”

What AI Built: A perfectly functional workflow that:

  • ✅ Ran on schedule
  • ✅ Found posts dated for today
  • ✅ Changed published: false to published: true
  • ✅ Committed and pushed changes

What Actually Happened: Posts were marked as published but didn’t show up on the blog.

The Problem I Discovered: The workflow updated the files but didn’t trigger the deployment. GitHub Actions deliberately prevents commits pushed with the default GITHUB_TOKEN from triggering other workflows (to avoid accidental infinite loops), so the auto-publish commit never kicked off the Pages deploy.

How Long Until I Found Out: 24 hours. I didn’t discover the issue until I checked the blog the next morning.


What I Should Have Done

Instead of: “Build it and wait 24 hours to see if it works”

I should have: “Build it, test it immediately, give feedback, iterate”

The Better Workflow:

Step 1: Build (AI does this fast)

Me: "Create auto-publish workflow for 9 AM daily"
AI: [Builds workflow]

Step 2: Validate Immediately (I do this)

Me: "Now let me test this manually..."
[Triggers workflow manually]
[Checks if blog actually updates]

Step 3: Give Feedback (Two-way conversation)

Me: "The workflow ran but the blog didn't update.
     The file changed to published: true, but
     GitHub Pages didn't deploy."

AI: "Ah, GitHub Actions won't trigger workflows from
     workflow commits. Let me add a step to manually
     trigger the deploy workflow..."

Step 4: Validate Again

[Test the updated workflow]
[Confirm blog updates]
✅ Actually works end-to-end

Time difference:

  • My way: 24 hours to discover the problem
  • Better way: 5 minutes to discover and fix

The Pattern I Keep Seeing

This isn’t a one-time thing. Here’s what I’ve noticed across multiple projects:

Scenario 1: Social Sharing Buttons

First iteration:

Me: "Add social sharing buttons for Twitter, LinkedIn, Facebook"
AI: [Builds feature]
Me: [Commits without testing]

Result: Twitter links had double URL encoding (%0A%0A garbage in URLs)

How I found out: Manually tested the next day

What I should have done: Test the actual links immediately


Scenario 2: Blog Post Length

First iteration:

Me: "Write a blog post about finding security vulnerabilities"
AI: [Writes 799 lines]

Result: Post was way too long and repetitive

How I found out: Re-read it the next day

What I should have done: Scan the output immediately, give feedback:

Me: "This is too long and repetitive. Cut it to 400 lines,
     focus on actionable content only."
AI: [Rewrites to 408 lines, much better]

Scenario 3: Auto-Publish Timing

First iteration:

Me: "Schedule for 9 AM EST"
AI: [Sets cron to 13:00 UTC]

Result: Ran at 8:50 AM instead of 9:00 AM (close enough, but not exact)

How I found out: Checked git logs the next morning

What I should have done: Verify the cron schedule immediately:

Me: "Show me the cron schedule you used"
AI: "13:00 UTC"
Me: "That's only correct during daylight saving time.
     Let's discuss the approach..."
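
Worth remembering here: GitHub Actions cron schedules always run in UTC, so “9 AM Eastern” isn’t one fixed UTC hour. A quick check like this (my own sketch, not code from the actual workflow) makes the daylight-saving drift obvious:

// What local New York time does a 13:00 UTC cron actually fire at?
const atNewYork = (iso) =>
  new Date(iso).toLocaleTimeString("en-US", { timeZone: "America/New_York" });

console.log(atNewYork("2025-07-15T13:00:00Z")); // "9:00:00 AM" (EDT, UTC-4)
console.log(atNewYork("2025-01-15T13:00:00Z")); // "8:00:00 AM" (EST, UTC-5)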

The Real Question: Better Prompts or Better Feedback?

My initial belief: “I need to write better prompts so AI gets it right the first time”

What I learned: “I need faster feedback loops so I catch issues in minutes, not days”

Why Better Prompts Aren’t Enough

The perfect prompt myth:

"Create a GitHub Actions workflow that auto-publishes
blog posts at 9 AM EST, ensuring that it triggers
the deployment workflow despite GitHub's security
feature that prevents workflows from triggering
other workflows, and handle daylight saving time
changes, and..."

Problems with this approach:

  1. You’d need to know ALL the edge cases upfront
  2. The prompt becomes a technical specification document
  3. You’re doing the thinking AI should help with
  4. You still need to validate it works

The better approach:

Iteration 1: "Create auto-publish workflow for 9 AM EST"
[Test it]
Iteration 2: "The blog didn't update. Fix the deployment trigger"
[Test it]
Iteration 3: "Perfect! Now let's verify the timezone handling"
[Test it]
✅ Done in 15 minutes with 3 iterations

The Validation Checklist I Now Use

For every AI-generated feature, I immediately:

1. Functional Testing (2 minutes)

Don’t wait. Test NOW.

  • Does it actually work end-to-end?
  • Did I test the happy path?
  • Did I test one edge case?

Example:

  • Auto-publish: Trigger manually, check if blog updates (see the sketch after this list)
  • Social buttons: Click each link, verify URLs look correct
  • Dark mode: Toggle it, does theme change?
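
The auto-publish check in particular doesn’t even need a manual page refresh; a two-line script is enough. A minimal sketch, assuming Node 18+ run as an ES module, with a hypothetical post URL:

// Smoke test: is the post actually live after the deploy finishes?
const postUrl = "https://example.com/blog/auto-publish-post/"; // hypothetical URL
const res = await fetch(postUrl);
console.log(res.ok ? "✅ post is live" : `❌ not live yet (HTTP ${res.status})`);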

2. Quick Code Scan (1 minute)

Don’t deep-dive. Just scan for red flags.

  • Any obvious security issues?
  • Hardcoded values that should be variables?
  • Does the approach make sense?

Example:

// RED FLAG: Unescaped newlines in URL
const text = `${title}\n\n${description}`;
const url = `twitter.com?text=${text}`;  // ⚠️ Won't work

// BETTER: Ask AI to fix
Me: "The newlines will break the URL. Fix encoding."

3. Immediate Feedback (30 seconds)

Tell AI what’s wrong right now, not tomorrow.

Bad:

[Commits code]
[Discovers issue next day]
[Opens new conversation]
[Explains context again]

Good:

Me: "I tested it. The Twitter link has %0A characters.
     Looks like double encoding issue."
AI: [Fixes immediately in same conversation]

Why Fast Feedback Loops Matter More Than Perfect Prompts

Time Comparison

Perfect Prompt Approach:

  • 30 minutes crafting detailed prompt
  • AI builds feature (5 minutes)
  • Wait 24 hours to discover issues
  • Total: 24+ hours

Fast Feedback Approach:

  • 1 minute simple prompt
  • AI builds feature (5 minutes)
  • Test immediately (2 minutes)
  • Give feedback (30 seconds)
  • AI fixes (2 minutes)
  • Validate again (2 minutes)
  • Total: 12 minutes

Result: Fast feedback is roughly 120x faster than perfect prompts (24+ hours vs. 12 minutes)


The Two-Way Street Mindset

Old mindset (Command & Control):

Me: [Writes perfect 500-word prompt]
AI: [Builds exactly what I specified]
Me: [Ships it]

New mindset (Conversation & Iteration):

Me: "Build X"
AI: [Builds first version]
Me: "Good start, but Y doesn't work. Here's what I saw..."
AI: [Fixes Y]
Me: "Better! Now Z looks off..."
AI: [Adjusts Z]
Me: "Perfect, shipping it"

Why the second approach wins:

  • Faster to “good enough”
  • Catches issues early
  • Teaches AI your preferences
  • Builds better final product

Real Examples from This Week

Monday: Blog Workflow Bug

Without feedback loop: Would’ve discovered after 24 hours
With feedback loop: Fixed in 5 minutes
Time saved: 23 hours, 55 minutes

Tuesday: Twitter Share Bug

Without feedback loop: Users would report broken links
With feedback loop: Caught before shipping
Reputation saved: Priceless

Wednesday: Blog Post Length

Without feedback loop: Would’ve published a 799-line wall of text
With feedback loop: Cut to 408 lines before publishing
Reader experience: Much better

Thursday: Image Sizing

Without feedback loop: Would’ve shipped a wrong-sized image
With feedback loop: You told me, I resized it immediately
Iterations: 1 (instant fix)


The Prompts vs Feedback Tradeoff

When to Invest in Better Prompts

Good use of time:

  • Repeatable tasks you’ll do often
  • Complex domain logic AI needs to understand
  • When you have a template library built up

Example:

"Build [FEATURE] with security review, test plan,
and edge case validation using the standard
prompt from my library"

When to Focus on Fast Feedback

Better use of time:

  • One-off features
  • Exploring new approaches
  • When you’re not sure what “right” looks like yet
  • When edge cases aren’t obvious upfront

Example:

"Add social sharing buttons"
[Test the links]
"Twitter link is broken, shows %0A characters"
[AI fixes]
✅ Done

My New Development Loop

Old approach (Slow):

  1. Carefully craft prompt
  2. AI builds feature
  3. Commit and ship
  4. Discover issues later
  5. Fix in new conversation (lost context)

Time to working feature: 1-2 days

New approach (Fast):

  1. Simple, clear prompt
  2. AI builds feature
  3. Test immediately
  4. Give specific feedback
  5. AI iterates
  6. Validate again
  7. Ship when confirmed working

Time to working feature: 10-20 minutes


What This Means for AI Development

The Big Lesson: AI development speed isn’t limited by how fast AI generates code.

It’s limited by how fast YOU validate and give feedback.

The Paradox:

  • AI makes building features 2,688x faster (4 weeks down to 15 minutes)
  • But only if you validate 2,688x faster too

The Solution: Don’t wait a day to discover problems. Test in 2 minutes.


Practical Takeaways

For Developers Using AI:

  1. Test immediately, not eventually

    • Run it as soon as AI generates it
    • Don’t commit first, validate first
  2. Give specific feedback in the same conversation

    • “The Twitter link shows %0A characters when I click it”
    • Not just “social sharing doesn’t work”
  3. Iterate in minutes, not days

    • 3 quick iterations beats 1 “perfect” attempt
  4. Build your validation checklist

    • Functional test
    • Code scan
    • Security check
    • Ship

For Teams Adopting AI:

  1. Encourage rapid testing culture

    • Make it easy to test features quickly
    • Value fast feedback over perfect requirements
  2. Don’t require perfect specs upfront

    • Start with clear intent
    • Iterate based on what AI produces
  3. Measure feedback loop time

    • How long from “built” to “validated”?
    • Goal: Under 5 minutes

The Auto-Publish Fix (Full Story)

Here’s exactly what happened:

Day 1 (Nov 14):

8:50 AM - Auto-publish runs
9:00 AM - I check blog, don't see post
9:30 AM - Manually publish it

Day 2 (Nov 15):

8:50 AM - Auto-publish runs again
9:00 AM - I check blog, STILL don't see post
9:05 AM - Ask Claude to debug
9:10 AM - Discover deployment not triggering
9:12 AM - Claude adds workflow trigger step
9:15 AM - Test it manually
9:17 AM - Confirm blog updates
✅ Fixed
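
The diff itself isn’t shown here, but one common way to implement the trigger step Claude describes is to call GitHub’s workflow-dispatch API from the auto-publish job. A hedged sketch with Octokit, where owner, repo, and workflow filename are placeholders:

import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Pushes made with the default GITHUB_TOKEN don't start other workflows,
// but workflow_dispatch events created through the API are exempt, so the
// auto-publish job can explicitly kick off the Pages deploy workflow.
await octokit.rest.actions.createWorkflowDispatch({
  owner: "your-github-user",  // placeholder
  repo: "your-blog-repo",     // placeholder
  workflow_id: "deploy.yml",  // placeholder: filename of the deploy workflow
  ref: "main",
});

For this call to work, the deploy workflow needs a workflow_dispatch trigger declared in its "on:" section.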

What I learned: If I’d tested the first version immediately instead of waiting 24 hours, I would’ve caught the issue on Day 1 at 9:05 AM instead of Day 2 at 9:05 AM.

Time wasted waiting: 24 hours
Time to actually fix: 12 minutes


The Question I Now Ask

Old question: “How do I write the perfect prompt so AI gets it right the first time?”

New question: “How do I validate this in under 2 minutes so I can give AI feedback while we’re still in the same conversation?”

Why this shift matters:

  • Perfect prompts are a myth
  • Fast feedback is reality
  • Iteration beats speculation
  • Working beats theoretically perfect

Your Challenge This Week

Pick one AI-generated feature you’re working on.

Instead of:

  1. Write detailed prompt
  2. AI builds it
  3. Commit and ship
  4. Hope it works

Try this:

  1. Simple prompt
  2. AI builds it
  3. Test it immediately
  4. Give specific feedback
  5. AI fixes
  6. Validate again
  7. Ship only after confirming it works

Time yourself:

  • How long from “AI built it” to “I validated it”?
  • Goal: Under 5 minutes

Track your iterations:

  • How many rounds of feedback?
  • Typical: 2-3 iterations to production-ready

The Bottom Line

AI can build features in 15 minutes.

But only if you validate in 2 minutes.

The bottleneck isn’t AI’s speed anymore.

It’s how fast you catch issues and give feedback.

Don’t wait a day to discover problems you could find in 2 minutes.

AI is a two-way street. Keep the conversation going.


What I’m Changing

Before:

  • Write careful prompts
  • Let AI build
  • Commit and move on
  • Discover issues later
  • Fix in new conversation

After:

  • Write clear (not perfect) prompts
  • Test immediately
  • Give feedback instantly
  • Iterate in same conversation
  • Ship when validated

Result: Finding and fixing issues in minutes instead of days.

That’s the real speed advantage of AI development.

Not how fast AI writes code.

How fast YOU validate and iterate.


P.S. - The auto-publish workflow? Took 5 minutes to build, 24 hours to discover it didn’t fully work, and 12 minutes to fix. If I’d tested it immediately, it would’ve been done in 17 minutes total instead of 24 hours.

Don’t be like me. Test immediately.

Your future self will thank you.