I Shipped a Security Vulnerability in My Social Share Feature (Here’s How a Human Caught What AI Missed)
Last night: I built and shipped a social sharing feature with Claude’s help. Smooth. Fast. Production-ready.
This morning: I tested the Twitter share button. The URL was broken - double-encoded, malformed, with weird %0A%0A characters everywhere.
The realization: Claude helped me build a feature with a security vulnerability and neither of us caught it before release.
What Went Wrong?
I asked Claude to add social sharing buttons to my blog. Here’s what it generated:
// Generate social share text
const twitterText = encodeURIComponent(`${post.data.title}\n\n${post.data.description || ''}`);
// Used in:
href={`https://twitter.com/intent/tweet?text=${twitterText}&url=${encodedUrl}`}
Looks reasonable, right?
Wrong. This code has multiple security and functionality issues:
Issue 1: Double URL Encoding
The post description already included the URL. Then the code appended &url=${encodedUrl} on top of it. Result? The URL appeared twice in the share text.
Issue 2: Unescaped Newlines
\n\n in a URL parameter becomes %0A%0A - visible garbage characters in the Twitter share dialog.
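You can reproduce both issues in a browser console. Here’s a quick repro with hypothetical sample data (the description ending in the post URL mirrors the duplication above):
// Hypothetical sample data to reproduce the bug
const title = 'My Post Title';
const description = 'A short summary. https://example.com/blog/my-post';
const encodedUrl = encodeURIComponent('https://example.com/blog/my-post');

// encodeURIComponent faithfully percent-encodes the newlines as %0A,
// and the URL inside the description survives into the text parameter
const twitterText = encodeURIComponent(`${title}\n\n${description}`);
console.log(`https://twitter.com/intent/tweet?text=${twitterText}&url=${encodedUrl}`);
// ...?text=My%20Post%20Title%0A%0AA%20short%20summary.%20https%3A%2F%2Fexample.com%2Fblog%2Fmy-post&url=https%3A%2F%2Fexample.com%2Fblog%2Fmy-post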
Issue 3: Parameter Pollution
Multiple encoded strings concatenated in URL params create ambiguous parsing scenarios.
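The robust defense against this whole class of bug is to let the platform build the query string. A minimal sketch (the sample values are mine):
// URLSearchParams encodes each value exactly once, so one parameter
// can never bleed into another. Note it encodes spaces as '+', which
// is standard form encoding for query strings.
const params = new URLSearchParams({
  text: 'My Post Title',
  url: 'https://example.com/blog/my-post',
});
console.log(`https://twitter.com/intent/tweet?${params}`);
// https://twitter.com/intent/tweet?text=My+Post+Title&url=https%3A%2F%2Fexample.com%2Fblog%2Fmy-post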
Issue 4: Character Limit Issues
Twitter has a 280-character limit. Long title + description + URL = broken shares.
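A simple length guard would have caught this before shipping. A sketch, assuming Twitter’s documented limits (280 characters per tweet, with every link counted as 23 characters after t.co wrapping):
// Trim the share text so title + link always fit in one tweet
const TWEET_LIMIT = 280;
const TCO_LINK_LENGTH = 23; // links count as 23 chars after t.co wrapping

function buildShareText(title) {
  const budget = TWEET_LIMIT - TCO_LINK_LENGTH - 1; // minus 1 for the separating space
  return title.length <= budget ? title : `${title.slice(0, budget - 1)}…`;
}

console.log(buildShareText('A'.repeat(300)).length); // 256: safely within the limit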
Why Didn’t Claude Catch This?
Here’s the honest answer: I didn’t ask Claude to test it.
My workflow was:
- “Add social sharing buttons”
- Claude generates code
- I review the code (looks fine)
- Build passes ✅
- Ship it
What I DIDN’T do:
- Actually click the buttons
- Inspect the generated URLs
- Test with a real post
- Verify the Twitter intent dialog
- Check for encoding issues
Claude can’t click buttons. I can. And I didn’t.
The Human Discovery Process
This morning, I decided to test the feature:
- Clicked Twitter share button
- Twitter dialog opened with: Title%0A%0ADescription text here...
- Wait, what are those %0A things?
- Checked the URL: a double-encoded mess
- Asked Claude: “Please scan and fix Twitter link, run security checks”
Claude’s response? Immediate identification and fix.
But here’s the key: Claude could have done this BEFORE shipping if I’d asked it to.
The Real Problem: My Process
I treated Claude like a pair programmer who writes code while I review.
But that’s wrong.
I should treat Claude like a junior developer who needs:
1. Requirements
2. Testing instructions
3. A security review checklist
4. Validation criteria
I gave it #1. I skipped #2-4.
What I Should Have Asked
Instead of:
“Add social sharing buttons”
I should have asked:
“Add social sharing buttons. Then:
- Generate test URLs for a sample post
- Verify URL encoding is correct
- Check for security issues (XSS, param pollution, encoding)
- Validate Twitter intent API best practices
- Create a test checklist for me to manually verify”
Claude would have caught the issue.
The Fix (And How Claude Helped)
Once I asked Claude to security-check the code:
Before (Broken):
const twitterText = encodeURIComponent(`${post.data.title}\n\n${post.data.description || ''}`);
href={`https://twitter.com/intent/tweet?text=${twitterText}&url=${encodedUrl}`}
Result:
text=Title%0A%0ADescription&url=https%3A%2F%2F...
After (Fixed):
const twitterText = encodeURIComponent(post.data.title); // Title only, no description
href={`https://twitter.com/intent/tweet?text=${twitterText}&url=${encodedUrl}`}
Result:
text=Title&url=https%3A%2F%2F...
Changes:
- Removed description from text (avoid char limits)
- Removed newlines (no more %0A)
- Single-purpose parameters (text vs. url)
- Clean, proper encoding
Time to fix: 2 minutes once I asked Claude to review
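Another cheap safeguard: a few assertions on the generated URL. This sketch (sample values are mine) parses the share link with the standard URL API and checks for the exact failure modes from this post:
// Parse the share URL and assert the failure modes described above.
// searchParams.get() decodes once, so leftover %XX escapes in the
// result are a strong hint that a value was double-encoded.
const twitterText = encodeURIComponent('My Post Title');
const encodedUrl = encodeURIComponent('https://example.com/blog/my-post');
const shareUrl = `https://twitter.com/intent/tweet?text=${twitterText}&url=${encodedUrl}`;

const params = new URL(shareUrl).searchParams;
console.assert(!params.get('text').includes('\n'), 'text contains raw newlines');
console.assert(!params.get('text').includes('http'), 'text duplicates the URL');
console.assert(!/%[0-9A-Fa-f]{2}/.test(params.get('url')), 'url looks double-encoded');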
The Bigger Lesson: AI Testing Gaps
What AI is GREAT at:
- Writing code quickly
- Following patterns
- Implementing features
- Reviewing code when asked
What AI STRUGGLES with:
- Testing without being told
- Catching edge cases unprompted
- Clicking buttons in a browser
- Thinking “what could go wrong?”
What HUMANS are needed for:
- Actually using the feature
- Defining test scenarios
- Asking “did we test this?”
- Catching UX issues
My New AI Development Workflow
Before (Broken):
1. Ask Claude to build feature
2. Review code
3. Build passes
4. Ship
After (Better):
1. Ask Claude to build feature
2. Ask Claude to generate test plan
3. Ask Claude to security review its own code
4. Claude provides checklist
5. I manually test using checklist
6. Fix any issues
7. Ship
The Standard Testing Checklist I Created
For every feature Claude builds, I now ask it to:
Security Checks:
- URL encoding correct?
- No XSS vulnerabilities?
- No parameter pollution?
- No SQL injection risks?
- No sensitive data leakage?
Functionality Checks:
- Does it work in the browser?
- Does it work with real data?
- Does it work with edge cases (long titles, special chars)? See the sketch after this checklist.
- Does it work on mobile?
- Does it work in light/dark mode?
Platform-Specific:
- Follows API best practices?
- Handles errors gracefully?
- Character limits respected?
- Proper encoding for platform?
User Experience:
- Is it intuitive?
- Is it accessible?
- Does it look right?
- Does it provide feedback?
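The edge-case check is the easiest one to script. A minimal sketch that pushes deliberately awkward (and entirely hypothetical) titles through the share-URL builder for inspection:
// Push awkward titles through the builder and eyeball the output
const edgeCases = [
  'A'.repeat(300),                              // longer than a tweet
  'Quotes "and" <angle> brackets & ampersands', // characters that need escaping
  'Emoji 🎉 and non-ASCII: café, 日本語',        // multi-byte characters
  'Accidental\nnewlines',                       // the exact bug from this post
];

for (const title of edgeCases) {
  const params = new URLSearchParams({ text: title, url: 'https://example.com/post' });
  console.log(`https://twitter.com/intent/tweet?${params}`);
}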
How to Prevent This: Prompting for Quality
Instead of:
“Add feature X”
Say:
“Add feature X. Include:
- Implementation
- Security review
- Test plan
- Manual testing checklist
- Edge cases to verify”
Claude will provide all of this. I just need to ask.
Real Example: What I’ll Do Next Time
My prompt for next feature:
Add [feature name] with the following deliverables:
1. Implementation code
2. Security analysis:
- Check for XSS
- Check for injection
- Check for encoding issues
- Check for data leakage
3. Test plan:
- Unit test scenarios
- Integration test scenarios
- Edge cases
4. Manual testing checklist for me to verify
5. Browser testing instructions
After providing all of the above, review your own code
and identify any issues before I ship it.
The Meta-Lesson: Trust But Verify (Again)
This is my second blog post about AI security issues.
Post 1: “I Found 7 Security Flaws in My AI-Generated Blog”
- NPM dependency vulnerabilities
- Lesson: Run security audits
Post 2 (This one): “I Shipped a Security Vulnerability”
- Double encoding in social share
- Lesson: Test before shipping
The pattern? AI generates code fast. But without proper testing and validation, you ship broken or insecure code.
What Changed in 24 Hours
Yesterday:
- I shipped social sharing
- Felt great about shipping fast
- Didn’t test manually
Today:
- Found critical bug
- Fixed in 2 minutes
- Created standard checklist
- Updated my workflow
The improvement: Not just fixing the bug, but systematizing quality assurance into my AI-powered workflow.
The Standard Checklist (Download)
I created a reusable checklist for all future features:
Before Shipping ANY AI-Generated Feature:
1. Ask Claude to Security Review:
Review the code you just wrote for:
- Security vulnerabilities
- Encoding issues
- Edge cases
- Best practices
Provide a detailed report.
2. Ask Claude for a Test Plan:
Create a manual testing checklist for this feature. Include edge cases and security checks.
3. Manual Testing:
- Test in browser
- Test with real data
- Test edge cases
- Test on mobile
- Test in both themes
4. Security Validation:
- Run npm audit
- Check for XSS
- Check encoding
- Check API usage
5. Documentation:
- Update docs
- Note known limitations
- Document testing performed
Tools I’m Adding to My Workflow
1. Pre-Commit Hook
#!/bin/sh
# Runs before every commit (save as .git/hooks/pre-commit and chmod +x)
set -e
npm audit
npm run build
# npm test  # enable once I add tests
2. Claude Security Review Template
Save this prompt for reuse:
Security review the previous code for:
- XSS vulnerabilities
- SQL injection
- URL encoding issues
- Parameter pollution
- Data leakage
- API misuse
Provide specific issues found and fixes.
3. Feature Checklist Template
## Feature: [Name]
### Implementation
- [ ] Code written
- [ ] Security reviewed
- [ ] Edge cases handled
### Testing
- [ ] Manual browser test
- [ ] Real data test
- [ ] Mobile test
- [ ] Theme test
### Security
- [ ] No XSS
- [ ] No injection
- [ ] Proper encoding
- [ ] API best practices
### Ship
- [ ] All checks passed
- [ ] Documented
- [ ] Deployed
The Irony of AI Development
The Problem: AI helps you ship features 10x faster.
The Risk: You also ship bugs 10x faster.
The Solution: Build quality checks into your AI workflow, not after.
What Actually Happened (Timeline)
8:00 PM: “Claude, add social sharing buttons”
8:05 PM: Feature complete, looks great
8:10 PM: Build passes, ship to production
8:15 PM: Feel accomplished, end session

9:00 AM: Test Twitter share button
9:01 AM: “Why does the URL look like garbage?”
9:02 AM: “Claude, please security check Twitter link”
9:04 AM: Issue identified, fixed, committed
9:05 AM: “Why didn’t I test this last night?”

Time to build: 5 minutes
Time to ship: 5 minutes
Time to test: 0 minutes ❌
Time to fix: 2 minutes
Time to learn the lesson: writing this post
Questions This Raises
1. Should AI test its own code?
Current: AI generates code, then waits for instructions.
Better: AI generates code AND a test plan automatically.
Best: AI generates, tests, and validates, THEN shows you.
2. Who’s responsible for quality?
Answer: Still me. Claude is a tool. I ship the code.
3. How much testing is enough?
Answer: At minimum, actually USE the feature before shipping.
The Action Plan (For You)
If you’re using AI to build features:
Today:
- Review your last 3 AI-generated features
- Actually test them manually
- Check for encoding/security issues
This Week:
- Create a standard testing checklist
- Add security review to your prompts
- Test before shipping
Going Forward:
- Always ask Claude to security-review its own code
- Always ask for a test plan
- Always manually verify before shipping
The Silver Lining
Good news: The vulnerability was caught before real harm.
Better news: Claude fixed it instantly when asked.
Best news: I now have a systematic process to prevent this.
Final Thoughts
Building with AI is incredible. You can ship features in minutes that would take hours or days traditionally.
But speed without quality is just shipping bugs faster.
The solution isn’t to slow down. It’s to build quality into your AI workflow from the start.
Ask Claude to:
- Write the code
- Review the code
- Test the code
- Document the code
Don’t just ask for #1.
Resources:
- Standard Feature Testing Checklist (coming soon)
- Claude Security Review Template (coming soon)
The bug fix: GitHub Commit
Lesson learned: Test your features before shipping, even when AI builds them.
Especially when AI builds them.