My AI Learning Toolkit: Tools and Tech Stack for Building ABT

by Jared Little, AI Learning
If you’re going to teach AI implementation, you better be implementing AI. So here’s my actual tech stack for building Alien Brain Trust - the tools I’m using daily, why I chose them, and the honest truth about what’s working and what’s still experimental.

Primary AI Assistant: Claude (Sonnet 4.5)

What I’m using it for:

  • Course content development and structuring
  • Blog post drafting and editing
  • Code generation and debugging
  • Strategic planning and brainstorming
  • Documentation creation

Why I chose it: After testing ChatGPT, Gemini, and Claude extensively, Claude Sonnet 4.5 consistently gives me the most useful responses for complex, nuanced work. It’s not just about accuracy - it’s about understanding context and maintaining coherence across long conversations.

When I’m developing course modules, I need an AI that can hold context about the overall curriculum while helping me refine individual lessons. Claude excels at that.

Real use case: I used Claude to structure the entire AI-1001 course outline, then drill down into each module’s learning objectives. Instead of just generating content, it helped me think through pedagogical flow - what concepts students need first, where common confusion points might be, how to build practical skills progressively.

What’s working: Long-form content creation, strategic thinking, code assistance
What’s not: Sometimes too verbose - I have to edit for conciseness

Development Environment: VS Code + Claude Code Extension

What I’m using it for:

  • Writing and organizing blog content
  • Repository management for ABT materials
  • Code development for web presence
  • Documentation and README files

Why I chose it: I’ve been a VS Code user for years. When I discovered the Claude Code extension, it was a game-changer. Having Claude integrated directly into my development environment means I can:

  • Generate code without context switching
  • Ask questions about my codebase in real-time
  • Get help with markdown formatting and structure
  • Automate repetitive documentation tasks

Real use case: Setting up the labs-journey-blog repository structure. I described what I needed - content categories, folder organization, README documentation - and Claude Code scaffolded the entire thing in minutes. Not just folders, but thoughtful organization that actually makes sense for long-term content management.
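The actual layout Claude Code generated isn't reproduced in this post, but a minimal sketch of that kind of scaffold - folder names here are my own illustration, not the real labs-journey-blog structure - might look like:

```python
# Hypothetical scaffold for a content blog repo.
# Folder names are illustrative, not the actual labs-journey-blog layout.
import os

ROOT = "labs-journey-blog"
FOLDERS = ["posts", "drafts", "assets/images", "courses/ai-1001"]

for folder in FOLDERS:
    # exist_ok lets the script be re-run safely
    os.makedirs(os.path.join(ROOT, folder), exist_ok=True)

# Seed each top-level content folder with a README describing its purpose,
# so the organization is documented from day one.
for top in ["posts", "drafts", "assets", "courses"]:
    readme = os.path.join(ROOT, top, "README.md")
    with open(readme, "w") as f:
        f.write(f"# {top}\n\nWhat belongs in `{top}/` and why.\n")
```

The point isn't the specific folders - it's that scaffolding like this, with per-folder documentation baked in, is exactly the kind of repetitive setup an AI coding assistant handles well.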

What’s working: Seamless workflow integration, context-aware assistance
What’s not: Still learning optimal prompt patterns for code generation

Content Creation: Synthesia for Video Production

What I’m using it for:

  • AI-generated video content with a lifelike avatar
  • Course video production with my digital clone
  • Professional video content at scale

Why I chose it: Video courses need authentic, engaging presentation. Synthesia allows me to create a lifelike AI avatar clone of myself that can deliver course content consistently. This means I can iterate on scripts, update content easily, and maintain production quality without constant re-recording.

Real use case: I’m creating the AI-1001 course video modules using my Synthesia avatar. Instead of setting up lighting, camera, and doing multiple takes for every script revision, I can refine the content and regenerate videos quickly. The avatar maintains consistency across all modules while I focus on perfecting the teaching content.

What’s working: Professional-looking video output, easy iteration on scripts, consistent quality
What’s not: Credit/token management - need to plan content production carefully within my subscription limits

Currently Experimenting With

Notion AI (Content Organization)

Testing it for course curriculum management and student resource organization. The AI features help with summarization and content linking, but I’m still evaluating whether it’s worth the learning curve versus just using structured markdown files.

Perplexity (Research)

Using this for market research and competitive analysis. It’s better than traditional search for understanding trends in the AI education space, and the cited sources make it easier to verify claims and dig deeper.

GitHub Copilot (Code Assistance)

Running it alongside Claude Code to see which one I reach for more naturally. Copilot is faster for autocomplete-style coding. Claude is better for architectural decisions and explaining complex logic.

What I’m Testing Next

Descript - Video editing with AI transcription and editing. Course content needs professional editing, and removing “ums” manually is time I’d rather spend on content development.

Canva AI - For course graphics and social media assets. I need consistent visual branding but don’t have the budget for a dedicated designer yet.

ElevenLabs - Voice synthesis for potential audio versions of written content. Accessibility matters, and some students prefer audio learning.

The Selection Framework

Here’s how I evaluate whether an AI tool makes it into my regular toolkit:

1. Does it solve a real problem? Not “is it cool?” but “does this save time or improve quality in a measurable way?”

2. Does it integrate into existing workflow? Tools that require completely new workflows rarely stick. The best AI tools enhance what I’m already doing.

3. What’s the security posture? Where is my data going? What are the privacy implications? Can I use this with client information? These questions aren’t optional for me.

4. What’s the total cost? Not just subscription fees, but learning curve time and ongoing maintenance. Free tools with steep learning curves can be more expensive than paid tools that just work.

5. Can I teach this? If I can’t explain to students why I chose this tool and how to evaluate it, it’s not strategic - it’s just shiny.
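Since the framework is an all-or-nothing gate - any "no" disqualifies a tool - it can be sketched as a trivial checklist function. This is just an illustration of the logic above; the criterion names are mine, and there's no actual scoring system behind the framework:

```python
# Hypothetical sketch of the five-question selection framework.
# A tool enters the toolkit only if every question gets a "yes".
CRITERIA = [
    "solves_real_problem",        # 1. measurable time/quality gain
    "fits_existing_workflow",     # 2. enhances what I already do
    "acceptable_security_posture",# 3. data privacy, client-safe
    "acceptable_total_cost",      # 4. fees + learning curve + upkeep
    "teachable",                  # 5. I can explain the choice to students
]

def makes_the_toolkit(answers: dict) -> bool:
    """All five criteria must be True; a missing answer counts as a no."""
    return all(answers.get(c, False) for c in CRITERIA)
```

Encoding it this way makes the strictness visible: a tool that's brilliant on four criteria but fails the security question still doesn't make the cut.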

The Honest Assessment

What’s Actually Working:

  • Claude for complex content development and strategic thinking
  • VS Code + Claude Code for integrated development workflow
  • Synthesia for video production (within credit limits)

What’s Still Experimental:

  • Notion AI - useful but not sure it’s essential
  • GitHub Copilot vs Claude Code - still figuring out optimal use cases for each
  • Perplexity - valuable but overlaps with Claude’s capabilities

What’s Not Working:

  • Trying to use every new AI tool that launches (FOMO is real in this space)
  • Expecting AI to replace domain expertise rather than amplify it
  • Using AI for tasks that are faster to do manually

The Real Lesson

The best AI toolkit isn’t about having the most tools. It’s about having the right tools for your specific workflows and knowing when to use them versus when to just do the work yourself.

I’m building Alien Brain Trust with AI, but I’m not outsourcing my thinking to AI. These tools accelerate execution once I’ve made the strategic decisions. They help me scale content creation without sacrificing quality, and they let me test ideas faster.

But the strategy, the pedagogy, the understanding of what students actually need - that still comes from 25 years of real-world experience and genuine understanding of the problem I’m solving.

AI is powerful. But it’s most powerful when wielded by someone who knows what they’re building and why.


My Current Stack (Summary):

  • Primary AI: Claude Sonnet 4.5
  • Development: VS Code + Claude Code extension
  • Content: Synthesia (AI avatar video)
  • Research: Perplexity
  • Testing: Notion AI, GitHub Copilot, evaluating Descript and Canva AI

What I’m learning: Tool selection is strategic, not reactionary. Less is often more.

What’s next: Documenting specific workflows and use cases as I refine this stack for course delivery.

Want to know more about specific tools or how I use them? Drop a question. I’m learning in public, and your questions help shape what I cover next.