How to friction-max your learning with software engineer Jessica Rose [Podcast #216]
TL;DR
Jessica Rose discusses founding the Bad Website Club, a free bootcamp created to combat predatory $40,000 coding programs, and explains why 'friction-maxing' your learning—embracing difficulty and building imperfect things manually—creates better developers than relying on AI shortcuts or overhyped frameworks.
🎯 The Bad Website Club & Friction-Maxing Learning
Bootcamp born from spite against scams
Jessica launched the free Bad Website Club after meeting learners crushed by debt from scammy bootcamps, aiming to divert students (and their money) away from predatory programs by offering quality education at zero cost.
Embrace difficulty to build critical thinking
She advocates 'friction-maxing' learning by resisting AI shortcuts, comparing over-reliance on LLM-generated code to copying from a teacher's answer key instead of doing the work that builds understanding.
Build silly, bad websites without pressure
The curriculum focuses on creating 'bad websites' using semantic HTML and basic tools, rejecting the pressure to master every new framework while fostering creativity through low-stakes experimentation.
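The kind of low-stakes page the club encourages can be as small as a single hand-typed file of semantic HTML. A hypothetical sketch (not the club's actual curriculum):

```html
<!-- A deliberately tiny, "bad" website: one file, semantic tags, no framework. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>My Bad Website</title>
</head>
<body>
  <header>
    <h1>My Bad Website</h1>
  </header>
  <main>
    <article>
      <h2>Why this site is bad on purpose</h2>
      <p>No build step, no framework, just HTML I typed myself.</p>
    </article>
  </main>
  <footer>
    <p>Made by hand, with friction.</p>
  </footer>
</body>
</html>
```

Semantic elements like `header`, `main`, and `article` describe what the content *is*, which makes even a throwaway page accessible and readable without any tooling.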
⚠️ Tech Industry Critique
Tool fatigue overwhelming learners
After decades in web development, Rose expresses exhaustion with the 'relentless and confident tone' of framework hype that dumps impossible learning lists on beginners who simply want to build websites.
AI as the monkey's paw of development
She describes LLMs as a 'monkey's paw' wish come true: wanting easier website building produced 'spicy autocomplete' that does the thinking for you, undermining the comprehension that comes from building things manually.
Consent matters in AI training data
Through her work on Mozilla's Common Voice project, she emphasizes the importance of consent-based datasets over web scraping, advocating for selective AI use that engages critical thinking rather than replacing it.
🌱 From Survival to Software
Escaping generational adversity through luck
As the only sibling to survive past 40 in a difficult family situation, Rose credits university access for qualifying her to teach in Japan, creating geographic and economic distance that enabled her career change.
Japan transition revealed tech salaries
While teaching in Osaka and Kyoto through the JET program, she discovered programmers earned double her salary for 'just different words,' prompting her shift from technical writing to software engineering.
Bottom Line
Embrace the difficulty of learning by building small, imperfect projects manually rather than outsourcing your thinking to AI or drowning in framework hype.
More from freeCodeCamp.org
System Design Course – APIs, Databases, Caching, CDNs, Load Balancing & Production Infra
This course outlines the architectural mindset shift required to advance from mid-level to senior engineering, covering foundational system components from single-server setups to database selection (SQL vs. NoSQL) and scaling strategies (vertical vs. horizontal with load balancing).
OpenAI Codex Essentials – AI Coding Agent
Andrew Brown from Exam Pro delivers a certification course on OpenAI Codex, an agentic CLI coding tool that automates software development through an internal loop of model inference and tool calls. The course prepares learners for the EXP-CODEX01 exam with practical, hands-on training rather than a theoretical overview.
How to learn programming and CS in the AI hype era – interview with prof Mark Mahoney [Podcast #215]
Dr. Mark Mahoney argues that while LLMs excel at low-stakes prototyping and visualizations, learning programming fundamentals through manual debugging remains essential to avoid technical debt and build resilient engineering skills that persist regardless of tool availability or cost.
CUDA Programming for NVIDIA H100s – Comprehensive Course
This comprehensive 24-hour course teaches advanced CUDA programming for NVIDIA H100 Hopper GPUs, covering asynchronous execution models, Tensor Memory Accelerator operations, WGMMA pipelines, and multi-GPU scaling strategies necessary for training trillion-parameter AI models.