What happens when the model CAN'T fix it? Interview w/ software engineer Landon Gray [Podcast #213]

| Programming | March 27, 2026 | 7.93K views | 1:32:41

TL;DR

Software engineer Landon Gray explains that LLMs are merely 'raw fuel' requiring 'harnesses' (specialized tooling infrastructure) to produce reliable results, distinguishes AI engineering from data science and ML engineering, and argues developers must understand ML fundamentals to solve critical problems that models themselves cannot fix.

🔗 LLM Harnesses and Infrastructure

Harnesses are the true product differentiator

A 'harness' refers to the tooling, constraints, and infrastructure built around raw LLM outputs to structure results, reduce hallucinations, and constrain behavior for specific business needs.

Perplexity's competitive advantage

Perplexity likely uses models like Claude but delivers superior deep research capabilities through sophisticated harness layers that process and refine outputs beyond raw API calls.

Software beats retraining costs

While improving foundation models requires hundreds of millions of dollars in training costs, building harness software allows teams to iterate quickly and improve performance through traditional code changes.

🧭 Defining AI Engineering

Three distinct data disciplines

Data science focuses on statistical algorithms and Bayesian methods; data engineering handles data plumbing and preparation (consuming 80% of effort); AI engineering applies software development skills to leverage existing models.

Job title confusion

The term 'AI Engineer' is inconsistently used by employers to describe both software developers who build with LLMs and ML engineers who train models, requiring careful reading of job descriptions.

The software engineer's entry point

AI engineering allows software developers to enter the field by leveraging existing coding strengths while gradually learning model fundamentals, rather than requiring immediate deep ML expertise.

🚧 When Models Can't Fix The Code

The inevitable bottleneck

Teams relying solely on AI-generated code eventually hit walls, such as latency bottlenecks or architectural constraints, where asking the model to fix the problem produces no solution.

First principles prevent paralysis

Understanding how models work under the hood enables developers to research white papers and architect creative solutions when LLMs fail to diagnose complex system issues.

The accountability gap

Without foundational ML knowledge, teams cannot explain to leadership why critical performance issues persist or how to resolve them when AI tools reach their diagnostic limits.

Bottom Line

Build robust harness tooling around LLMs rather than treating AI as a magic black box, and invest in understanding ML fundamentals so you can architect solutions when the inevitable problems arise that the model cannot fix itself.
