You can't just one shot it — Mehedi Hassan, Granola
TL;DR
Mehedi Hassan explains why simply adding AI features with a single prompt ('one-shotting') fails in production, advocating instead for tight feedback loops through custom tracing infrastructure and rapid iteration frameworks to refine LLM behavior for specific use cases.
💥 The Limits of 'One-Shot' AI Integration
Generic chatbots misunderstand nuanced context
Simple chat implementations fail at nuanced queries like distinguishing between 'coach' as a sports role versus business mentorship, leading to irrelevant outputs.
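One way to mitigate this kind of ambiguity, sketched here as an assumption rather than Granola's actual implementation, is to ground the query in the meeting itself so the model resolves terms like 'coach' from context instead of general knowledge:

```typescript
// Hypothetical sketch: anchor an ambiguous user query to meeting context so
// the model can tell, e.g., a sports coach from a business coach.
// Function and parameter names are invented for illustration.
function buildGroundedPrompt(query: string, meetingNotes: string): string {
  return [
    "You are answering a question about one specific meeting.",
    "Resolve ambiguous terms using the meeting notes, not general knowledge.",
    `Meeting notes:\n${meetingNotes}`,
    `Question: ${query}`,
  ].join("\n\n");
}
```

The prompt string would then be sent as the model input in place of the bare user query.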
Web search tools hide exploding costs
While adding web search can look as simple as one line of code, token usage can reach 10 pence per chat at scale, which is economically infeasible across millions of users.
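A back-of-envelope model shows how a "one line of code" feature turns into a budget problem. All prices and token counts below are illustrative assumptions, not Granola's actual figures:

```typescript
// Back-of-envelope cost model for web-search-augmented chats.
// Every constant here is an assumption for illustration.
const PRICE_PER_1M_INPUT_TOKENS_GBP = 2.0; // assumed model input price
const SEARCH_CONTEXT_TOKENS = 40_000;      // assumed tokens injected per search
const SEARCHES_PER_CHAT = 1.2;             // assumed average searches per chat

// Cost of one chat, in pence.
function costPerChatPence(): number {
  const tokens = SEARCH_CONTEXT_TOKENS * SEARCHES_PER_CHAT;
  const gbp = (tokens / 1_000_000) * PRICE_PER_1M_INPUT_TOKENS_GBP;
  return gbp * 100;
}

// Monthly bill in GBP for a given chat volume.
function monthlyCostGBP(chatsPerMonth: number): number {
  return (costPerChatPence() / 100) * chatsPerMonth;
}
```

Under these assumptions a chat costs about 9.6 pence, so a million chats a month is roughly £96,000 — which is why per-chat token cost dominates the feasibility question at scale.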
Single prompts cannot serve diverse user roles
Sales teams need deal-focused outputs while engineers require action items and Linear tickets, making universal prompts ineffective across different personas.
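A common pattern for this, sketched here with invented persona names and prompt text rather than Granola's actual prompts, is routing each user to a role-specific system prompt instead of one universal prompt:

```typescript
// Hypothetical sketch: per-persona system prompts.
// Persona names and prompt wording are invented for illustration.
type Persona = "sales" | "engineering" | "default";

const SYSTEM_PROMPTS: Record<Persona, string> = {
  sales:
    "Summarize this meeting for a sales rep: surface deal stage, " +
    "objections, budget signals, and next steps with the buyer.",
  engineering:
    "Summarize this meeting for an engineer: extract concrete action " +
    "items and draft them as Linear ticket titles with owners.",
  default: "Summarize the key points and decisions from this meeting.",
};

function systemPromptFor(persona: Persona): string {
  return SYSTEM_PROMPTS[persona];
}
```

Each persona's prompt can then be iterated on independently, rather than trying to tune one prompt that serves everyone.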
🔍 Building Transparency into the Black Box
Custom tracing tools reveal LLM decision-making
Granola built internal visibility tools that track tool calls, reasoning steps, and costs from start to finish, after finding off-the-shelf SaaS observability products insufficient for their needs.
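A minimal sketch of what such a trace recorder might look like; the event shape and field names are assumptions, not Granola's actual schema:

```typescript
// Hypothetical sketch of a custom trace recorder: every step of an LLM run
// (tool call, reasoning step, final completion) is logged with timing and
// token counts so cost and latency can be inspected end to end.
interface TraceEvent {
  kind: "tool_call" | "reasoning" | "completion";
  name: string;
  startedAt: number;   // ms since run start
  durationMs: number;
  inputTokens: number;
  outputTokens: number;
}

class Trace {
  readonly events: TraceEvent[] = [];

  record(event: TraceEvent): void {
    this.events.push(event);
  }

  // Aggregate token usage across every step, start to finish.
  totalTokens(): number {
    return this.events.reduce(
      (sum, e) => sum + e.inputTokens + e.outputTokens, 0);
  }

  // Tool calls are often where latency, cost, and failures hide.
  toolCalls(): TraceEvent[] {
    return this.events.filter((e) => e.kind === "tool_call");
  }
}
```

Rendering this structured record in a simple internal UI is what lets non-engineering teams inspect a failing run without writing log queries.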
Structured data enables cross-functional debugging
The tracing interface serves not just engineers but also product, data, and CX teams, eliminating the need for complex CloudWatch queries to identify failures.
LLMs accelerate internal tooling, not user features
Unlike user-facing features, internal tools like tracing systems can be effectively 'one-shotted' with LLMs, allowing rapid development of custom observability infrastructure.
🚀 Engineering for Rapid Iteration
Abstracting Electron to web standards
Granola transformed their desktop app's frontend into a web shell deployable online, enabling CI-generated preview links for parallel feature testing without local dependency friction.
AI-powered self-verification of code changes
Cursor automatically tests each pull request and uploads screenshots to it, drastically speeding up review without manual testing environments.
Desktop constraints require creative solutions
Because Granola runs as a single-instance desktop app, they made the render process environment-agnostic by abstracting IPC APIs to fall back to web standards when needed.
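That fallback pattern can be sketched as an environment-agnostic storage layer. The bridge name `granolaIPC`, the channel names, and the storage API below are invented for illustration, not Granola's actual code:

```typescript
// Hypothetical sketch: prefer the Electron IPC bridge when it exists,
// fall back to web standards (localStorage) in the browser build,
// and to an in-memory map anywhere else (tests, server-side rendering).
type IPCBridge = { invoke(channel: string, ...args: unknown[]): Promise<unknown> };

interface StorageLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

function makeStorage(): StorageLike {
  const g = globalThis as any;

  // Desktop build: the Electron preload script exposes an IPC bridge.
  if (g.granolaIPC) {
    const ipc: IPCBridge = g.granolaIPC;
    return {
      get: (key) => ipc.invoke("storage:get", key) as Promise<string | null>,
      set: async (key, value) => { await ipc.invoke("storage:set", key, value); },
    };
  }

  // Web preview build: fall back to the web-standard localStorage API.
  if (g.localStorage) {
    return {
      get: async (key) => g.localStorage.getItem(key),
      set: async (key, value) => { g.localStorage.setItem(key, value); },
    };
  }

  // Last resort: in-memory map with the same async interface.
  const mem = new Map<string, string>();
  return {
    get: async (key) => mem.get(key) ?? null,
    set: async (key, value) => { mem.set(key, value); },
  };
}
```

Because every environment satisfies the same `StorageLike` interface, the render process code never needs to know whether it is running inside Electron or in a CI preview link.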
Bottom Line
Stop trying to perfect AI features with better single prompts; instead, build infrastructure that lets you rapidly test, trace, and iterate with your LLM like a game of tennis until the output feels like magic.