Agentic Search for Context Engineering — Leonie Monigatti, Elastic
TL;DR
Leonie Monigatti from Elastic argues that context engineering is fundamentally 80% agentic search: the field is evolving from rigid RAG pipelines to dynamic, agent-driven retrieval that navigates diverse context sources through a carefully curated set of specialized search tools.
🔍 Evolution of Search Architecture
From fixed pipelines to agentic decisions
Early RAG forced retrieval on every query regardless of necessity, while agentic systems let LLMs decide when retrieval is actually needed and support multi-hop reasoning.
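A minimal sketch of this decision loop, using OpenAI-style tool calling. The model name, the `search_docs` backend, and the five-round cap are illustrative assumptions, not details from the talk; the point is that the model may answer directly or chain several retrieval rounds for multi-hop questions:

```python
# Sketch of an agentic retrieval loop: the model decides whether to search at all,
# and may search repeatedly (multi-hop) before answering.
import json
from openai import OpenAI

client = OpenAI()

def search_docs(query: str) -> str:
    """Hypothetical retrieval backend; swap in your actual search system."""
    return f"[results for {query!r}]"

tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search internal documentation. Use only when the answer is not already known.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "How do we rotate API keys?"}]
for _ in range(5):  # allow several retrieval rounds for multi-hop questions
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:  # the model decided no (further) retrieval is needed
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:  # execute each requested search and feed results back
        args = json.loads(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": search_docs(**args)})
```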
The context engineering paradigm
Effective context engineering requires orchestrating multiple search tools across local files, databases, web, and memory rather than relying on single-vector retrieval.
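As a hedged illustration of such a toolkit (all tool names and schemas below are hypothetical), each context source gets its own narrowly scoped tool with explicit usage guidance rather than one catch-all retriever:

```python
# Hypothetical multi-source toolkit: one specialized tool per context source.
tools = [
    {"type": "function", "function": {
        "name": "grep_local_files",
        "description": "Keyword search over the checked-out repository. Use for code and config questions.",
        "parameters": {"type": "object", "properties": {"pattern": {"type": "string"}},
                       "required": ["pattern"]}}},
    {"type": "function", "function": {
        "name": "query_orders_db",
        "description": "Look up structured order records by exact ID. Do NOT use for free-text questions.",
        "parameters": {"type": "object", "properties": {"order_id": {"type": "string"}},
                       "required": ["order_id"]}}},
    {"type": "function", "function": {
        "name": "web_search",
        "description": "Search the public web. Use only for information outside internal sources.",
        "parameters": {"type": "object", "properties": {"query": {"type": "string"}},
                       "required": ["query"]}}},
    {"type": "function", "function": {
        "name": "recall_memory",
        "description": "Retrieve facts saved from earlier conversations with this user.",
        "parameters": {"type": "object", "properties": {"topic": {"type": "string"}},
                       "required": ["topic"]}}},
]
```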
Shell tool versatility
CLI-based tools (bash/exec) serve as universal adapters, enabling agents to navigate filesystems, execute curl commands, or generate custom scripts for any data source.
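A bare-bones sketch of such a shell adapter (illustrative only; a real deployment would sandbox or allowlist commands rather than run model-generated strings verbatim):

```python
# Generic shell tool an agent can call; command strings come from the model.
import subprocess

def run_shell(command: str, timeout: int = 30) -> str:
    """Execute a shell command and return stdout, or the error output on failure."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout if result.returncode == 0 else f"error: {result.stderr}"

# The same adapter covers filesystem navigation, HTTP calls, or generated scripts:
print(run_shell("ls -la ."))
print(run_shell("curl -s https://example.com/api/health"))
```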
⚠️ Critical Failure Points
The three breakdowns
Agentic search fails when agents skip tools entirely, select the wrong tool type (e.g., web vs. database), or generate invalid parameters for complex queries.
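One way to make this taxonomy actionable, as a sketch not taken from the talk, is to tag evaluation traces with whichever failure mode occurred, so you know whether to fix tool descriptions, routing, or parameter generation:

```python
# Illustrative trace-tagging for the three breakdowns (trace shape is assumed).
from enum import Enum

class FailureMode(Enum):
    NO_TOOL_CALL = "agent skipped retrieval entirely"
    WRONG_TOOL = "agent picked an unsuitable tool (e.g. web instead of database)"
    BAD_PARAMS = "agent called the right tool with invalid parameters"

def classify(trace: dict, expected_tool: str) -> FailureMode | None:
    calls = trace.get("tool_calls", [])
    if not calls:
        return FailureMode.NO_TOOL_CALL
    if calls[0]["name"] != expected_tool:
        return FailureMode.WRONG_TOOL
    if not calls[0].get("args_valid", False):  # e.g. result of schema/SQL validation
        return FailureMode.BAD_PARAMS
    return None  # tool use looked correct

print(classify({"tool_calls": []}, expected_tool="query_orders_db"))
```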
Acronym ambiguity in semantic search
Basic semantic search struggles with specific keywords and acronyms, as demonstrated when a search for 'GDPR' returned results about Google's Gemma models instead of GDPR-related talks.
Parameter complexity gradient
Simple ID lookups work reliably even with small models, but free-form query languages like SQL or ES|QL require significantly more capable models and careful prompting.
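The gradient is visible in the tool schemas themselves. In this illustrative sketch (names and schemas assumed), the first tool constrains the model to a single validated ID, while the second asks it to author a complete ES|QL statement from scratch:

```python
# Constrained lookup: a small model only has to produce one pattern-checked string.
lookup_by_id = {
    "name": "get_talk_by_id",
    "description": "Fetch one talk record by its exact ID.",
    "parameters": {
        "type": "object",
        "properties": {"talk_id": {"type": "string", "pattern": "^talk_[0-9]+$"}},
        "required": ["talk_id"],
    },
}

# Free-form query language: the model must generate valid ES|QL itself,
# which demands a stronger model and careful prompting.
free_form_query = {
    "name": "run_esql",
    "description": "Run an arbitrary ES|QL query against the talks index.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string", "description": "Full ES|QL statement"}},
        "required": ["query"],
    },
}
```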
🛠️ Implementation Best Practices
Invest in tool descriptions
Comprehensive descriptions must include trigger conditions, explicit 'when not to use' guidance, and tool relationships—not just single-sentence summaries.
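A hedged example of what such a description might look like, following that checklist (the tool names referenced are assumptions, not from the talk):

```python
# Example description covering trigger conditions, negative guidance, and
# relationships to sibling tools -- not just a one-line summary.
SEARCH_TALKS_DESCRIPTION = """Semantic search over conference talk transcripts.

When to use: conceptual or topical questions ("talks about agent memory").
When NOT to use: exact IDs, speaker names, or acronyms -- use `get_talk_by_id`
or `keyword_search` for those, since embeddings confuse similar acronyms.
Related tools: `keyword_search` (exact terms), `run_esql` (aggregations/filters).
"""
```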
Match model capability to tool complexity
General-purpose search tools that write entire queries from scratch demand stronger, mini-tier models (e.g., GPT-4o mini), while simple semantic search works reliably even on smaller nano-tier models.
System prompt reinforcement
When tool descriptions prove insufficient, explicitly codifying tool selection logic in system prompts resolves routing confusion between similar tools.
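For instance, a system-prompt fragment like this illustrative one (tool names assumed) spells out the routing logic directly rather than leaving it to per-tool descriptions:

```python
# Illustrative system-prompt fragment codifying tool-selection rules.
SYSTEM_PROMPT = """You have several search tools. Follow these routing rules strictly:
1. Exact IDs or acronyms -> keyword_search, never semantic_search.
2. Aggregations, counts, or date filters -> run_esql.
3. Anything about the user's past sessions -> recall_memory.
4. Use web_search only if internal tools return nothing relevant.
If two tools seem to apply, prefer the more specific one.
"""
```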
Bottom Line
Effective agentic search requires curating a diversified toolkit of specialized search methods with exhaustive tool descriptions, rather than expecting a single retrieval method to handle all context engineering challenges.
More from AI Engineer
Playground in Prod - Optimising Agents in Production Environments — Samuel Colvin, Pydantic
Samuel Colvin demonstrates optimizing AI agent prompts in production using GEPA, a genetic algorithm library that breeds high-performing prompt variations, combined with Logfire's managed variables for structured configuration and deterministic evaluation against golden datasets.
Vibe Engineering Effect Apps — Michael Arnaldi, Effectful
Michael Arnaldi demonstrates "vibe engineering" by building a TypeScript project with AI agents, revealing that cloning library repositories directly into your codebase—rather than using npm packages—enables AI to learn patterns from source code, while strict TypeScript and custom lint rules act as essential guardrails.
Everything You Need To Know About Agent Observability — Danny Gollapalli and Ben Hylak, Raindrop
As AI agents grow more complex and autonomous, traditional pre-deployment testing fails to catch the infinite edge cases of production behavior. The talk outlines a new observability paradigm combining explicit system metrics with implicit semantic signals and self-diagnostics to monitor agents in real time.
Skills at Scale — Nick Nisi and Zack Proser, WorkOS
Nick Nisi and Zack Proser from WorkOS demonstrate how 'skills'—portable, markdown-based context units—solve the 'cold start' problem of AI coding agents by encoding constraints and deterministic scripts that can be shared across teams and projects, eliminating repetitive context reloading.