Agentic Search for Context Engineering — Leonie Monigatti, Elastic

| Podcasts | May 08, 2026 | 6.33K views | 1:03:13

TL;DR

Leonie Monigatti from Elastic argues that context engineering is roughly 80% agentic search: an evolution from rigid RAG pipelines to dynamic, agent-driven retrieval that navigates diverse context sources through a carefully curated set of specialized search tools.

🔍 Evolution of Search Architecture

From fixed pipelines to agentic decisions

Early RAG forced retrieval on every query regardless of necessity, while agentic systems let LLMs decide when retrieval is actually needed and support multi-hop reasoning.
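The contrast can be sketched in a few lines. This is a minimal illustration, not code from the talk: `needs_retrieval` stands in for the LLM's tool-use decision, and the search tool is a placeholder callable.

```python
# Sketch: agentic retrieval decides per query whether to search,
# unlike fixed RAG, which calls the retriever on every query.

def needs_retrieval(query: str) -> bool:
    """Stand-in for an LLM decision: retrieve only when the query
    points at external knowledge the model may not have."""
    knowledge_markers = ("latest", "docs", "our database", "this repo")
    return any(marker in query.lower() for marker in knowledge_markers)

def answer(query: str, search_tool) -> str:
    context = ""
    if needs_retrieval(query):          # agentic: conditional, not forced
        context = search_tool(query)    # a loop here enables multi-hop retrieval
    return f"answer({query!r}, context={context!r})"
```

A fixed pipeline would call `search_tool` unconditionally; the conditional (and, for multi-hop reasoning, a loop around it) is what makes the system agentic.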

The context engineering paradigm

Effective context engineering requires orchestrating multiple search tools across local files, databases, web, and memory rather than relying on single-vector retrieval.
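A minimal sketch of that orchestration, with hypothetical tool names: the agent routes each query to one of several specialized search tools instead of a single vector index.

```python
# Sketch: a registry of specialized search tools covering the context
# sources named above. The lambdas are stubs for real integrations.
TOOLS = {
    "file_search":   lambda q: f"grep local files for {q!r}",
    "db_search":     lambda q: f"query the database for {q!r}",
    "web_search":    lambda q: f"search the web for {q!r}",
    "memory_search": lambda q: f"recall prior conversations about {q!r}",
}

def route(tool_name: str, query: str) -> str:
    # In a real agent, the LLM picks tool_name based on tool descriptions.
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](query)
```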

Shell tool versatility

CLI-based tools (bash/exec) serve as universal adapters, enabling agents to navigate filesystems, execute curl commands, or generate custom scripts for any data source.
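A hedged sketch of such a universal adapter: the agent emits a shell command string (an `ls`, a `curl`, or a generated script) and receives stdout back as context. A real deployment would need sandboxing and allow-listing, which this sketch omits.

```python
import subprocess

def shell_tool(command: str, timeout: float = 10.0) -> str:
    """Run a shell command on the agent's behalf and return its stdout
    as context; surface stderr on failure so the agent can retry."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    if result.returncode != 0:
        return f"error: {result.stderr.strip()}"
    return result.stdout
```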

⚠️ Critical Failure Points

The three breakdowns

Agentic search fails when agents skip tools entirely, select the wrong tool type (e.g., web vs. database), or generate invalid parameters for complex queries.
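The third failure mode is the most mechanical to defend against: validate model-generated arguments before executing a tool call. A minimal sketch, with a hypothetical schema format:

```python
# Sketch: catch invalid model-generated parameters (the third failure
# mode above) before a tool call executes.
def validate_args(args: dict, schema: dict) -> list:
    """Return a list of problems; an empty list means the call is safe to run."""
    problems = []
    for name, spec in schema.items():
        if spec.get("required") and name not in args:
            problems.append(f"missing required parameter: {name}")
        elif name in args and not isinstance(args[name], spec["type"]):
            problems.append(f"{name} should be {spec['type'].__name__}")
    return problems

# Hypothetical schema for an ID-lookup tool.
SCHEMA = {"talk_id": {"type": int, "required": True}}
```

Feeding the problem list back to the model as a tool result gives it a chance to self-correct rather than fail silently.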

Acronym ambiguity in semantic search

Basic semantic search struggles with exact keywords and acronyms, as demonstrated when searching 'GDPR' returned results about Google's Gemma models instead of GDPR talks.

Parameter complexity gradient

Simple ID lookups work reliably with small models, but free-form query languages like SQL or ESQL require significantly more capable models and careful prompting.
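The two ends of this gradient can be shown side by side. This is an illustrative sketch (table and column names are hypothetical, using SQLite in place of Elasticsearch/ESQL): the first tool asks the model for a single integer, the second for an entire query string.

```python
import sqlite3

# Hypothetical talks table standing in for a real datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE talks (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO talks VALUES (1, 'Agentic Search'), (2, 'Skills at Scale')")

def get_talk_by_id(talk_id: int) -> str:
    """Constrained tool: the model supplies one integer. Reliable even
    with small models."""
    row = conn.execute("SELECT title FROM talks WHERE id = ?", (talk_id,)).fetchone()
    return row[0] if row else "not found"

def run_sql(query: str) -> list:
    """Free-form tool: the model must write valid SQL from scratch,
    which demands a far more capable model and careful prompting."""
    return conn.execute(query).fetchall()
```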

🛠️ Implementation Best Practices

Invest in tool descriptions

Comprehensive descriptions must include trigger conditions, explicit 'when not to use' guidance, and tool relationships—not just single-sentence summaries.
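An illustrative tool description following that advice (the tool and its siblings are hypothetical): trigger conditions, explicit negative guidance, and pointers to related tools, rather than a one-line summary.

```python
# Sketch of a comprehensive tool description in the style recommended above.
TALK_SEARCH_TOOL = {
    "name": "semantic_talk_search",
    "description": (
        "Search conference talks by meaning. "
        "Use when: the user asks about topics, themes, or vague recollections. "
        "Do NOT use when: the user gives an exact talk ID, speaker name, or "
        "acronym-heavy keyword -- use get_talk_by_id or keyword_search instead. "
        "Related tools: get_talk_by_id (exact IDs), keyword_search (acronyms "
        "and exact terms), web_search (content outside the talk archive)."
    ),
    "parameters": {"query": {"type": "string", "description": "natural-language topic"}},
}
```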

Match model capability to tool complexity

General-purpose search tools that write entire queries from scratch demand stronger models than simple semantic search does; in the talk, GPT-4o Mini handles the former while GPT-4o Nano suffices for the latter.

System prompt reinforcement

When tool descriptions prove insufficient, explicitly codifying tool selection logic in system prompts resolves routing confusion between similar tools.
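A sketch of what that codified routing logic might look like (tool names are hypothetical, carried over from the earlier examples):

```python
# Sketch: explicit tool-selection rules in the system prompt, used when
# tool descriptions alone fail to disambiguate similar tools.
SYSTEM_PROMPT = """You have two search tools that are easy to confuse:
- db_search: structured records in our archive (talks, speakers, schedules).
  Prefer this for anything that could be stored internally.
- web_search: only for information that cannot be in the archive, such as
  current news or external documentation.
Never call web_search for talk metadata. If unsure, try db_search first."""
```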

Bottom Line

Effective agentic search requires curating a diversified toolkit of specialized search methods with exhaustive tool descriptions, rather than expecting a single retrieval method to handle all context engineering challenges.
