Let AI Agents Tell You What They Need — Raj Navakoti, IKEA

| Podcasts | May 05, 2026 | 5.97K views | 1:08:15

TL;DR

Raj Navakoti from IKEA explains why enterprise AI agents, despite strong coding ability, fail to move Jira tickets: they lack institutional knowledge. He proposes a 'demand-driven context' method in which agents pull knowledge by failing on tasks and explicitly demanding the specific missing context, turning undocumented tribal knowledge into curated, reusable blocks.

🏢 The Enterprise Knowledge Crisis 3 insights

The 88% adoption, 6% value paradox

McKinsey data shows 88% of companies use AI but only 6% see value creation because agents cannot resolve delivery tickets requiring undocumented institutional knowledge.

The three knowledge tiers

Agents excel at general coding (green) and learnable skills (orange) but fail at company-specific institutional knowledge (red) needed to complete actual work items.

Monolithic knowledge bases

Enterprise knowledge resembles a monolith with 40% undocumented tribal knowledge, 20% outdated, 20% unreliable, and 10% duplicated across Confluence, Jira, and SharePoint.

⚠️ Why Current Retrieval Fails 3 insights

The MCP server trap

Organizations build 10-20+ MCP servers or RAG systems that push untested, nondeterministic data, often achieving only 10-30% accuracy without proper evaluation.

Surface-level retrieval limits

Current retrieval layers stop at document fetching without surfacing what critical information is actually missing from the knowledge base.

Increased manual overhead

Without curated context, engineers end up doing more manual work filling gaps for agents than if they had completed the tasks themselves.

🎯 The Demand-Driven Solution 3 insights

Pull versus push strategy

Instead of pushing all documentation to agents, assign them tasks and let them fail; each failure makes the agent explicitly demand the specific missing context it needs to proceed.

Failure-driven documentation

Each failure cycle surfaces undocumented requirements, which domain experts provide, and the agent then curates into reusable, structured knowledge blocks.
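The fail/demand/curate cycle described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the helper functions, data shapes, and knowledge keys are assumptions for the sketch, not part of any real agent framework or of IKEA's implementation.

```python
# Hypothetical sketch of the demand-driven context loop. The stub agent
# succeeds only once the context it needs is present; otherwise it fails
# and explicitly names the missing institutional knowledge.

def run_agent(task, context):
    needed = {"refund-policy", "ticket-routing-rules"}  # assumed gaps
    missing = sorted(needed - context.keys())
    if missing:
        return {"status": "blocked", "missing_context": missing}
    return {"status": "done", "output": f"resolved: {task}"}

def ask_domain_expert(gap):
    # In practice a human expert answers; here a canned placeholder.
    return f"expert answer for {gap}"

def demand_driven_loop(task, knowledge_base, max_rounds=5):
    """Assign the task, let the agent fail, capture each demanded gap,
    and curate the expert's answer into a reusable knowledge block."""
    for _ in range(max_rounds):
        result = run_agent(task, context=knowledge_base)
        if result["status"] == "done":
            return result, knowledge_base
        for gap in result["missing_context"]:
            knowledge_base[gap] = ask_domain_expert(gap)
    return result, knowledge_base

result, kb = demand_driven_loop("JIRA-123: customer refund stuck", {})
print(result["status"], sorted(kb))
```

Each pass through the loop leaves the knowledge base richer, so later tasks in the same domain fail less often: the curation is a side effect of doing real work.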

Analogy to TDD and microservices

This mirrors Test-Driven Development and the monolith-to-microservices migration: reliable institutional knowledge is built gradually through iterative problem-solving.

⚙️ Implementation and Validation 3 insights

Published research backing

Raj published a preprint on arXiv in March validating this approach with datasets from IKEA's delivery services domain.

Live demo mechanics

A demonstration showed an agent performing root cause analysis while scoring its confidence on a 1-5 scale and explicitly listing undocumented business terminology and logic gaps.
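A plausible shape for that demo output is a structured report carrying the 1-5 confidence score and the explicitly named gaps. The field names and routing rule below are illustrative assumptions, not taken from the talk.

```python
# Hypothetical structure for a confidence-scored root-cause report that
# lists the undocumented knowledge the agent could not find.
from dataclasses import dataclass, field

@dataclass
class RootCauseReport:
    summary: str
    confidence: int                      # 1 (guessing) .. 5 (certain)
    undocumented_terms: list = field(default_factory=list)
    logic_gaps: list = field(default_factory=list)

    def needs_expert(self) -> bool:
        # Low confidence or any named gap routes the report to a human.
        return bool(self.confidence <= 2
                    or self.undocumented_terms
                    or self.logic_gaps)

report = RootCauseReport(
    summary="Order sync fails when the store code is missing",
    confidence=2,
    undocumented_terms=["store code"],
    logic_gaps=["fallback routing for unmapped stores"],
)
print(report.needs_expert())
```

Making the gaps first-class fields, rather than free text buried in a chat reply, is what lets the curation step pick them up automatically.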

Framework-agnostic approach

The method works with any agent platform including Copilot, Claude Code, or cloud solutions using skills, rules, and hooks to manage the knowledge curation cycle.
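One way to read "hooks" here is a post-run callback that intercepts a blocked result and queues the demanded gaps for curation. The registry below is a generic sketch under that assumption; real platforms such as Copilot or Claude Code expose their own hook and rules mechanisms with different APIs.

```python
# Hypothetical post-run hook that captures an agent's demanded context
# gaps and queues them for expert curation. The registry is illustrative,
# not any platform's real API.
HOOKS = {"after_run": []}

def on(event):
    def register(fn):
        HOOKS[event].append(fn)
        return fn
    return register

@on("after_run")
def capture_gaps(result, knowledge_base):
    # When the agent reports missing context, record a curation TODO.
    for gap in result.get("missing_context", []):
        knowledge_base.setdefault(gap, "PENDING expert input")

kb = {}
for hook in HOOKS["after_run"]:
    hook({"status": "blocked", "missing_context": ["sla terminology"]}, kb)
print(kb)
```

Because the curation cycle lives in the hook rather than in the agent itself, the same loop can sit behind whichever agent platform an organization already runs.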

Bottom Line

Stop building more MCP servers and RAG pipelines; instead, let agents fail on real tasks to surface specific knowledge gaps, then curate the solutions into a demand-driven knowledge base that enables semi-autonomous task completion.
