⚡️ Context graphs: AI’s trillion-dollar opportunity — Jaya Gupta, Ashu Garg, Foundation Capital
TL;DR
Context graphs represent the emerging institutional memory layer for AI agents, capturing 'decision traces'—the reasoning behind human and agent actions—to solve enterprise reliability gaps and create durable competitive moats as foundation models commoditize raw AI capabilities.
🧠 Defining Context Graphs
Capturing the 'why' behind institutional decisions
Unlike traditional systems that record what happened (endpoints), context graphs track why exceptions were granted, how conflicts were resolved, and what precedents were applied, preserving institutional knowledge that was previously trapped in human heads, emails, and Slack threads.
Decision traces as the atomic unit
A decision trace captures the complete sequence of an operational workflow—including queries made, agent analysis, human-in-the-loop overrides, and cross-system context—creating machine-usable history that agents can learn from to improve future execution.
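A decision trace of this kind might be modeled as a simple record type. This is a minimal sketch; the field names and labels below are illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TraceStep:
    """One step in the workflow: a query, an agent analysis, or a human override."""
    actor: str   # hypothetical labels, e.g. "agent:refund-bot" or "human:ops-lead"
    action: str  # e.g. "query", "analysis", "override"
    detail: str  # the reasoning or payload summary for this step

@dataclass
class DecisionTrace:
    """Machine-usable history of one decision, preserving the 'why'."""
    workflow: str
    outcome: str
    rationale: str  # why this outcome was chosen, not just what it was
    precedents: list[str] = field(default_factory=list)  # ids of prior traces applied
    steps: list[TraceStep] = field(default_factory=list)
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# An exception grant whose reasoning is captured, not just its endpoint:
trace = DecisionTrace(
    workflow="refund-approval",
    outcome="exception_granted",
    rationale="Strategic account; a prior trace showed goodwill refunds retain renewals.",
    precedents=["trace-2024-0187"],
    steps=[
        TraceStep("agent:refund-bot", "query", "Pulled order history and contract tier."),
        TraceStep("human:ops-lead", "override", "Approved above policy limit."),
    ],
)
print(trace.outcome)  # exception_granted
```

The point of the structure is that `rationale`, `precedents`, and the intermediate `steps` travel with the outcome, so a future agent can retrieve not just what was decided but why.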
Distinct from metadata and knowledge graphs
While critics label this 'metadata 3.0,' decision traces differ fundamentally because they emerge as byproducts of execution rather than being modeled upfront through workshops and ETL processes, and they capture operational logic rather than just static relationships.
🏢 The Enterprise AI Imperative
Closing the reliability gap in agent systems
Despite 2025 bringing capable agents like Claude, Devin, Sierra, and Decagon, enterprise deployment remains limited because agents lack access to historical reasoning—specifically the tacit knowledge of how similar situations were handled and why specific approaches failed or succeeded.
Systems of agents vs. single-player chatbots
Foundation Capital distinguishes between basic chatbots and 'systems of agents'—multi-agent, multiplayer architectures with humans in the loop that drive decisions across entire business processes, where context graphs serve as the critical memory layer enabling complex orchestration.
⚙️ Architecture and Implementation
Write path capture vs. read path analytics
Context graphs are built in the operational 'write path'—capturing the sequence of steps, queries, and decisions as they happen—unlike data warehouses or systems of record that only store structured endpoints after decisions conclude, missing the reasoning journey entirely.
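As a sketch of the write-path idea (all names here are illustrative, not any vendor's API): a wrapper records each step as it executes, so the trace is a byproduct of running the workflow rather than a reconstruction after the fact:

```python
import functools
from typing import Callable

TRACE_LOG: list[dict] = []  # in-memory stand-in for a trace store

def traced(step_name: str) -> Callable:
    """Capture each step in the operational write path, as it happens."""
    def decorator(fn: Callable) -> Callable:
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            TRACE_LOG.append({"step": step_name, "args": args, "result": result})
            return result
        return wrapper
    return decorator

@traced("lookup_policy")
def lookup_policy(customer_tier: str) -> str:
    return "manual_review" if customer_tier == "enterprise" else "auto"

@traced("decide")
def decide(route: str) -> str:
    return "approved" if route == "manual_review" else "auto_approved"

outcome = decide(lookup_policy("enterprise"))
# A system of record stores only `outcome`; the write path keeps the journey:
print([entry["step"] for entry in TRACE_LOG])  # ['lookup_policy', 'decide']
```

Contrast this with a warehouse table that would record only `outcome = "approved"`: the intermediate routing decision, and the reason for it, would be gone by the time analytics runs.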
Unstructured data as the primary challenge
Because most organizational value lives in unstructured dark data (Zoom calls, email threads, ad-hoc Slack messages), effective implementations require specialized models to extract signal from noise, with approaches varying from semi-structured code analysis (Player Zero) to conversational parsing.
Governance and privacy as foundational requirements
Capturing decision traces requires solving sensitive data management and PII handling (exemplified by companies like Skyflow), as these traces cross organizational boundaries, capture privileged human deliberations, and require strict controls on what agents can access and learn from.
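A minimal sketch of one control this implies (the patterns and placeholder scheme are assumptions for illustration, far simpler than production PII detection): redact sensitive fields before a trace is written, so downstream agents never see them:

```python
import re

# Illustrative patterns only; real systems use much richer detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before trace storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

raw = "Escalated per jane.doe@example.com; SSN 123-45-6789 on file."
print(redact(raw))
# Escalated per <email>; SSN <ssn> on file.
```

Running redaction at write time, rather than at query time, means the sensitive value never enters the context graph at all, which is the stricter of the two postures.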
⚔️ Competitive Landscape and Moats
Startups hold the architectural advantage
New entrants operating in the orchestration path have a unique advantage over incumbents like Salesforce because they are not bound to legacy structured data schemas or existing business processes, allowing them to natively stitch context across silos rather than reconstructing it after the fact.
Proprietary context as the defensible layer
As foundation models commoditize raw AI capabilities, the accumulation of proprietary decision traces becomes the primary enduring moat for application companies, creating a data flywheel where agent execution improves organizational context, which in turn improves agent performance.
Multiple implementation categories emerging
The concept is being implemented across diverse categories including application layers (Player Zero, Olive, Tacera), data infrastructure (Glean, Atlan, graph databases), and security governance (Okta, Analogic), with no single standard data structure yet dominating.
Bottom Line
As AI capabilities commoditize through foundation models, the only durable competitive advantage for enterprise AI companies will be proprietary context graphs built by capturing decision traces natively within operational workflows. That makes the shift from analytical 'read' systems to operational 'write' systems the defining architectural priority for the next decade.