AI Enterprise - Databricks & Glean | BG2 Guest Interview

Podcasts | December 23, 2025 | 23.8K views | 45:01

TL;DR

Databricks and Glean executives argue that the widely cited 95% failure rate for enterprise AI projects reflects necessary experimentation: LLMs have become commodities, and true competitive advantage comes from leveraging proprietary data through learning-based systems rather than brittle automation.

💼 The Enterprise AI Reality (3 insights)

95% failure rate signals healthy experimentation

High failure rates indicate companies are aggressively testing AI rather than waiting for perfect solutions, which is the desired state for emerging technology adoption.

LLMs have become interchangeable commodities

Like gasoline from different stations, foundation models now compete largely on price, with leadership shifting weekly, making model selection less strategic than data leverage.

Three distinct camps in the AI bubble

The market contains super-intelligence seekers (in a bubble), sober researchers (likely correct but ignored), and value creators focused on economic utility (where Databricks and Glean position themselves).

🎯 High-Value Use Cases That Work (4 insights)

RBC automates equity research in 15 minutes

Royal Bank of Canada built agents that compress earnings report analysis from two hours to 15 minutes by aggregating market data, competitor filings, and news.

Merck's Teddy model revolutionizes drug discovery

The pharmaceutical giant created a transformer model that predicts gene regulatory networks and missing genomes, enabling breakthrough capabilities in understanding gene expression.

7-Eleven automates granular marketing segmentation

The retailer uses agents to automate audience segmentation and personalized content creation, moving from broad demographic targeting to individualized campaign materials at scale.

Success requires proprietary data leverage

Working implementations consistently leverage unique company data and specific business processes that competitors cannot easily replicate, avoiding commoditized 'demo-ware'.

Strategic Implementation Framework (3 insights)

Data strategy must precede AI strategy

Organizations cannot achieve AI differentiation without first getting their proprietary data in order, as competitive moats reside in unique datasets rather than model capabilities.

Fundamental shift from RPA rules to learning systems

Unlike brittle, rule-based RPA, which required explicit programming for every scenario, modern AI generalizes and improves through pattern recognition, handling unexpected variations in desktop workflows.

CIOs should pursue parallel experimentation

Leaders should allocate budgets across multiple vendors with shorter-term contracts, prioritizing products that demonstrate value quickly without six-month implementation cycles.

Bottom Line

Enterprise AI success requires focusing investments on proprietary data integration and unique business process automation rather than commoditized LLM capabilities, while accepting high initial failure rates as the necessary cost of experimentation.
