AI Enterprise - Databricks & Glean | BG2 Guest Interview
TL;DR
Databricks and Glean executives argue that the widely cited 95% failure rate for enterprise AI projects reflects necessary experimentation: with LLMs now commodities, true competitive advantage comes from leveraging proprietary data through learning-based systems rather than brittle automation.
💼 The Enterprise AI Reality
95% failure rate signals healthy experimentation
High failure rates indicate companies are aggressively testing AI rather than waiting for perfect solutions, which is the desired state for emerging technology adoption.
LLMs have become interchangeable commodities
Like gasoline from different stations, foundation models are now largely interchangeable and compete on price, with leadership shifting weekly, making model selection less strategic than data leverage.
Three distinct camps in the AI bubble
The market contains super-intelligence seekers (in a bubble), sober researchers (likely correct but ignored), and value creators focused on economic utility (where Databricks and Glean position themselves).
🎯 High-Value Use Cases That Work
RBC automates equity research in 15 minutes
Royal Bank of Canada built agents that compress earnings report analysis from two hours to 15 minutes by aggregating market data, competitor filings, and news.
Merck's Teddy model revolutionizes drug discovery
The pharmaceutical giant created a transformer model that predicts gene regulatory networks and missing genomes, enabling breakthrough capabilities in understanding gene expression.
7-Eleven automates granular marketing segmentation
The retailer uses agents to automate audience segmentation and personalized content creation, moving from broad demographic targeting to individualized campaign materials at scale.
Success requires proprietary data leverage
Working implementations consistently leverage unique company data and specific business processes that competitors cannot easily replicate, avoiding commodity "demoware."
⚡ Strategic Implementation Framework
Data strategy must precede AI strategy
Organizations cannot achieve AI differentiation without first organizing their proprietary data houses, as competitive moats reside in unique datasets rather than model capabilities.
Fundamental shift from RPA rules to learning systems
Unlike brittle, rule-based RPA, which required explicit programming for every scenario, modern AI generalizes and improves through pattern recognition, handling unexpected variations in desktop workflows.
CIOs should pursue parallel experimentation
Leaders should allocate budgets across multiple vendors with shorter-term contracts, prioritizing products that demonstrate value quickly without six-month implementation cycles.
Bottom Line
Enterprise AI success requires focusing investments on proprietary data integration and unique business process automation rather than commoditized LLM capabilities, while accepting high initial failure rates as the necessary cost of experimentation.
More from BG2Pod
ChatGPT – The Super Assistant Era | BG2 Guest Interview
OpenAI's Nick Turley reveals ChatGPT evolved from a planned one-month demo to a 900-million-user product by prioritizing long-term retention over short-term revenue, with future growth hinging on transforming the AI from a passive chat tool into a proactive super assistant capable of autonomous action.
All things AI w @altcap @sama & @satyanadella. A Halloween Special. 🎃🔥BG2 w/ Brad Gerstner
OpenAI and Microsoft executives detail their restructured partnership, revealing a $130 billion nonprofit controlling OpenAI's public benefit corporation, exclusive Azure hosting until 2030 or verified AGI, and over $1 trillion in compute commitments justified by steep revenue growth and insatiable demand for AI capabilities.
AI Bubble, Stablecoin Boom, and Runnin' Down a Dream | BG2 w/ Bill Gurley and Brad Gerstner
Bill Gurley announces his departure as BG2 co-host to focus on policy work and his new book, while he and Brad Gerstner debate whether AI infrastructure spending constitutes a bubble, analyzing circular revenue transactions, unprecedented Big Tech capex levels, and the competitive dynamics driving potential overbuilding.
NVIDIA: OpenAI, Future of Compute, and the American Dream | BG2 w/ Bill Gurley and Brad Gerstner
Jensen Huang argues that AI compute demand will explode due to three compounding scaling laws (pre-training, post-training, and reasoning), while dismissing Wall Street fears of a coming glut by framing the shift from general-purpose to accelerated computing as a multi-trillion dollar infrastructure transition that is still in its early innings.