Snowflake VP of AI Baris Gultekin on Bringing AI to Data, Agent Design, Text-2-SQL, RAG & More

| Podcasts | January 14, 2026 | 57.1K views | 1:40:51

TL;DR

Snowflake deploys enterprise AI by bringing models directly to governed data rather than exporting sensitive information, using reasoning models to unlock the 80-90% of enterprise data trapped in unstructured documents, and enabling reliable natural language analytics for non-technical business users.

🏢 The "AI to Data" Architecture

Bring AI to data, not data to models

Snowflake's core enterprise strategy runs AI compute next to stored data to satisfy strict security, governance, and data residency requirements, avoiding the risks of sending sensitive information to external model providers.

Unstructured data becomes queryable

Enterprises can now activate the 80-90% of data previously trapped in PDFs and documents, enabling analysis of contracts, equities research, and compliance files alongside traditional structured databases for the first time.

Semantic context from existing BI assets

Snowflake extracts semantic meaning from metadata, existing BI dashboards, and historical query logs to help AI agents understand business definitions like 'revenue' across thousands of tables without manual documentation.
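One way to picture mining query logs for business definitions is a frequency count over expressions that analysts have historically aliased as a given term. This is a toy sketch (a simple regex over SQL text standing in for real query-log parsing; the log contents are invented):

```python
import re
from collections import Counter

def mine_term_definitions(query_log, term):
    """Scan historical SQL for function expressions aliased as a business
    term (e.g. `SUM(net_amount) AS revenue`) and rank candidates by how
    often analysts have used each definition."""
    pattern = re.compile(r"(\w+\([^)]*\))\s+AS\s+" + re.escape(term),
                         re.IGNORECASE)
    candidates = Counter()
    for sql in query_log:
        for expr in pattern.findall(sql):
            candidates[expr.strip().lower()] += 1
    return candidates.most_common()

# Hypothetical query-log entries:
log = [
    "SELECT SUM(net_amount) AS revenue FROM orders",
    "SELECT region, SUM(net_amount) AS revenue FROM orders GROUP BY region",
    "SELECT SUM(list_price) AS revenue FROM quotes",
]
print(mine_term_definitions(log, "revenue"))
```

The most common alias surfaces as the likely canonical definition, which a human can then confirm instead of documenting thousands of tables by hand.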

💬 Natural Language Analytics & Text-to-SQL

Reasoning models unlock business-user reliability

Text-to-SQL crossed the deployment threshold in the last 6-12 months as reasoning models improved, enabling Snowflake Intelligence (the company's fastest-growing product) to serve non-analysts who previously waited weeks for insights.

Open Semantic Interchange standard

Snowflake is developing an open standard with Tableau, Omni, and other BI platforms to allow semantic models built in one system to be portable across vendors, reducing vendor lock-in and accelerating AI deployment.

High-stakes accuracy requirements

Unlike creative AI use cases, analytics demands single correct answers (e.g., 'what's my revenue'), requiring rigorous semantic modeling to resolve ambiguities across hundreds of thousands of columns and complex table relationships.
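The single-correct-answer requirement is why semantic models pin each business term to one vetted SQL expression. A minimal sketch of that idea (table, column, and filter names here are hypothetical, not Snowflake's actual semantic model format):

```python
# One vetted definition per business term, so "what's my revenue"
# always compiles to the same query instead of an ambiguous guess.
SEMANTIC_MODEL = {
    "revenue": {
        "expr": "SUM(order_lines.net_amount)",
        "base_table": "order_lines",
        "filters": ["order_lines.status = 'booked'"],
    },
}

def compile_metric(term: str) -> str:
    """Compile a business term into SQL, refusing unknown terms rather
    than guessing among hundreds of thousands of columns."""
    metric = SEMANTIC_MODEL.get(term.lower())
    if metric is None:
        raise KeyError(f"no vetted definition for {term!r}")
    where = " AND ".join(metric["filters"])
    return (f"SELECT {metric['expr']} AS {term} "
            f"FROM {metric['base_table']} WHERE {where}")

print(compile_metric("revenue"))
```

Refusing unknown terms is the key design choice: in analytics, a wrong-but-plausible answer is worse than an explicit failure.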

📄 RAG & Document Intelligence

Web-scale search infrastructure

Snowflake leverages technology from its 2023 acquisition of Neeva (an AI search engine) to power enterprise RAG, focusing on embedding model quality, hybrid search, and re-ranking to handle messy PDFs with images, tables, and multi-column layouts.
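The hybrid-search-plus-rerank pattern can be sketched in a few lines. This toy version fuses a lexical score with a dense score and then re-scores the shortlist; a production pipeline would swap in BM25, a learned embedding model, and a cross-encoder reranker (the scorers and documents below are stand-ins, not Snowflake's implementation):

```python
import math
from collections import Counter

def keyword_score(query, doc):
    """Lexical term overlap (stand-in for BM25)."""
    q, d = set(query.lower().split()), Counter(doc.lower().split())
    return sum(d[t] for t in q)

def dense_score(query, doc):
    """Cosine over bag-of-words vectors (stand-in for an embedding model)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def hybrid_search(query, docs, alpha=0.5, top_k=2):
    """Fuse lexical and dense scores, keep the top-k candidates."""
    score = lambda d: alpha * keyword_score(query, d) + (1 - alpha) * dense_score(query, d)
    return sorted(docs, key=score, reverse=True)[:top_k]

def rerank(query, candidates):
    """Re-score only the shortlist (stand-in for a cross-encoder)."""
    return sorted(candidates, key=lambda d: dense_score(query, d), reverse=True)

docs = [
    "quarterly revenue rose 12 percent",
    "the cafeteria menu changed",
    "revenue guidance for next quarter",
]
print(rerank("revenue growth this quarter",
             hybrid_search("revenue growth this quarter", docs)))
```

The cheap fused pass narrows the corpus; the expensive reranker only ever sees the shortlist, which is what makes the pattern affordable at scale.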

Analytical agentic document processing

Advanced RAG now enables 'analytical' queries across thousands of documents—such as calculating average revenue over ten years from scattered quarterly reports—rather than simple retrieval of specific passages.
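The analytical pattern is essentially map-then-aggregate: run an extraction step over every document, then compute over the extracted values. A toy sketch, with a regex standing in for the LLM extraction step and invented report text:

```python
import re

def extract_revenue(doc_text):
    """Stand-in for an LLM extraction step: pull one revenue figure
    (in $M) out of a quarterly report, or None if absent."""
    m = re.search(r"revenue of \$(\d+(?:\.\d+)?)M", doc_text)
    return float(m.group(1)) if m else None

def average_revenue(docs):
    """Map the extractor over every document, then aggregate —
    the 'analytical' layer on top of plain retrieval."""
    figures = [v for v in (extract_revenue(d) for d in docs) if v is not None]
    return sum(figures) / len(figures) if figures else None

reports = [
    "Q1 FY24: revenue of $110M, up 8% year over year.",
    "Q2 FY24: revenue of $130M driven by new products.",
    "Analyst note with no hard numbers.",
]
print(average_revenue(reports))  # 120.0
```

Simple retrieval would only hand back one relevant passage; the map-then-aggregate shape is what lets a query touch thousands of documents at once.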

Automation of chunking strategies

The field is moving away from manual chunking configuration toward automated systems that determine optimal document segmentation and extraction strategies without heavy AI engineering intervention.
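Automated chunking amounts to letting the system pick a segmentation strategy per document instead of a hand-tuned global setting. A heuristic sketch (the heading test, target length, and packing rule are all illustrative assumptions):

```python
import re

def auto_chunk(text, target_len=400):
    """Heuristic auto-chunker: prefer heading-delimited sections when the
    document has structure, fall back to paragraph packing otherwise."""
    if text.startswith("# ") or "\n# " in text:
        # Structured doc: split on markdown-style headings.
        parts = [p for p in re.split(r"\n(?=# )", text) if p.strip()]
    else:
        parts = [p for p in text.split("\n\n") if p.strip()]
    # Pack adjacent parts until roughly target_len characters per chunk.
    chunks, buf = [], ""
    for p in parts:
        if buf and len(buf) + len(p) > target_len:
            chunks.append(buf)
            buf = p
        else:
            buf = f"{buf}\n\n{p}" if buf else p
    if buf:
        chunks.append(buf)
    return chunks
```

Production systems push this further, choosing extraction strategy (tables vs. prose, per-page vs. per-section) automatically, but the shape is the same: inspect the document, then segment.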

⚖️ Model Selection: Frontier vs. Specialized

Frontier models for complexity, specialized for scale

Claude 4.5 or Gemini 3 handle small volumes of complex documents effectively, but processing hundreds of millions of documents requires Snowflake's specialized extraction models, which are orders of magnitude smaller, cheaper, and faster.

Throughput and cost drive architecture decisions

Enterprise document processing pipelines prioritize inference speed and cost efficiency over general capability, making fine-tuned small models essential for high-volume workflows despite the power of frontier LLMs.
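The routing decision above can be sketched as a simple cost-and-complexity check. Model names and per-document prices here are illustrative assumptions, not quoted figures:

```python
def pick_model(n_docs, complexity, budget_usd):
    """Routing sketch: a frontier model for small batches of hard
    documents, a small specialized extractor for high-volume pipelines.
    Costs are assumed $/doc for illustration only."""
    FRONTIER_COST, SMALL_COST = 0.05, 0.0005
    if complexity == "high" and n_docs * FRONTIER_COST <= budget_usd:
        return "frontier-llm"
    return "small-extractor"

print(pick_model(200, "high", budget_usd=50))
print(pick_model(100_000_000, "low", budget_usd=50))
```

At hundreds of millions of documents, even a tiny per-document price dominates the decision, which is why the specialized model wins on volume regardless of frontier capability.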

Enterprise fine-tuning remains niche

Custom model training is reserved for specific scenarios where enterprises possess large volumes of unique data the base model has never encountered and face strict throughput or cost constraints that off-the-shelf solutions cannot meet.

Bottom Line

Deploy practical AI agents today by keeping data within your governed environment, leveraging reasoning models to enable natural language queries for business users, and selecting model sizes based on document volume and cost constraints rather than defaulting to the largest available frontier models.
