Snowflake VP of AI Baris Gultekin on Bringing AI to Data, Agent Design, Text-2-SQL, RAG & More

Podcasts | January 14, 2026 | 57.1K views | 1:40:51

TL;DR

Snowflake is deploying enterprise AI by bringing models directly to governed data rather than exporting sensitive information, using reasoning models both to unlock the 80-90% of enterprise data previously trapped in unstructured documents and to enable reliable natural-language analytics for non-technical business users.

🏢 The "AI to Data" Architecture

Bring AI to data, not data to models

Snowflake's core enterprise strategy runs AI compute next to stored data to satisfy strict security, governance, and data residency requirements, avoiding the risks of sending sensitive information to external model providers.

Unstructured data becomes queryable

Enterprises can now activate the 80-90% of data previously trapped in PDFs and documents, enabling analysis of contracts, equities research, and compliance files alongside traditional structured databases for the first time.

Semantic context from existing BI assets

Snowflake extracts semantic meaning from metadata, existing BI dashboards, and historical query logs to help AI agents understand business definitions like 'revenue' across thousands of tables without manual documentation.
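
To make the idea concrete, here is a minimal sketch of mining query logs for metric definitions; the regex-based approach and the `infer_metric_definitions` helper are illustrative assumptions, not Snowflake's implementation.

```python
import re
from collections import Counter, defaultdict

# Toy stand-in for mining historical query logs: find the SQL expressions
# analysts actually alias to business terms like "revenue" and rank them
# by frequency. Real systems would also mine BI dashboards and metadata.
ALIAS_PATTERN = re.compile(r"(\w+\([\w.\s*,]+\))\s+AS\s+(\w+)", re.IGNORECASE)

def infer_metric_definitions(query_log: list[str]) -> dict[str, list[tuple[str, int]]]:
    """Map each alias (business term) to its most frequent SQL definitions."""
    candidates: defaultdict[str, Counter] = defaultdict(Counter)
    for sql in query_log:
        for expression, alias in ALIAS_PATTERN.findall(sql):
            candidates[alias.lower()][expression.upper()] += 1
    return {term: counts.most_common(3) for term, counts in candidates.items()}

log = [
    "SELECT SUM(net_amount) AS revenue FROM sales.orders",
    "SELECT SUM(net_amount) AS revenue FROM sales.orders WHERE region = 'EU'",
    "SELECT SUM(list_price) AS revenue FROM sales.quotes",  # conflicting definition
]
print(infer_metric_definitions(log))
# {'revenue': [('SUM(NET_AMOUNT)', 2), ('SUM(LIST_PRICE)', 1)]}
```

Conflicting definitions surface as multiple candidates, which is exactly the ambiguity a semantic model then has to resolve.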

💬 Natural Language Analytics & Text-to-SQL

Reasoning models unlock business-user reliability

Text-to-SQL crossed the deployment threshold in the last 6-12 months as reasoning models improved, enabling Snowflake Intelligence (the company's fastest-growing product) to serve non-analysts who previously waited weeks for insights.
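
A minimal sketch of the text-to-SQL loop, with `call_reasoning_model` as a hypothetical stand-in for the LLM call and SQLite's EXPLAIN as a cheap stand-in for warehouse-side query validation:

```python
import sqlite3

SCHEMA = "CREATE TABLE orders (order_id INT, region TEXT, net_amount REAL, closed_at DATE);"

def call_reasoning_model(prompt: str) -> str:
    """Hypothetical stand-in; a real system would call a reasoning model."""
    return "SELECT region, SUM(net_amount) AS revenue FROM orders GROUP BY region"

def text_to_sql(question: str, schema: str, max_retries: int = 2) -> str:
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema)
    prompt = f"Schema:\n{schema}\nQuestion: {question}\nReturn one SQL query."
    for _ in range(max_retries + 1):
        sql = call_reasoning_model(prompt)
        try:
            conn.execute(f"EXPLAIN {sql}")  # parses and plans without executing
            return sql
        except sqlite3.Error as err:
            prompt += f"\nPrevious attempt failed: {err}. Fix the query."
    raise RuntimeError("no valid SQL produced")

print(text_to_sql("What is revenue by region?", SCHEMA))
```

The retry-on-error loop matters: feeding the database's own error message back to the model is a cheap way to buy reliability for non-technical users.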

Open Semantic Interchange standard

Snowflake is developing an open standard with Tableau, Omni, and other BI platforms to allow semantic models built in one system to be portable across vendors, reducing vendor lock-in and accelerating AI deployment.

High-stakes accuracy requirements

Unlike creative AI use cases, analytics demands single correct answers (e.g., 'what's my revenue'), requiring rigorous semantic modeling to resolve ambiguities across hundreds of thousands of columns and complex table relationships.
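
A hypothetical sketch of what such a semantic layer looks like (the structure below is illustrative, not the Open Semantic Interchange format): an ambiguous term like 'revenue' resolves to exactly one vetted definition.

```python
# Illustrative semantic layer: not the Open Semantic Interchange format,
# just the shape of the idea. "revenue" maps to one vetted SQL definition,
# so every phrasing of the question yields the same answer.
SEMANTIC_MODEL = {
    "metrics": {
        "revenue": {
            "table": "sales.orders",
            "expression": "SUM(net_amount)",
            "filters": ["status = 'closed'"],  # excludes open quotes
            "synonyms": ["sales", "turnover"],
        },
    },
}

def resolve_metric(term: str, model: dict) -> dict:
    term = term.lower()
    for name, spec in model["metrics"].items():
        if term == name or term in spec["synonyms"]:
            return spec
    raise KeyError(f"unknown or ambiguous metric: {term!r}")

spec = resolve_metric("turnover", SEMANTIC_MODEL)
where = " AND ".join(spec["filters"])
print(f"SELECT {spec['expression']} FROM {spec['table']} WHERE {where}")
# SELECT SUM(net_amount) FROM sales.orders WHERE status = 'closed'
```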

📄 RAG & Document Intelligence

Web-scale search infrastructure

Snowflake leverages technology from its 2023 acquisition of Neeva (an AI search engine) to power enterprise RAG, focusing on embedding-model quality, hybrid search, and re-ranking to handle messy PDFs with images, tables, and multi-column layouts.
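
A toy sketch of the hybrid retrieval pipeline: the keyword and vector scorers below are simplistic stand-ins (real systems use BM25, trained embedding models, and a cross-encoder re-ranker), but the reciprocal rank fusion step is a standard way the two result lists get combined.

```python
import math
from collections import Counter

DOCS = [
    "Q3 revenue grew 12% driven by subscription renewals",
    "The contract includes a data residency clause for EU customers",
    "Compliance filing: revenue recognition policy updated in Q3",
]

def tokens(text: str) -> list[str]:
    return text.lower().split()

def keyword_score(query: str, doc: str) -> float:
    """Toy lexical scorer; a stand-in for BM25."""
    return float(len(set(tokens(query)) & set(tokens(doc))))

def vector_score(query: str, doc: str) -> float:
    """Toy cosine over term counts; a stand-in for embedding similarity."""
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def reciprocal_rank_fusion(rankings: list[list[int]], k: int = 60) -> list[int]:
    """Standard RRF: documents ranked high in either list float to the top."""
    scores: Counter = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return [doc_id for doc_id, _ in scores.most_common()]

query = "Q3 revenue"
by_keyword = sorted(range(len(DOCS)), key=lambda i: -keyword_score(query, DOCS[i]))
by_vector = sorted(range(len(DOCS)), key=lambda i: -vector_score(query, DOCS[i]))
fused = reciprocal_rank_fusion([by_keyword, by_vector])
print([DOCS[i][:40] for i in fused])  # a cross-encoder would then re-rank the top-k
```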

Analytical agentic document processing

Advanced RAG now enables 'analytical' queries across thousands of documents—such as calculating average revenue over ten years from scattered quarterly reports—rather than simple retrieval of specific passages.
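
The pattern behind such analytical queries is essentially map-reduce: extract one value per document, then aggregate. A sketch, with `extract_metric` as a hypothetical LLM-backed extractor stubbed with canned values so the example runs:

```python
import statistics

# Canned per-document values standing in for thousands of quarterly reports.
QUARTERLY_REPORTS = {
    "2014-Q1.pdf": 41.2, "2014-Q2.pdf": 43.8,
    "2023-Q3.pdf": 96.5, "2023-Q4.pdf": 101.3,
}

def extract_metric(doc_name: str, metric: str) -> float | None:
    """Hypothetical stand-in: a real system prompts an LLM with the document text."""
    return QUARTERLY_REPORTS.get(doc_name)

def average_over_corpus(doc_names: list[str], metric: str) -> float:
    """Map (per-document extraction) then reduce (aggregate)."""
    values = [v for name in doc_names if (v := extract_metric(name, metric)) is not None]
    return statistics.mean(values)

print(f"avg revenue: {average_over_corpus(list(QUARTERLY_REPORTS), 'revenue'):.1f}M")
# avg revenue: 70.7M
```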

Automation of chunking strategies

The field is moving away from manual chunking configuration toward automated systems that determine optimal document segmentation and extraction strategies without heavy AI engineering intervention.
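
A toy example of the kind of heuristic this automation replaces: a structure-aware chunker that splits on paragraph boundaries and packs under a size budget, the sort of thing engineers previously tuned by hand.

```python
def chunk_document(text: str, max_chars: int = 400) -> list[str]:
    """Split on blank-line paragraph boundaries, then pack greedily under a
    size budget. Automated systems also detect headings, tables, and
    multi-column layouts; this is deliberately the simplest version."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "Intro paragraph.\n\n" + "\n\n".join(f"Section {i}: body text." for i in range(1, 8))
for i, chunk in enumerate(chunk_document(doc, max_chars=80)):
    print(i, repr(chunk))
```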

⚖️ Model Selection: Frontier vs. Specialized

Frontier models for complexity, specialized for scale

Claude 4.5 or Gemini 3 handle small volumes of complex documents effectively, but processing hundreds of millions of documents requires Snowflake's specialized extraction models, which are orders of magnitude smaller, cheaper, and faster.

Throughput and cost drive architecture decisions

Enterprise document processing pipelines prioritize inference speed and cost efficiency over general capability, making fine-tuned small models essential for high-volume workflows despite the power of frontier LLMs.
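
The arithmetic is stark enough to sketch. The prices and throughputs below are made-up placeholders (not actual vendor numbers), but they show why per-document cost and speed dominate the model choice at scale:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    cost_per_doc: float  # USD per document, assumed
    docs_per_hour: int   # per worker, assumed

# Placeholder numbers for illustration only.
FRONTIER = ModelProfile("frontier-llm", cost_per_doc=0.05, docs_per_hour=500)
SPECIALIZED = ModelProfile("small-extractor", cost_per_doc=0.0005, docs_per_hour=50_000)

def pick_model(n_docs: int, budget_usd: float, complex_layout: bool) -> ModelProfile:
    """Route to the frontier model only when complexity demands it and budget allows."""
    if complex_layout and n_docs * FRONTIER.cost_per_doc <= budget_usd:
        return FRONTIER
    return SPECIALIZED

n = 200_000_000  # hundreds of millions of documents
for model in (FRONTIER, SPECIALIZED):
    print(f"{model.name}: ${n * model.cost_per_doc:,.0f}, "
          f"{n / model.docs_per_hour:,.0f} worker-hours")
# frontier-llm: $10,000,000, 400,000 worker-hours
# small-extractor: $100,000, 4,000 worker-hours

print(pick_model(n, budget_usd=250_000, complex_layout=True).name)  # small-extractor
```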

Enterprise fine-tuning remains niche

Custom model training is reserved for specific scenarios where enterprises possess large volumes of unique data the base model has never encountered and face strict throughput or cost constraints that off-the-shelf solutions cannot meet.

Bottom Line

Deploy practical AI agents today by keeping data within your governed environment, leveraging reasoning models to enable natural language queries for business users, and selecting model sizes based on document volume and cost constraints rather than defaulting to the largest available frontier models.

More from Cognitive Revolution

AI Scouting Report: the Good, Bad, & Weird @ the Law & AI Certificate Program, by LexLab, UC Law SF
1:18:46
Cognitive Revolution

Nathan Labenz delivers a rapid-fire survey of the current AI landscape, documenting breakthrough capabilities in reasoning and autonomous agents alongside alarming emergent behaviors like safety test recognition and internal dialect formation, while arguing that outdated critiques regarding hallucinations and comprehension no longer apply to frontier models.

9 days ago · 10 points
Bioinfohazards: Jassi Pannu on Controlling Dangerous Data from which AI Models Learn
1:45:53
Cognitive Revolution

AI systems are rapidly approaching capabilities that could enable extremists or lone actors to engineer pandemic-capable pathogens using publicly available biological data. Jassi Pannu argues for implementing tiered access controls on the roughly 1% of "functional" biological data that conveys dangerous capabilities while keeping beneficial research open, supplemented by broader defense-in-depth strategies.

14 days ago · 9 points