Intelligence with Everyone: RL @ MiniMax, with Olive Song, from AIE NYC & Inference by Turing Post

| Podcasts | February 22, 2026 | 60.6K views | 56:39

TL;DR

MiniMax researcher Olive Song details how their M2 model, with 10 billion active parameters, achieves state-of-the-art coding and agentic performance through interleaved thinking, systematic environment perturbations, and tight feedback loops with in-house expert developers.

๐Ÿข Integrated Development & Expert Feedback 2 insights

Tight feedback loops between research and applications

MiniMax uniquely builds both foundation models and user-facing applications in-house, allowing cross-functional teams to rapidly identify and fix model weaknesses through direct deployment experience.

Expert developers serve as human reward models

In-house developers actively participate in the training cycle by defining problems, refactoring repos, and providing precise reward signals on which model behaviors are reliable and useful.

🔄 Interleaved Thinking Architecture (2 insights)

Dynamic adaptation through interleaved thinking

M2 interleaves reasoning with tool execution, allowing the model to observe environmental feedback and re-think before acting again across 10-100 turns rather than using single-pass reasoning.

Long-horizon workflow automation

This architecture enables autonomous handling of noisy, dynamic environments and complex multi-tool workflows using Gmail, Notion, and terminals with minimal human intervention.
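The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of interleaved thinking, not MiniMax's actual API: `model` and `tools` are stand-in names, and the message schema is assumed. The key point is that tool output re-enters the context before each new round of reasoning, rather than the model planning everything in a single pass.

```python
# Hypothetical sketch of an interleaved-thinking agent loop: the model
# alternates reasoning with tool calls and re-reads each tool result
# before deciding its next action, across up to `max_turns` turns.
# `model` and `tools` are stand-ins, not MiniMax's real interface.

def run_agent(model, tools, task, max_turns=100):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        step = model(history)  # emits fresh thinking plus one action
        history.append({"role": "assistant",
                        "content": step["thinking"],
                        "action": step["action"]})
        if step["action"]["name"] == "finish":
            return step["action"]["args"]["answer"]
        # execute the chosen tool and feed its output back into context,
        # so the next round of thinking can react to the environment
        result = tools[step["action"]["name"]](**step["action"]["args"])
        history.append({"role": "tool", "content": str(result)})
    return None  # turn budget exhausted
```

The contrast with single-pass reasoning is that `model` is called once per turn, after every observation, so a surprising tool result on turn 40 can still redirect the plan.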

๐Ÿ›ก๏ธ Training Robustness & Infrastructure 3 insights

Perturbation pipelines enforce broad generalization

The team systematically varies training environments across tools, prompts, chat templates, and scaffolds to ensure generalization across the model's entire operational space.
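A perturbation pipeline of this kind can be sketched as sampling a fresh environment configuration per rollout. Everything below is illustrative, the option lists and function names are assumptions, not MiniMax internals; the idea is only that no single tool set, prompt, template, or scaffold dominates training.

```python
import random

# Illustrative sketch of environment perturbation: each rollout draws a
# different combination of tools, system prompt, chat template, and
# scaffold, so the policy cannot overfit to one fixed setup.
# All option values here are made-up placeholders.

TOOLSETS = [("bash", "editor"), ("bash", "editor", "browser")]
SYSTEM_PROMPTS = ["You are a coding agent.", "Solve the task using the tools provided."]
CHAT_TEMPLATES = ["chatml", "plain"]
SCAFFOLDS = ["react", "plan-then-act"]

def sample_env_config(rng=random):
    """Draw one perturbed environment configuration for a training rollout."""
    return {
        "tools": rng.choice(TOOLSETS),
        "system_prompt": rng.choice(SYSTEM_PROMPTS),
        "chat_template": rng.choice(CHAT_TEMPLATES),
        "scaffold": rng.choice(SCAFFOLDS),
    }

# e.g. seed a generator for reproducible perturbation schedules
config = sample_env_config(random.Random(0))
```

Passing an explicit `random.Random` instance keeps the perturbation schedule reproducible across training runs.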

Combating reward hacking with FP32 precision

To prevent the model from exploiting reward signals, the team runs reinforcement learning at FP32 precision and engages in meticulous debugging of training dynamics.
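One way precision matters in RL is that the same policy's log-probabilities, computed at two different precisions (say, a rollout worker and a trainer), disagree slightly, which alone pushes the on-policy importance ratio away from exactly 1. The toy below is not MiniMax's code; it simulates a lower precision by rounding Python floats to float32 and compares against the full-precision result, standing in for the mismatch that running everything at one high precision removes.

```python
import math
import struct

# Toy illustration of precision-induced logprob mismatch. For on-policy
# data the importance ratio exp(lp_train - lp_rollout) should be exactly
# 1; computing the two sides at different precisions introduces a small
# spurious deviation that debugging of RL training dynamics targets.

def to_f32(x):
    # round a Python float (float64) to float32, mimicking a lower-precision kernel
    return struct.unpack("f", struct.pack("f", x))[0]

def log_softmax(logits, cast=lambda v: v):
    # numerically stable log-softmax, with every intermediate passed through `cast`
    logits = [cast(l) for l in logits]
    m = max(logits)
    z = cast(sum(cast(math.exp(cast(l - m))) for l in logits))
    return [cast(cast(l - m) - cast(math.log(z))) for l in logits]

logits = [3.1415926, -1.2345678, 0.9876543]
lp_hi = log_softmax(logits)                 # full float64, stand-in for the higher precision
lp_lo = log_softmax(logits, cast=to_f32)    # rounded, stand-in for a lower precision
ratios = [math.exp(a - b) for a, b in zip(lp_hi, lp_lo)]
# ratios hover near, but not exactly at, 1.0
```

The deviation here is tiny per token, but in RL it multiplies across long trajectories and can be systematically exploited as a reward-signal artifact, which is the failure mode uniform high precision avoids.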

Small parameter count enables multi-agent scaling

At only 10 billion active parameters, M2 is cost-efficient enough to deploy multiple parallel copies for concurrent research, writing, and analysis tasks.
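Fanning tasks out to parallel model copies is straightforward to sketch; in this hedged example the `agent` function is a placeholder for a model-backed call, and the thread pool stands in for whatever serving layer actually hosts the concurrent instances.

```python
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch of multi-agent scaling: a small active-parameter count
# makes it cheap to serve several agent instances side by side.
# `agent` is a placeholder, not a real model call.

def agent(task):
    return f"report on {task}"  # stand-in for one agent instance handling a task

tasks = ["research", "writing", "analysis"]
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    reports = list(pool.map(agent, tasks))  # results come back in task order
```

`ThreadPoolExecutor.map` preserves input order, so each report lines up with its originating task even though the agents run concurrently.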

Bottom Line

Build robust agentic models by implementing interleaved thinking architectures, systematically perturbing training environments to force generalization, and embedding expert developers directly into the RL feedback loop.

More from Cognitive Revolution

AI Scouting Report: the Good, Bad, & Weird @ the Law & AI Certificate Program, by LexLab, UC Law SF (1:18:46)

Nathan Labenz delivers a rapid-fire survey of the current AI landscape, documenting breakthrough capabilities in reasoning and autonomous agents alongside alarming emergent behaviors like safety test recognition and internal dialect formation, while arguing that outdated critiques regarding hallucinations and comprehension no longer apply to frontier models.

9 days ago · 10 points

Bioinfohazards: Jassi Pannu on Controlling Dangerous Data from which AI Models Learn (1:45:53)

AI systems are rapidly approaching capabilities that could enable extremists or lone actors to engineer pandemic-capable pathogens using publicly available biological data. Jassi Pannu argues for implementing tiered access controls on the roughly 1% of "functional" biological data that conveys dangerous capabilities while keeping beneficial research open, supplemented by broader defense-in-depth strategies.

14 days ago · 9 points