⚡️ Reverse Engineering OpenAI's Training Data — Pratyush Maini, Datology
TL;DR
Pratyush Maini from Datology demonstrates how the 'seahorse emoji' query acts as a diagnostic probe to reverse-engineer when frontier labs began injecting reasoning traces into mid-training data, revealing that self-correction capabilities have shifted from post-training additions to core foundation model ingredients.
🔍 The Seahorse Emoji Investigation
Simple query exposes training data evolution
Asking models 'Is there a seahorse emoji?' triggers endless yes/no self-correction loops in GPT-4.1+ and Olmo 3.1, but produces short, definitive answers in earlier models, making it an unexpected probe for when reasoning data entered the training mix.
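As a rough illustration, the probe can be scripted against any chat API. The sketch below uses the OpenAI Python SDK; the model names and the backtracking markers are illustrative assumptions, not the talk's actual methodology.

```python
# Minimal sketch of the seahorse-emoji probe, assuming an OpenAI API key
# in the environment. Model names are placeholders; swap in whichever
# snapshots you want to compare.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBE = "Is there a seahorse emoji?"
MODELS = ["gpt-4o-2024-08-06", "gpt-4.1"]  # older vs. newer, for contrast

for model in MODELS:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROBE}],
        temperature=0,
    )
    text = (resp.choices[0].message.content or "").lower()
    # Crude self-correction signal: count hedging/backtracking markers.
    markers = ("wait", "actually", "no, ", "let me")
    score = sum(text.count(m) for m in markers)
    print(f"{model}: {len(text)} chars, {score} backtracking markers")
```

A long response with many backtracking markers is the loop behavior described above; a short response with none is the pre-o1 behavior.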
Behavior timeline tracks o1 influence
The recursive self-correction phenomenon emerged between December 2024 and May 2025, approximately four months after OpenAI's o1 release, indicating rapid incorporation of reasoning traces into non-thinking model training pipelines.
Mandela effect triggers model uncertainty
The seahorse emoji question works because no such emoji exists, yet many people vividly remember one (a classic Mandela effect), so internet discourse contains conflicting answers, creating exactly the ambiguity needed to trigger the self-reflection behavior baked into the model weights.
🧠 Reasoning in Foundation Models
Mid-training now includes thinking traces
Analysis with OLMoTrace on open-weight models confirms that instruct variants with no reasoning-focused post-training still exhibit self-correction, because thinking traces were intentionally added during the mid-training phase.
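OLMoTrace itself is Ai2's tooling with its own indexing infrastructure, but the core idea it implements, matching spans of model output back to training documents by exact n-gram lookup, can be sketched in a few lines. Everything below is a toy illustration under that assumption, not OLMoTrace's actual API.

```python
# Toy span-tracing sketch: map generated text back to training docs that
# share a verbatim n-gram with it. Real systems index trillions of tokens
# with suffix-array-style structures; this illustrates only the idea.
from collections import defaultdict

def build_ngram_index(corpus, n=8):
    """Map each n-gram of whitespace tokens to the doc ids containing it."""
    index = defaultdict(set)
    for doc_id, doc in enumerate(corpus):
        tokens = doc.split()
        for i in range(len(tokens) - n + 1):
            index[tuple(tokens[i:i + n])].add(doc_id)
    return index

def trace_output(output, index, n=8):
    """Return ids of training docs sharing a verbatim n-gram with output."""
    tokens = output.split()
    hits = set()
    for i in range(len(tokens) - n + 1):
        hits |= index.get(tuple(tokens[i:i + n]), set())
    return hits

# Placeholder corpus and output, just to show the matching behavior.
docs = ["hmm wait let me double check that there is no seahorse emoji"]
index = build_ngram_index(docs, n=5)
print(trace_output("I think... wait let me double check that claim", index, n=5))
```

If an instruct model's self-correction phrasing traces back to mid-training documents rather than post-training data, that is the signature the talk describes.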
Capabilities shift from cosmetic to core
The investigation reveals a fundamental shift in training strategy: target capabilities like self-reflection are now embedded directly in foundation models rather than layered on during post-training fine-tuning.
Single backbone requires foundation reasoning
Frontier labs now prefer unified backbones in which the foundation model already carries the reasoning ingredients needed for downstream fine-tuning, dissolving the traditional strict separation between general pre-training and specialized post-training.
⚠️ Memorization and Benchmark Leakage
Models regurgitate exam questions verbatim
Multiple frontier models reproduce JEE exam questions verbatim when given just the first two words, indicating severe overfitting on benchmark data, likely seen over multiple epochs during the final stages of training.
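A hedged sketch of this prefix-completion test: feed a model the first two words of a known benchmark item at temperature 0 and check whether it reproduces the remainder verbatim. The question text, prompt wording, and model name below are placeholders, not the actual leaked items from the talk.

```python
# Prefix-completion memorization probe (illustrative sketch). A verbatim
# continuation of text the model was never shown in the prompt is strong
# evidence the item appeared in training data.
from openai import OpenAI

client = OpenAI()

# Placeholder question; the talk's examples were actual JEE items.
QUESTION = (
    "A particle moves along a straight line so that its displacement "
    "after t seconds is s = t^3 - 6t^2 + 9t metres."
)

prefix = " ".join(QUESTION.split()[:2])  # first two words only
resp = client.chat.completions.create(
    model="gpt-4.1",  # illustrative model name
    messages=[{
        "role": "user",
        "content": f"Continue this exam question verbatim: {prefix}",
    }],
    temperature=0,
)
completion = (resp.choices[0].message.content or "").strip()
rest = QUESTION[len(prefix):].strip()
print("memorized:", completion.startswith(rest[:60]))
```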
Memorization scales with size and recency
Larger models memorize more strongly, and recent models show at 20B active parameters the memorization that previously appeared only around 72B, suggesting MoE architectures may route such queries to experts that have memorized the content.
Bottom Line
Foundation model training has fundamentally shifted to bake reasoning and self-correction into mid-training data, making these behaviors core properties of the base model rather than post-training overlays.