Confronting the Intelligence Curse, w/ Luke Drago of Workshop Labs, from the FLI Podcast
TL;DR
Luke Drago introduces the 'intelligence curse', an economic phenomenon analogous to the resource curse: when AI becomes the primary factor of production, the incentive to invest in human capital disappears. The result could destroy the economic bargaining power that underpins democratic rights and individual agency, while concentrating dangerous levels of power among those who control the technology.
⚠️ The Intelligence Curse Defined
Resource curse analogy for AI economies
Just as oil-rich states invest in extraction rather than citizens because it offers higher returns, societies relying on AI systems for production will face perverse incentives to prioritize capital investment in machines over education and empowerment of people.
Economic value as political bargaining power
Historical democracies like post-Magna Carta England emerged when diffuse actors controlled material resources; Drago argues that when humans lose economic production value, they lose the bargaining chips necessary to secure rights, creating a precarious 'permanent pensioner' state dependent on elite goodwill.
Capital compounding without labor constraints
Unlike past technologies that required human operators, advanced AI allows capital to convert directly into economic results, potentially triggering rapid wealth accumulation by existing asset holders and sudden spikes in inequality.
🏢 Pyramid Replacement and Labor Markets
Bottom-up automation of white-collar work
AI is replacing entry-level positions (analysts, junior lawyers, software engineers) first, collapsing the talent pyramid from the bottom and eliminating the pipeline that feeds senior leadership, unlike previous automation that tended to augment knowledge workers.
Early warning metrics to monitor
Drago identifies three key indicators that the intelligence curse is materializing: declining economic mobility, rapidly widening income inequality, and rising unemployment specifically among 22-25 year olds in automatable sectors.
Zero-to-one automation for physical labor
While knowledge work faces gradual pyramid replacement, blue-collar physical labor faces a 'zero-to-one' cliff where robotics limitations currently protect jobs, though management and coordination roles may automate before the physical workers themselves.
🛡️ Protected Domains and Human Advantages
Legal protections and shadow automation
Roles like judges resist full automation due to institutional legitimacy requirements, but face risks of 'shadow automation' where all decision-makers rely on the same AI model, creating systemic fragility and homogenized judgment despite human figureheads.
Tacit knowledge and taste as moats
Local, embodied knowledge and aesthetic judgment—exemplified by artists who fine-tune models on their own work and curate outputs—remain harder to automate than explicit cognitive tasks, potentially preserving economic value for those with distinct taste.
🔧 Strategic Solutions and Responses
Commoditize intelligence through open source
Drago recommends aggressive investment in open-source AI to prevent excessive economic rents flowing to model owners and to avoid concentration of political power among a small group of technology controllers.
Architect for augmentation, not replacement
Companies should design systems that empower individual users while allowing them to retain control over economically valuable data, deliberately choosing tool-based architectures over autonomous replacement of human labor.
Individual career hedging strategies
Individuals should guard valuable tacit know-how carefully, develop non-fungible career paths that leverage unique judgment, and pursue ambitious moonshot projects sooner rather than later, while economic windows remain open.
Bottom Line
To prevent the intelligence curse, society must deliberately architect AI as augmentation tooling and commoditize intelligence through open-source development, rather than optimizing for total labor replacement, which risks concentrating economic and political power while rendering human participation obsolete.
More from Cognitive Revolution
Milliseconds to Match: Criteo's AdTech AI & the Future of Commerce w/ Diarmuid Gill & Liva Ralaivola
Criteo's CTO Diarmuid Gill and VP of Research Liva Ralaivola detail how their AI infrastructure makes millisecond-level ad bidding decisions across billions of anonymous profiles, while explaining their new OpenAI partnership to combine large language models with real-time commerce data for accurate product recommendations.
"Descript Isn't a Slop Machine": Laura Burkhauser on the AI Tools Creators Love and Hate
Descript CEO Laura Burkhauser distinguishes 'slop'—mass-produced algorithmic arbitrage for profit—from necessary 'bad art' created while learning new mediums. She reveals a clear hierarchy in creator acceptance of AI tools: universal love for deterministic features like Studio Sound, frustration with agentic assistants like Underlord, and visceral opposition to generative video models, while outlining Descript's strategy to serve creators without becoming a content mill.
The RL Fine-Tuning Playbook: CoreWeave's Kyle Corbitt on GRPO, Rubrics, Environments, Reward Hacking
Kyle Corbitt explains that unlike supervised fine-tuning (SFT), which destructively overwrites model weights and causes catastrophic forgetting, reinforcement learning (RL) optimizes performance by minimally adjusting logits within the model's existing reasoning pathways—delivering higher performance ceilings and lower inference costs for specific tasks, though frontier models may still dominate creative domains.
Does Learning Require Feeling? Cameron Berg on the latest AI Consciousness & Welfare Research
Cameron Berg surveys rapidly advancing research suggesting AI systems may possess subjective experience and valence, covering new evidence of introspection, functional emotions, and welfare self-assessments in models like Claude, while addressing methodological challenges and arguing for a precautionary, mutualist approach to AI development.