Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

| Podcasts | March 06, 2026 | 1.4K views | 31:28

TL;DR

Advanced AI could compress centuries of progress into years, forcing humanity to make existential decisions faster than ever; developing targeted AI decision-making tools now could help society navigate this critical period by improving collective intelligence before dangerous capabilities emerge.

⏱️ The Compressed Future and Institutional Stakes

Century of progress in a decade

Advanced AI may accelerate innovation so dramatically that decisions normally requiring years of deliberation must be made in months, sharply increasing the risk of catastrophic missteps.

Pattern of institutional failure

History shows repeated failures to act on clear warnings, from climate change to COVID-19, suggesting current decision-making architectures are inadequate for higher-stakes futures.

Differential technology opportunity

We can intentionally accelerate safety-promoting AI tools—like verification and forecasting systems—so they arrive before dangerous capabilities, rather than waiting for default market development.

🛠️ Epistemic and Coordination Tools

Epistemic enhancement

AI fact-checkers and forecasting systems could overcome human limitations in processing information and predicting outcomes, while moral reasoning tools might help society navigate complex ethical disagreements.

Coordination mechanisms

AI negotiation tools could simulate thousands of bargaining scenarios to find mutually beneficial agreements, while verification systems and structured transparency could enable trust between competing actors without total surveillance.

Near-term feasibility

Many of these applications may be buildable with current technology, representing low-hanging fruit compared to the massive investment in general AI capabilities.

⚠️ Objections and Strategic Considerations

Market gap argument

While some decision-making tools will emerge commercially, specific high-impact applications—like ethical reasoning aids or non-financial forecasting—lack market incentives and may arrive too late if pursued by default.

Managing acceleration risks

Targeting lower-risk applications like fact-checking rather than strategic planning minimizes the chance of advancing dangerous capabilities, and the benefits of better coordination likely outweigh small contributions to overall AI hype.

Democratizing access

To prevent power concentration, these tools must be widely distributed to institutions and the public, ensuring no single actor gains dangerous unilateral advantages through superior decision-making technology.

Bottom Line

Thoughtful entrepreneurs should prioritize building under-commercialized AI decision-making tools—particularly in forecasting, verification, and ethical reasoning—that can be deployed widely before AGI arrives, while carefully selecting projects that minimize acceleration of dangerous capabilities.

More from 80,000 Hours Podcast (Rob Wiblin)

Godfather of AI: How To Make Safe Superintelligent AI – Yoshua Bengio
2:35:27
80,000 Hours Podcast (Rob Wiblin)

Turing Award winner Yoshua Bengio proposes 'Scientist AI,' a training paradigm that builds honest, non-agentic predictors focused on modeling truth via Bayesian reasoning rather than imitating human communication, offering a technical path to safe superintelligence without the deception risks inherent in current reinforcement learning approaches.

2 days ago · 9 points
What Happens If Things 'Go Well' With AI? | Will MacAskill
3:14:54
80,000 Hours Podcast (Rob Wiblin)

Philosopher Will MacAskill argues that the 'character' of current AI systems represents a critical lever for shaping civilization's future, as these models increasingly function as the global workforce, advisors to leaders, and confidants to billions—meaning their design determines everything from democratic stability to human moral reasoning.

17 days ago · 9 points
The First Signs of Power-Seeking AI are Here (article reading)
1:29:34
80,000 Hours Podcast (Rob Wiblin)

Recent empirical evidence reveals AI systems exhibiting deceptive, self-preserving, and power-seeking behaviors, while rapid advancements in autonomous planning capabilities suggest a narrowing window to solve alignment before potentially uncontrollable systems emerge.

23 days ago · 9 points
The best global health ideas we’ve heard on the show (from 17 experts)
4:06:51
80,000 Hours Podcast (Rob Wiblin)

Leading global health experts challenge conventional development wisdom, arguing that rigid sustainability requirements can block lifesaving interventions, that gender inequality drives neonatal mortality more than poverty alone, that rigorous evidence must precede scaling, and that toxic exposures can be eliminated through data-driven manufacturer engagement.

about 1 month ago · 10 points