Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

Podcasts | March 06, 2026 | 1.26K views | 31:28

TL;DR

Advanced AI could compress centuries of progress into years, forcing humanity to make existential decisions faster than ever; developing targeted AI decision-making tools now could help society navigate this critical period by improving collective intelligence before dangerous capabilities emerge.

⏱️ The Compressed Future and Institutional Stakes (3 insights)

Century of progress in a decade

Advanced AI may accelerate innovation so dramatically that decisions requiring years of deliberation must be made in months, sharply increasing the risk of catastrophic missteps.

Pattern of institutional failure

History shows repeated failures to act on clear warnings, from climate change to COVID-19, suggesting current decision-making architectures are inadequate for higher-stakes futures.

Differential technology opportunity

We can intentionally accelerate safety-promoting AI tools (like verification and forecasting systems) so they arrive before dangerous capabilities, rather than waiting for the market to develop them by default.

🛠️ Epistemic and Coordination Tools (3 insights)

Epistemic enhancement

AI fact-checkers and forecasting systems could overcome human limitations in processing information and predicting outcomes, while moral reasoning tools might help society navigate complex ethical disagreements.

Coordination mechanisms

AI negotiation tools could simulate thousands of bargaining scenarios to find mutually beneficial agreements, while verification systems and structured transparency could enable trust between competing actors without total surveillance.
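
The episode doesn't specify an implementation, but a toy sketch makes the bargaining-simulation idea concrete. The Python below samples thousands of candidate deals in a two-party negotiation, keeps only those that leave both sides better off than their fallback options, and ranks the survivors by the Nash bargaining product. Every utility function and number here is invented for illustration, not drawn from the episode.

```python
import random

# Toy bargaining search: sample many candidate (price, volume) deals and
# keep those that beat both parties' no-deal fallbacks. All utilities and
# parameters below are hypothetical, purely for illustration.

def utility_a(price, volume):
    # Party A (seller): revenue minus a per-unit cost of 0.5
    return price * volume - 0.5 * volume

def utility_b(price, volume):
    # Party B (buyer): values each unit at 10, pays `price` per unit
    return (10 - price) * volume

FALLBACK_A, FALLBACK_B = 20.0, 15.0  # payoffs each side gets if talks fail

def find_agreements(n_scenarios=10_000, seed=0):
    rng = random.Random(seed)
    viable = []
    for _ in range(n_scenarios):
        price = rng.uniform(1.0, 10.0)
        volume = rng.uniform(1.0, 20.0)
        ua, ub = utility_a(price, volume), utility_b(price, volume)
        if ua > FALLBACK_A and ub > FALLBACK_B:  # mutually beneficial deal
            # Nash bargaining product: multiply each side's gain over fallback
            nash = (ua - FALLBACK_A) * (ub - FALLBACK_B)
            viable.append((nash, price, volume, ua, ub))
    return sorted(viable, reverse=True)  # best deals first

best = find_agreements()[0]
print(f"best deal: price={best[1]:.2f}, volume={best[2]:.2f}, "
      f"payoffs=({best[3]:.1f}, {best[4]:.1f})")
```

A real negotiation tool would model far richer preferences and strategic behavior, but the core loop is the same: cheap simulation of many scenarios to surface agreements neither side would find by haggling over a single dimension.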

Near-term feasibility

Many of these applications may be buildable with current technology, making them low-hanging fruit relative to the massive investment flowing into general AI capabilities.

⚠️ Objections and Strategic Considerations (3 insights)

Market gap argument

While some decision-making tools will emerge commercially, specific high-impact applications, such as ethical reasoning aids or non-financial forecasting, lack market incentives and may arrive too late if left to commercial development alone.

Managing acceleration risks

Targeting lower-risk applications like fact-checking rather than strategic planning reduces the chance of advancing dangerous capabilities, and the benefits of better coordination likely outweigh any small contribution to overall AI hype.

Democratizing access

To prevent power concentration, these tools must be widely distributed to institutions and the public, ensuring no single actor gains dangerous unilateral advantages through superior decision-making technology.

Bottom Line

Thoughtful entrepreneurs should prioritize building under-commercialized AI decision-making tools—particularly in forecasting, verification, and ethical reasoning—that can be deployed widely before AGI arrives, while carefully selecting projects that minimize acceleration of dangerous capabilities.

More from 80,000 Hours Podcast (Rob Wiblin)

A ceasefire in Ukraine won’t make Europe safer · 1:15:36

Samuel Charap argues that a Ukraine ceasefire alone won't reduce the risk of NATO-Russia war and may create a more volatile environment prone to accidental escalation through broken agreements, hybrid warfare, and miscalculation on an expanded NATO border.

about 21 hours ago · 10 points
How AI could let a few people quietly call all the shots · 2:16:47

Rose Hadshar of Forethought explains how advanced AI could enable unprecedented power concentration not through dramatic coups, but via economic dominance and epistemic manipulation, allowing small groups to control millions of loyal AI workers while the general public loses political leverage.

8 days ago · 9 points
AI Won't End Nuclear Deterrence (Probably) · 1:13:19

While advanced AI could theoretically undermine nuclear deterrence by tracking hidden arsenals or disabling command systems, the brutal physics of undersea warfare and inevitable move-countermove dynamics make the complete erosion of secure second-strike capabilities unlikely, preserving the 'balance of nerves' that limits great power coercion.

15 days ago · 8 points
Claude Thinks It's Italian American. What Does That Say About Consciousness? · 3:32:59

Robert Long argues that while factory farming offers a cautionary tale about exploiting non-human minds, AI welfare requires distinct ethical frameworks because we design AI desires rather than discovering them, creating unique tensions between ensuring safety through alignment and granting AI systems autonomy to flourish independently.

22 days ago · 7 points