AI, Cyber & Systemic Risk: Securing the Digital Frontline
TL;DR
Nicole Perlroth explains how AI is collapsing the barrier to entry for sophisticated cyberattacks by automating zero-day discovery and ransomware operations. She also warns that startups recklessly adopting AI coding tools are expanding their attack surfaces with insecure code that fails basic security standards.
🎯 The AI Attack Revolution
Zero-day discovery accelerated to sub-second speeds
AI has reduced the time to discover and exploit zero-day vulnerabilities from months or years to sub-second speeds for some attack vectors, democratizing capabilities previously limited to elite government agencies like the NSA or Israel's Unit 8200.
Fully automated ransomware kill chains
Attackers now use LLMs to automate the entire ransomware process—from identifying critical business assets and encryption strategies to conducting payment negotiations via AI chatbots trained for psychological pressure.
State-sponsored exploit market remains lucrative
Saudi Arabia currently pays up to $10 million for high-quality iOS zero-day exploits, but AI tools like Expo are now topping hacker leaderboards by finding vulnerabilities faster than human experts.
🛡️ Defense Playing Catch-Up
AI-powered continuous monitoring
New defensive tools use AI agents to provide 24/7 surveillance of security gaps, automate patching, and triage alerts around the clock, preventing incidents like the Target breach, where a critical alert went unactioned during a handoff between teams in different time zones.
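At its core, continuous monitoring means flagging metrics that deviate sharply from a rolling baseline instead of waiting for a human to read a dashboard. A minimal sketch of that idea (hypothetical data and thresholds, not any vendor's product):

```python
import statistics

def flag_anomalies(counts, window=7, z_threshold=3.0):
    """Flag points far outside the trailing window's baseline.

    counts: per-hour event counts (e.g. failed logins).
    Returns indices whose z-score against the prior window exceeds the threshold.
    """
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero on flat baselines
        if (counts[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# A quiet baseline followed by a sudden burst of failed logins.
hourly_failed_logins = [3, 4, 2, 5, 3, 4, 3, 48]
print(flag_anomalies(hourly_failed_logins))  # → [7]: the burst is flagged
```

Real systems layer an AI triage step on top of a detector like this, deciding which flagged events deserve a human wake-up call.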
Automated third-party risk assessment
AI agents now conduct continuous third-party vendor security assessments against NIST standards rather than annual paperwork compliance checklists, addressing critical labor shortages in cybersecurity.
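Continuous assessment amounts to repeatedly scoring each vendor against a control checklist rather than filing an annual form. A toy illustration (the control names are hypothetical, loosely echoing NIST CSF function areas, not an actual assessment tool):

```python
# Hypothetical control checklist loosely echoing NIST CSF function areas.
REQUIRED_CONTROLS = {
    "identify": ["asset_inventory"],
    "protect": ["mfa_enforced", "encryption_at_rest"],
    "detect": ["log_monitoring"],
    "respond": ["incident_response_plan"],
}

def assess_vendor(evidence):
    """Score a vendor's attested controls (dict of control -> bool) and list the gaps."""
    gaps = [c for controls in REQUIRED_CONTROLS.values()
            for c in controls if not evidence.get(c, False)]
    total = sum(len(v) for v in REQUIRED_CONTROLS.values())
    return {"score": round(100 * (total - len(gaps)) / total), "gaps": gaps}

vendor = {"asset_inventory": True, "mfa_enforced": True,
          "encryption_at_rest": False, "log_monitoring": True,
          "incident_response_plan": True}
print(assess_vendor(vendor))  # → {'score': 80, 'gaps': ['encryption_at_rest']}
```

The AI-agent version replaces the hand-filled `evidence` dict with evidence gathered continuously from the vendor's actual systems.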
Offense maintains first-mover advantage
Despite defensive innovations in deepfake detection and automated patching, attackers currently retain the advantage across all vectors including social engineering and automated vulnerability scanning.
⚠️ The Founder Security Crisis
AI-generated code fails security standards
A Veracode study found LLM-generated code scored only 55 out of 100—an F grade—on secure coding standards, yet founders increasingly rely on 'vibe coding' without security review.
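The flaws such studies flag are often elementary. A hypothetical example of one classic pattern a security review catches, string-built SQL, next to the parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Typical insecure pattern: interpolating input directly into SQL.
    # A username like "x' OR '1'='1" makes the WHERE clause match every row.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data, not SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # → 2: every user leaks
print(len(find_user_safe(conn, payload)))    # → 0: no match
```

Nothing here requires deep expertise; it requires someone actually reviewing the generated code before it ships.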
Attack surface expands exponentially
Every line of AI-generated code widens potential attack surfaces, with bad actors now capable of discovering and exploiting vulnerabilities in sub-second timeframes using automated scanning tools.
Security basics are non-negotiable
Founders must implement multifactor authentication, anomalous behavior monitoring, and secure coding practices regardless of speed-to-market pressures, as AI eliminates margin for human error in defense.
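Of those basics, MFA is the most mechanical to adopt. As a sketch of what a TOTP second factor involves under the hood (per RFC 6238, SHA-1 variant, 30-second steps; production code should use a maintained library rather than hand-rolling this):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = for_time // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, now=None) -> bool:
    """Accept the current code or its immediate neighbors to tolerate clock drift."""
    now = int(time.time()) if now is None else now
    return any(hmac.compare_digest(totp(secret, now + d * 30), submitted)
               for d in (-1, 0, 1))

# RFC 6238 test secret; at T=59 the 6-digit SHA-1 code is 287082.
print(totp(b"12345678901234567890", 59))  # → 287082
```

The constant-time comparison (`hmac.compare_digest`) and the narrow drift window are the details that hand-rolled implementations most often get wrong.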
Bottom Line
As AI eliminates technical barriers for attackers while producing insecure code at scale, founders must treat security hygiene—secure coding reviews, MFA, and continuous monitoring—as existential priorities rather than afterthoughts, because automated exploitation now happens faster than human response times.
Stanford GSB professors Anat Admati and Amit Seru examine the Federal Reserve's evolution from a narrow monetary authority into an interventionist economic powerhouse, warning that mission creep, regulatory failures, and blurred lines between liquidity and solvency crises now threaten the central bank's independence and credibility as Kevin Warsh potentially takes leadership.