Stanford CS547 HCI Seminar | Winter 2026 | What's Up with AI?
TL;DR
Veteran AI researcher Terry Winograd argues that rather than focusing on apocalyptic futures or utopian promises, we should recognize AI as an accelerant of existing social problems—particularly in employment, resource allocation, and information integrity—that demand immediate societal attention.
🔁 Historical Parallels and Hype Cycles
Current debates echo decades-old predictions
Winograd notes that both utopian visions (Dario Amodei's 'Machines of Loving Grace') and doomer predictions (Marvin Minsky's 1970 forecast of superintelligence within eight years) mirror a historical pattern in which society ignores long-term side effects until they become crises, much as early automobile enthusiasts dismissed concerns about pollution.
AI acts as a social accelerant
Rather than creating novel problems, contemporary AI amplifies pre-existing tensions around labor, resources, and truth, making the social context of technological deployment more critical than the technical capabilities themselves.
💼 Labor Market and Economic Disruption
White-collar automation is immediate
Citing recent Stanford graduate employment data and Anthropic's Dario Amodei, Winograd warns that AI is rapidly eliminating entry-level software engineering positions, not just routine data entry, fundamentally altering career paths for current students.
Inequality over efficiency
Quoting Geoffrey Hinton, Winograd emphasizes that AI deployment primarily benefits wealthy owners through workforce replacement and profit concentration, while creating massive unemployment without addressing the social and psychological functions of work.
Political backlash is emerging
Data center construction has become a rallying cry for 2026 political candidates, signaling voter anger about AI's resource consumption and economic displacement that transcends technical debates about energy efficiency.
🎭 Erosion of Information Integrity
Fakes become frictionless
While manipulated media existed before AI (such as a doctored 2004 photo of John Kerry), modern tools make synthetic content so cheap and accessible that 'AI slop,' low-quality algorithmically generated misinformation, threatens to collapse shared reality and trust in visual evidence.
The business model of unreality
Winograd cites industry insiders admitting that releasing deepfake tools constitutes the 'right business decision' despite societal harm, using competitive pressure ('if we don't, China will') to justify the erosion of epistemic trust.
Algorithmic authority replaces judgment
He shares a personal example where Stanford's automated systems falsely listed him in a maternal health institute due to grant association algorithms, illustrating how AI systems generate authoritative-sounding but nonsensical information without human oversight.
Bottom Line
We must shift focus from speculative superintelligence to regulating AI's immediate amplification of inequality, unemployment, and misinformation before these accelerated crises destabilize democratic and economic institutions.
More from Stanford Online
Stanford CS221 | Autumn 2025 | Lecture 20: Fireside Chat, Conclusion
Percy Liang reflects on AI's transformation from academic curiosity to global infrastructure, debunking sci-fi misconceptions about capabilities while arguing that academia's role in long-term research and critical evaluation remains essential as the job market shifts away from traditional entry-level software engineering.
Stanford CS221 | Autumn 2025 | Lecture 19: AI Supply Chains
This lecture examines AI's economic impact through the lens of supply chains and organizational strategy, demonstrating why understanding compute monopolies, labor market shifts, and corporate decision-making is as critical as tracking algorithmic capabilities.
Stanford CS221 | Autumn 2025 | Lecture 18: AI & Society
This lecture argues that AI developers bear unique ethical responsibility for societal outcomes, framing AI as a dual-use technology that requires active steering toward beneficial applications while preventing misuse and accidental harms through rigorous auditing and an ecosystem-aware approach.
Stanford CS221 | Autumn 2025 | Lecture 17: Language Models
This lecture introduces modern language models as industrial-scale systems requiring millions of dollars and trillions of tokens to train, explaining their fundamental operation as auto-regressive next-token predictors that encode language structure through massive statistical modeling.