Stanford CS547 HCI Seminar | Winter 2026 | What's Up with AI?
TL;DR
Veteran AI researcher Terry Winograd argues that rather than focusing on apocalyptic futures or utopian promises, we should recognize AI as an accelerant of existing social problems—particularly in employment, resource allocation, and information integrity—that demand immediate societal attention.
🔁 Historical Parallels and Hype Cycles
Current debates echo decades-old predictions
Winograd notes that today's forecasts mirror historical patterns. Utopian visions like Dario Amodei's 'Machines of Loving Grace' and overconfident predictions such as Marvin Minsky's 1970 forecast of human-level machine intelligence within eight years both follow a familiar cycle: society ignores a technology's long-term side effects until they become crises, much as early automobile enthusiasts dismissed concerns about pollution.
AI acts as a social accelerant
Rather than creating novel problems, contemporary AI amplifies pre-existing tensions around labor, resources, and truth, making the social context of technological deployment more critical than the technical capabilities themselves.
💼 Labor Market and Economic Disruption
White-collar automation is immediate
Citing recent employment data for Stanford graduates and Anthropic CEO Dario Amodei, Winograd warns that AI is rapidly eliminating entry-level software engineering positions, not just routine data-entry jobs, fundamentally altering career paths for current students.
Inequality over efficiency
Quoting Geoffrey Hinton, Winograd emphasizes that AI deployment primarily benefits wealthy owners through workforce replacement and profit concentration, while creating massive unemployment without addressing the social and psychological functions of work.
Political backlash is emerging
Data center construction has become a rallying cry for 2026 political candidates, signaling voter anger about AI's resource consumption and economic displacement that transcends technical debates about energy efficiency.
🎭 Erosion of Information Integrity
Fakes become frictionless
Manipulated media predates AI, as in a doctored 2004 photo of John Kerry, but modern tools make synthetic content so cheap and accessible that 'AI slop,' low-quality algorithmic misinformation, threatens to collapse shared reality and trust in visual evidence.
The business model of unreality
Winograd cites industry insiders admitting that releasing deepfake tools constitutes the 'right business decision' despite societal harm, using competitive pressure ('if we don't, China will') to justify the erosion of epistemic trust.
Algorithmic authority replaces judgment
He shares a personal example where Stanford's automated systems falsely listed him in a maternal health institute due to grant association algorithms, illustrating how AI systems generate authoritative-sounding but nonsensical information without human oversight.
Bottom Line
We must shift focus from speculative superintelligence to regulating AI's immediate amplification of inequality, unemployment, and misinformation before these accelerated crises destabilize democratic and economic institutions.