Stanford CS547 HCI Seminar | Spring 2026 | Observing the User Experience in 2026
TL;DR
Mike Kuniavsky and Elizabeth Goodman examine how AI has revolutionized UX research: it automates traditional methods while simultaneously creating an 'authenticity crisis' of synthetic users and widespread participant fraud. They argue that maintaining 'ground truth' through direct human contact remains essential for valid insights and organizational influence.
🤖 AI Automation of Research Methods (3 insights)
Ubiquitous free transcription and analysis
Tasks like transcription, coding, translation, and video editing, which required significant manual effort in previous editions, are now automated and freely available through AI tools.
Automated drafting of research instruments
Large language models can now produce first drafts of discussion guides, survey questions, and preliminary analysis that previously took researchers hours to create.
Hyperscaled fraud in participant recruitment
The same automation enables bad actors to fake survey responses, user interviews, and participant identities at unprecedented scale using AI-generated personas.
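To make the fraud point concrete, here is a minimal, hypothetical sketch (not from the talk or the book) of the kind of heuristic screening a research team might run over survey submissions before human verification; the Response fields, thresholds, and flag names are all assumptions for illustration.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Response:
    """One survey submission (hypothetical fields, not from the talk)."""
    participant_id: str
    completion_seconds: float
    free_text: str
    ip_address: str


def flag_suspicious(responses: list[Response],
                    min_seconds: float = 60.0,
                    max_per_ip: int = 3) -> dict[str, list[str]]:
    """Return participant_id -> heuristic fraud flags for manual review."""
    flags: dict[str, list[str]] = {r.participant_id: [] for r in responses}

    # Pre-count duplicate answers and shared IPs across the whole batch.
    text_counts = Counter(r.free_text.strip().lower() for r in responses)
    ip_counts = Counter(r.ip_address for r in responses)

    for r in responses:
        # 1. Implausibly fast completion suggests a bot or an inattentive click farm.
        if r.completion_seconds < min_seconds:
            flags[r.participant_id].append("too_fast")
        # 2. Identical open-ended answers suggest copy-paste or one script behind many IDs.
        if text_counts[r.free_text.strip().lower()] > 1:
            flags[r.participant_id].append("duplicate_text")
        # 3. Many submissions from one IP address suggest a single operator.
        if ip_counts[r.ip_address] > max_per_ip:
            flags[r.participant_id].append("shared_ip")

    return {pid: reasons for pid, reasons in flags.items() if reasons}
```

Heuristics like these only surface candidates for review; as the speakers argue, confirming that a participant is real still requires direct human contact.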
⚠️ The Authenticity Crisis (3 insights)
Four quadrants of research reality
Research now sits on two axes, real versus fake users and real versus fake researchers, with startups operating in all four quadrants, including fully synthetic studies.
Anti-aliased reality from synthetic users
AI personas provide smoothed approximations of user behavior that are guaranteed to be wrong in specific details and miss critical edge cases.
Invisible populations behind digital walls
LLMs lack training data on important populations, such as healthcare policy specialists working behind government firewalls and non-English-speaking communities, creating dangerous blind spots.
🎯 Ground Truth and Verification (2 insights)
Zero trust protocols for participant screening
Researchers must continuously verify participants through probing questions about current events and specific details, and should delay payment until authenticity is confirmed.
Essential role of ethnographic validation
Traditional in-person research, visiting users' homes and observing real contexts, remains necessary to validate AI-generated insights and to capture the stories that drive organizational decisions.
Bottom Line
Use AI tools for research efficiency and automation, but regularly validate findings through direct ethnographic contact with real users to ensure accuracy and maintain organizational credibility.
More from Stanford Online
Stanford CS153 Frontier Systems | Anjney Midha from AMP PBC on Frontier Systems
Anjney Midha frames the current AI landscape as 'the great transition,' where industrial-scale model training meets a complete restructuring of the eight-layer infrastructure stack, while arguing that relationships and obsessions remain the ultimate asymmetric advantages for founders against entrenched incumbents.
Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 9: Scaling Laws
This lecture introduces scaling laws as predictive power-law relationships that enable practitioners to optimize language model training on small budgets and confidently extrapolate performance to million-dollar large-scale runs, while tracing these empirical patterns back to classical machine learning theory and sample complexity research from the 1990s.
Stanford Robotics Seminar ENGR319 | Spring 2026 | Ingredients for Long-Horizon Robot Autonomy
A researcher from Physical Intelligence argues that while robots now excel at short, dexterous tasks, true utility requires long-horizon autonomy for complex jobs like cleaning apartments or assembling server racks. The talk introduces MEM (Multiscale Embodied Memory), a system that uses compressed visual and linguistic memory to solve the latency and distribution shift problems that have historically prevented robots from tracking progress over extended time periods.
Stanford CS336 Language Modeling from Scratch | Spring 2026 | Lecture 8: Parallelism
This lecture details how to scale language model training across massive clusters using 4D parallelism, contrasting TPU and GPU networking architectures while addressing the critical memory bottlenecks—particularly optimizer states—that dominate training costs at scale.