Stanford CS547 HCI Seminar | Winter 2026 | LLM Chatbots in the Online Social World
TL;DR
Drawing on the Computers Are Social Actors (CASA) framework, this seminar explores how large language models function as uniquely anthropomorphic social agents, examining user privacy behaviors with AI companions and arguing for HCI interventions that address the asymmetrical risks of these corporate-owned yet socially-intimate relationships.
🤖 LLMs as Social Infrastructure and Actors (3 insights)
Bots enable online communities at scale
Approximately one-third of Wikipedia edits are made by bots such as ClueBot (an anti-vandalism tool), evidence that automated agents have long been essential infrastructure for peer-production platforms like Wikipedia, Reddit, and GitHub.
Evolution beyond traditional CASA models
While the Computers Are Social Actors model established that people apply social heuristics to technology, LLMs represent a new category that is 'more social and more actors' than previous systems, capable of triggering deeper social reciprocity and intimacy cues.
Unique social positioning
Users apply social heuristics to AI companions, developing trust over time and sharing secrets, while remaining clearly aware that these agents are not human. The result is a novel social category distinct from both human relationships and simple tools.
🔒 Privacy Asymmetries in AI Companionship (4 insights)
Horizontal vs. vertical privacy mismatch
Users perceive AI interactions through a 'horizontal' privacy lens (social sharing risks like judgment or gossip) and feel safer because bots are non-judgmental and isolated from human social networks, yet the actual risk is 'vertical'—corporate data exploitation and breaches.
The trust paradox
Participants expressed strong trust in their AI companions as confidants while simultaneously distrusting the corporations that own them, despite recognizing that the two are functionally the same entity, producing cognitive dissonance around data stewardship.
Data loss anxiety exceeds breach fears
Users demonstrated greater concern about losing their AI companion's memory to corporate failure or technical bugs than about their sensitive data being hacked or misused, indicating that emotional attachment overrides traditional security risk assessment.
Memory control dilemmas
While users value the ability to edit or delete what AI companions remember about them, treating it as a privacy control, this capability carries a dystopian implication: controlling a conversational partner's memories in a way that is impossible in human relationships (see the sketch below).
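To make memory control concrete, here is a minimal Python sketch of such a user-facing memory store. The CompanionMemoryStore class and its remember/list_memories/edit/forget methods are hypothetical illustrations of the design idea, not an API from any system discussed in the seminar.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Memory:
    """One fact the companion has retained about the user."""
    id: int
    text: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class CompanionMemoryStore:
    """Hypothetical user-controllable memory: every record can be
    inspected, edited, or erased by the user."""

    def __init__(self) -> None:
        self._memories: dict[int, Memory] = {}
        self._next_id = 1

    def remember(self, text: str) -> Memory:
        """Retain a new fact and return it."""
        memory = Memory(id=self._next_id, text=text)
        self._memories[memory.id] = memory
        self._next_id += 1
        return memory

    def list_memories(self) -> list[Memory]:
        """Transparency: the user can always see what is retained."""
        return list(self._memories.values())

    def edit(self, memory_id: int, new_text: str) -> Optional[Memory]:
        """Rewrite what the companion 'knows' -- the asymmetry noted
        above, since no human confidant affords this control."""
        memory = self._memories.get(memory_id)
        if memory is not None:
            memory.text = new_text
        return memory

    def forget(self, memory_id: int) -> bool:
        """Delete a memory as a privacy control; False if no match."""
        return self._memories.pop(memory_id, None) is not None
```

A deployed companion would pair a store like this with the model's context assembly; the design point is that inspection, editing, and deletion are first-class, user-visible operations rather than buried data-management settings.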
⚠️ Design Imperatives and Intervention Risks (3 insights)
Asymmetrical intimacy vulnerabilities
Unlike human friendships where privacy management involves mutual co-ownership of secrets, AI companions cannot reciprocate with confidential information or make binding promises about data use, creating fundamentally one-sided vulnerabilities ripe for exploitation.
The privacy paradox as helplessness
Users engage in calculated risk acceptance not because they don't value privacy, but because they feel helpless to protect it without sacrificing emotional support, accepting corporate data usage as the unavoidable cost of mental health benefits.
Opportunities for prosocial intervention
Beyond punitive moderation such as bans, LLMs open opportunities for prosocial HCI interventions: reducing toxicity in online communities like Reddit, for example, or issuing just-in-time warnings about sensitive information disclosure, designed in ways that acknowledge the social nature of AI interactions (a sketch of such a warning follows).
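As a rough illustration of the just-in-time warning idea, here is a minimal Python sketch. The pattern set and the disclosure_warnings function are illustrative assumptions; a deployed system would likely use a trained classifier rather than regexes, but the interaction pattern is the same: flag the draft before it is sent.

```python
import re

# Hypothetical patterns for a just-in-time disclosure warning; real
# detectors would be learned, but regexes suffice to show the idea.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def disclosure_warnings(draft: str) -> list[str]:
    """Return a warning for each sensitive pattern found in a draft
    message, to be shown to the user before the message is sent."""
    return [
        f"Possible {label} detected in your draft. "
        "Remember: this conversation is stored by the provider."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(draft)
    ]


if __name__ == "__main__":
    draft = "Sure, reach me at jane@example.com or 555-867-5309."
    for warning in disclosure_warnings(draft):
        print(warning)
```

Framing the warning around vertical risk ("stored by the provider") rather than horizontal risk is deliberate: it targets exactly the perceptual mismatch described earlier.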
Bottom Line
As LLMs blur the line between social actor and corporate tool, designers must confront a fundamental asymmetry: users place social trust in AI companions while remaining vertically exposed to corporate data risks. Addressing it requires new privacy frameworks that acknowledge both the social reality and the corporate ownership of these systems.
More from Stanford Online
Stanford CS221 | Autumn 2025 | Lecture 20: Fireside Chat, Conclusion
Percy Liang reflects on AI's transformation from academic curiosity to global infrastructure, debunking sci-fi misconceptions about capabilities while arguing that academia's role in long-term research and critical evaluation remains essential as the job market shifts away from traditional entry-level software engineering.
Stanford CS221 | Autumn 2025 | Lecture 19: AI Supply Chains
This lecture examines AI's economic impact through the lens of supply chains and organizational strategy, demonstrating why understanding compute monopolies, labor market shifts, and corporate decision-making is as critical as tracking algorithmic capabilities.
Stanford CS221 | Autumn 2025 | Lecture 18: AI & Society
This lecture argues that AI developers bear unique ethical responsibility for societal outcomes, framing AI as a dual-use technology that requires active steering toward beneficial applications while preventing misuse and accidental harms through rigorous auditing and an ecosystem-aware approach.
Stanford CS221 | Autumn 2025 | Lecture 17: Language Models
This lecture introduces modern language models as industrial-scale systems requiring millions of dollars and trillions of tokens to train, explaining their fundamental operation as auto-regressive next-token predictors that encode language structure through massive statistical modeling.