Stanford CS547 HCI Seminar | Winter 2026 | LLM Chatbots in the Online Social World

| Podcasts | January 15, 2026 | 4.95K views | 57:15

TL;DR

Drawing on the Computers Are Social Actors (CASA) framework, this seminar examines how large language models function as uniquely anthropomorphic social agents, analyzing user privacy behaviors with AI companions and arguing for HCI interventions that address the asymmetrical risks of these corporate-owned yet socially intimate relationships.

🤖 LLMs as Social Infrastructure and Actors (3 insights)

Bots enable online communities at scale

Approximately one-third of Wikipedia edits are performed by bots such as ClueBot (an anti-vandalism tool), demonstrating that automated agents have long been essential infrastructure enabling peer-production platforms like Wikipedia, Reddit, and GitHub to function.

Evolution beyond traditional CASA models

While the Computers Are Social Actors model established that people apply social heuristics to technology, LLMs represent a new category that is 'more social and more actors' than previous systems, capable of triggering deeper social reciprocity and intimacy cues.

Unique social positioning

Users apply social heuristics to AI companions, building trust over time and confiding secrets, while maintaining clear boundaries that these are not human, creating a novel social category distinct from both human relationships and simple tools.

🔒 Privacy Asymmetries in AI Companionship (4 insights)

Horizontal vs. vertical privacy mismatch

Users perceive AI interactions through a 'horizontal' privacy lens (social sharing risks such as judgment or gossip) and feel safer because bots are non-judgmental and isolated from their human social networks; the actual risk, however, is 'vertical': corporate data exploitation and breaches.

The trust paradox

Participants expressed strong trust in their AI companions as confidants while simultaneously distrusting the corporations that own them, despite recognizing that the two are functionally the same entity, creating cognitive dissonance around data stewardship.

Data loss anxiety exceeds breach fears

Users demonstrated greater concern about losing their AI companion's memory through corporate failure or technical bugs than about sensitive data being hacked or misused, indicating that emotional attachment overrides traditional security risk assessment.

Memory control dilemmas

While users value the ability to edit or delete what AI companions remember about them, treating it as a privacy control, this capability carries dystopian implications: controlling a conversational partner's memories in ways impossible in human relationships (see the sketch below).
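
To make the dilemma concrete, here is a minimal sketch of user-facing memory controls, assuming a simple in-process key-value store; the class and method names are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of user-facing memory controls; real
# companion apps store memories server-side under the operator's control.
@dataclass
class CompanionMemory:
    facts: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value

    def edit(self, key: str, value: str) -> None:
        # The user rewrites what the companion "knows" about them.
        if key in self.facts:
            self.facts[key] = value

    def forget(self, key: str) -> None:
        # Unilateral erasure: a privacy control with no human analogue.
        self.facts.pop(key, None)

memory = CompanionMemory()
memory.remember("diagnosis", "shared during a late-night chat")
memory.forget("diagnosis")  # gone from this store, though server copies may persist
```

The asymmetry the seminar highlights is visible in the last line: the user can erase the companion's memory on demand, while the operator's own retention of the same data remains outside the user's reach.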

⚠️ Design Imperatives and Intervention Risks (3 insights)

Asymmetrical intimacy vulnerabilities

Unlike human friendships where privacy management involves mutual co-ownership of secrets, AI companions cannot reciprocate with confidential information or make binding promises about data use, creating fundamentally one-sided vulnerabilities ripe for exploitation.

The privacy paradox as helplessness

Users engage in calculated risk acceptance not because they don't value privacy, but because they feel helpless to protect it without sacrificing emotional support, accepting corporate data usage as the unavoidable cost of mental health benefits.

Opportunities for prosocial intervention

Beyond punitive moderation such as bans, LLMs offer opportunities for HCI interventions that acknowledge the social nature of AI interactions, such as reducing toxicity in online communities (e.g., Reddit) and issuing just-in-time warnings before sensitive information is disclosed (sketched below).
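
As one illustration of such a just-in-time warning, the sketch below flags sensitive patterns in a draft message before it is sent. This is a minimal sketch under stated assumptions: a deployed system would use a trained classifier rather than regexes, and every name here is hypothetical.

```python
import re

# Illustrative patterns only; a real detector would be a trained model.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "health disclosure": re.compile(r"\b(diagnos\w*|medication|therapist)\b", re.I),
}

def disclosure_warnings(draft: str) -> list[str]:
    """Return just-in-time warnings for sensitive content in a draft message."""
    return [
        f"This message appears to contain a {label}. "
        "It will be stored by the companion's operator. Send anyway?"
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(draft)
    ]

for warning in disclosure_warnings("My therapist suggested I call 555-867-5309."):
    print(warning)
```

The framing of the warning is the design choice that matters: it surfaces the vertical (corporate) risk at exactly the moment users are reasoning in horizontal (social) terms.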

Bottom Line

As LLMs blur the line between social actor and corporate tool, designers must address a fundamental asymmetry: users socially trust AI companions while remaining vertically exposed to corporate data risks. New privacy frameworks are needed that acknowledge both the social reality and the corporate ownership of these systems.
