Stanford CS547 HCI Seminar | Winter 2026 | Does GenAI Work in Education?

Podcasts | February 13, 2026 | 5.39K views | 56:54

TL;DR

This seminar argues that GenAI's effectiveness in education hinges on "knowledge engineering": the systematic mapping of expert cognitive processes into high-fidelity rubrics that enable personalized feedback. A randomized trial shows that TAs using AI suggestions grounded in detailed reasoning rubrics produced significantly better student learning outcomes than human-only feedback.

📊 The Mixed Reality of GenAI in Education

Widespread adoption meets uncertain outcomes

While 70-85% of college students use GenAI for homework and hundreds of EdTech startups have emerged, studies show that learning gains can evaporate once AI access is removed, and that brain activity decreases during AI-assisted writing tasks.

Variable quality in AI tutoring

Research indicates that 30-50% of AI-generated tutoring hints have quality issues, yet carefully designed systems have outperformed active learning in college physics and improved K-12 math tutoring by giving tutors real-time suggestions.

🧠 Knowledge Engineering & Cognitive Fidelity

Solving Bloom's Two Sigma Challenge

Cognitive tutors from the 1990s achieved two standard deviation gains over conventional instruction by encoding expert production rules, demonstrating that AI must model human reasoning processes rather than just generate answers.

Granular decomposition enables precision

Breaking skills into specific steps and anticipating precise misconceptions—such as distinguishing between common and least common denominators—allows AI to provide targeted feedback and deliberate practice instead of generic hints.
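The denominator example above can be pictured as a minimal misconception check. This is an illustrative sketch with hypothetical names, not the actual rule encoding used by cognitive tutors: it classifies a student's chosen denominator for a/b + c/d as correct (least common denominator), a recognized misconception (common but not least), or simply wrong.

```python
from math import gcd

def diagnose_fraction_addition(a, b, c, d, student_denominator):
    """Given a/b + c/d, classify the denominator a student chose.

    Distinguishes the least common denominator (correct) from a
    merely common denominator such as b*d (a known misconception),
    so feedback can target the precise error rather than just mark
    the answer wrong.
    """
    lcd = b * d // gcd(b, d)  # least common denominator of b and d
    if student_denominator == lcd:
        return "correct: least common denominator"
    if student_denominator % b == 0 and student_denominator % d == 0:
        return "common denominator, but not the least one"
    return "not a common denominator"

# 1/4 + 1/6: the LCD is 12; 24 is common but not least
print(diagnose_fraction_addition(1, 4, 1, 6, 12))  # correct: least common denominator
print(diagnose_fraction_addition(1, 4, 1, 6, 24))  # common denominator, but not the least one
```

Anticipating the second branch explicitly is what lets a tutor respond with "that denominator works, but is it the smallest one?" instead of a generic hint.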

📝 Case Study: Feedback Writer Trial

Significant improvement in writing quality

In a randomized trial with 360 Econ 101 students, AI-mediated feedback produced revised essays with an effect size of 0.5 relative to human-only feedback, roughly equivalent to moving a student from the 50th to the 70th percentile.
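The percentile claim follows from the standard normal CDF: under normal assumptions, an effect size of d = 0.5 moves the median student to Φ(0.5) ≈ the 69th percentile of the control distribution, consistent with the seminar's "about the 70th" framing. A quick check using only the standard library:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Cohen's d = 0.5: the treated median lands at this control percentile
percentile = 100 * normal_cdf(0.5)
print(round(percentile, 1))  # 69.1
```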

High-fidelity rubrics drive success

The system used checklists representing expert reasoning steps, such as identifying the specific decision-makers harmed by externalities, to generate concrete hints without revealing answers; AI judges matched expert quality ratings 85% of the time.
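One way to picture such a rubric is as a checklist of expert reasoning steps, each paired with a hint that nudges the student toward the step without giving the answer away. All names and rubric text below are hypothetical illustrations, not the trial's actual system:

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    step: str  # expert reasoning step the essay should exhibit
    hint: str  # concrete nudge that does not reveal the answer

# Hypothetical externalities rubric, in the spirit of the trial
externality_rubric = [
    RubricItem(
        step="names the specific decision-maker who imposes the cost",
        hint="Who actually chooses the output level here, and what cost do they ignore?",
    ),
    RubricItem(
        step="identifies the third party harmed by the externality",
        hint="Which group bears a cost without being part of the transaction?",
    ),
]

def pending_hints(satisfied_steps, rubric):
    """Return hints for rubric steps the draft has not yet satisfied."""
    return [item.hint for item in rubric if item.step not in satisfied_steps]

# A draft that covered the first step gets only the remaining hint
for hint in pending_hints({externality_rubric[0].step}, externality_rubric):
    print(hint)
```

Because feedback is keyed to reasoning steps rather than to the final grade, the same rubric supports both the hint generator and a judge that checks which steps a revision now satisfies.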

Bottom Line

To make GenAI work in education, developers must engineer detailed cognitive models that externalize expert reasoning into high-fidelity rubrics, enabling targeted feedback on learning processes rather than just evaluating final outputs.
