Explore

Browse expert video summaries, filter by category, or search for topics.

WHY WE DIE: Author Livestream and Q&A with Venki Ramakrishnan #SciFriBookClub
55:16
MIT Technology Review

Nobel laureate Venki Ramakrishnan argues that while molecular biology has revealed specific mechanisms driving aging, extending maximum human lifespan remains uncertain and raises profound ethical concerns about inequality; the priority should be extending healthspan rather than pursuing radical longevity.

4 months ago · 9 points
Why Apple Just Gave Up on AI
13:10
ColdFusion

Apple is paying Google $1 billion annually to power Siri with a custom Gemini model after years of embarrassing delays and internal dysfunction, raising serious questions about whether massive investments in proprietary AI infrastructure are necessary when companies can simply lease commoditized large language models.

5 months ago · 10 points
Are We Really Ready for AI Coding?
21:55
ColdFusion

Vibe coding—building software through natural language prompts rather than manual programming—has sparked a multi-billion dollar industry enabling non-developers to launch apps in minutes, yet faces severe growing pains including catastrophic AI errors, unpredictable outputs, and a brewing crisis in developer satisfaction and economic sustainability.

5 months ago · 10 points
Why Laplace transforms are so useful
23:05
3Blue1Brown

Laplace transforms convert differential equations into algebraic expressions on the complex s-plane, enabling analysis of dynamic systems—such as driven harmonic oscillators—by examining pole locations to distinguish transient decay from steady-state behavior without solving full time-domain equations.

5 months ago · 9 points
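The pole-based reasoning the summary describes can be sketched numerically: for a damped harmonic oscillator, the Laplace-domain denominator is a quadratic in s, and its roots (the poles) tell you about decay and oscillation without solving the time-domain equation. A minimal sketch, with hypothetical damping and frequency values not taken from the video:

```python
import numpy as np

# Driven harmonic oscillator: x'' + 2*zeta*wn*x' + wn^2 * x = f(t).
# Its Laplace transform yields the transfer function
#   H(s) = 1 / (s^2 + 2*zeta*wn*s + wn^2),
# so the poles are the roots of the denominator polynomial.
zeta, wn = 0.2, 1.0  # hypothetical damping ratio and natural frequency

poles = np.roots([1.0, 2 * zeta * wn, wn**2])
print(poles)  # a complex-conjugate pair with negative real parts

# Negative real parts => transients decay exponentially;
# nonzero imaginary parts => the transient oscillates while decaying.
assert all(p.real < 0 for p in poles)
```

Underdamped poles like these sit left of the imaginary axis, which is exactly the transient-decay vs. steady-state distinction the video draws from pole locations.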
[Paper Analysis] The Free Transformer (and some Variational Autoencoder stuff)
40:10
Yannic Kilcher

The Free Transformer extends decoder architectures by introducing latent variables at the start of generation to capture global sequence decisions (like sentiment), replacing the implicit inference required by standard token-level sampling with explicit conditioning that simplifies learning and improves coherence.

5 months ago · 8 points
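The core mechanism the summary mentions — sampling a global latent variable up front and conditioning every generated token on it — is the standard VAE reparameterization idea. A minimal NumPy sketch of that general idea (not the paper's actual architecture; all shapes and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical sizes: a 4-token sequence with embedding dimension 8.
token_embeddings = rng.standard_normal((4, 8))

# Latent distribution parameters (would come from an encoder in practice).
mu, log_var = np.zeros(8), np.zeros(8)

z = reparameterize(mu, log_var)      # one global "decision" for the sequence
conditioned = token_embeddings + z   # broadcast z onto every token position

assert conditioned.shape == (4, 8)
```

Because z is fixed before generation begins, global properties (the summary's sentiment example) are decided once and shared across all positions, rather than being inferred implicitly token by token.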
Is the Artificial Intelligence Bubble About to Pop? | Ars Live
55:31
Ars Technica

Tech critic Ed Zitron argues the generative AI industry is an unsustainable bubble propped up by mythology rather than economics, with roughly $50 billion in annual revenue failing to justify trillion-dollar valuations as companies hemorrhage cash on unpredictable inference costs and unproven technology.

5 months ago · 9 points
Julia Shaw: Criminal Psychology of Murder, Serial Killers, Memory & Sex | Lex Fridman Podcast #483
2:42:07
Lex Fridman Podcast

Criminal psychologist Julia Shaw argues that 'evil' is not a binary category but a spectrum of traits present in everyone, emphasizing that understanding the psychological and environmental mechanisms behind violent behavior—including dehumanization and rationalization—is essential for preventing future crimes rather than simply condemning perpetrators.

5 months ago · 10 points
Coding Challenge 187: Bayes Theorem
53:38
The Coding Train

The Coding Train demonstrates how to implement a Naive Bayes text classifier in JavaScript from scratch, using a concrete library-book probability example to explain Bayes' theorem before coding a lightweight, browser-based word-frequency classifier.

6 months ago · 9 points
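The library-book style of example the video uses can be sketched in a few lines; the numbers below are hypothetical, not taken from the episode:

```python
# Hypothetical library: 60% of books are fiction, 20% of fiction books
# are overdue, 10% of nonfiction books are overdue.
p_fiction = 0.6
p_overdue_given_fiction = 0.2
p_overdue_given_nonfiction = 0.1

# Law of total probability: overall chance a book is overdue.
p_overdue = (p_overdue_given_fiction * p_fiction
             + p_overdue_given_nonfiction * (1 - p_fiction))

# Bayes' theorem: P(fiction | overdue)
#   = P(overdue | fiction) * P(fiction) / P(overdue)
p_fiction_given_overdue = p_overdue_given_fiction * p_fiction / p_overdue

print(round(p_fiction_given_overdue, 2))  # 0.75
```

A Naive Bayes text classifier applies the same update per word, multiplying per-word likelihoods under an independence assumption instead of a single evidence term.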