🤖 AI & Machine Learning

Artificial intelligence, machine learning, and data science

This picture broke my brain
44:52
3Blue1Brown

This video unpacks M.C. Escher's "Print Gallery" lithograph, revealing how its paradoxical infinite loop relies on a conformal grid derived from complex analysis to transform a linear Droste effect into a continuous circular zoom, mathematically resolving the mysterious blank center.

3 days ago · 9 points
The most beautiful formula not enough people understand
1:00:24
3Blue1Brown

Grant Sanderson demonstrates why high-dimensional geometry, essential for modern AI, defies human intuition. Through counterintuitive sphere-packing puzzles he shows that high-dimensional cubes (not spheres) behave bizarrely, with corners stretching to distance √n from the center while edge lengths stay fixed, ultimately building toward the elegant but underappreciated formula for the volume of n-dimensional balls.

26 days ago · 9 points
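
The √n corner growth the summary mentions is easy to check directly. A minimal sketch (not from the video itself): for a cube of side 2 centered at the origin, with vertices at (±1, ..., ±1), face centers stay at distance 1 in every dimension while vertices recede like √n.

```python
import math

def corner_distance(n: int) -> float:
    """Distance from the center of a side-2 cube to a vertex (±1, ..., ±1)."""
    corner = [1.0] * n
    return math.sqrt(sum(x * x for x in corner))  # = sqrt(n)

# Face centers such as (1, 0, ..., 0) are at distance 1 in every dimension,
# but the corners run away as the dimension grows:
for n in (2, 3, 10, 100):
    print(n, corner_distance(n))
```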
The Hairy Ball Theorem
29:40
3Blue1Brown

The Hairy Ball Theorem establishes that every continuous tangent vector field on a sphere must vanish at some point, creating unavoidable constraints in systems ranging from video game physics to meteorology.

about 2 months ago · 10 points
How AI works in Super Simple Terms!!!
22:51
StatQuest with Josh Starmer

AI fundamentally works by converting text prompts into numerical coordinates and processing them through massive mathematical equations with trillions of parameters to predict the next word, requiring extensive training on internet-scale data followed by targeted alignment to produce useful responses.

2 months ago · 7 points
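
The next-word prediction idea in the summary can be illustrated with a toy model. This is a sketch of the general principle only, not how an LLM works internally: where a real model uses numerical embeddings and trillions of learned parameters, this stand-in just counts which word follows which in a tiny corpus.

```python
from collections import Counter, defaultdict

# Hypothetical tiny corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often above
```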
Traditional X-Mas Stream
2:33:37
Yannic Kilcher

While streaming Minecraft gameplay, ML researcher Yannic Kilcher discusses how recursive self-improvement in AI faces practical exploration limits similar to reinforcement learning, and notes the field's shift from fundamental research to market-driven product development focused on coding and image generation applications.

3 months ago · 6 points
TiDAR: Think in Diffusion, Talk in Autoregression (Paper Analysis)
47:02
Yannic Kilcher

TiDAR accelerates autoregressive LLM inference by utilizing idle GPU capacity during memory-bound phases to pre-draft future tokens via diffusion, then verifying them through autoregressive rejection sampling to maintain exact output quality without auxiliary model overhead.

3 months ago · 10 points
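
The draft-then-verify pattern TiDAR builds on can be sketched in a few lines. This is a simplified greedy variant, not the paper's method: TiDAR drafts with diffusion in otherwise-idle, memory-bound GPU time and verifies with rejection sampling against the autoregressive distribution, whereas here `draft_model` and `target_model` are hypothetical deterministic stand-ins that map a prefix to a next token.

```python
def speculative_step(prefix, draft_model, target_model, k=4):
    """One draft-then-verify step; output matches pure autoregressive decoding."""
    # Draft k future tokens cheaply with the fast model.
    drafted, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_model(ctx)
        drafted.append(t)
        ctx.append(t)
    # Verify: accept drafted tokens only while the target model agrees;
    # on the first disagreement, emit the target's own token and stop.
    accepted, ctx = [], list(prefix)
    for t in drafted:
        if target_model(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            accepted.append(target_model(ctx))
            break
    return prefix + accepted
```

With a perfect drafter all k tokens are accepted per step; with a bad drafter the loop degrades gracefully to one verified token per step.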
Titans: Learning to Memorize at Test Time (Paper Analysis)
32:31
Yannic Kilcher

This analysis of Google's Titans paper explores an architecture that extends context windows by using a 2-layer MLP as a neural memory module that learns to compress and retrieve long-range information at test time, though the reviewer notes it reinvents some existing linear attention concepts while offering genuine innovation in adaptive memory.

3 months ago · 7 points
Why Laplace transforms are so useful
23:05
3Blue1Brown

Laplace transforms convert differential equations into algebraic expressions on the complex s-plane, enabling analysis of dynamic systems—such as driven harmonic oscillators—by examining pole locations to distinguish transient decay from steady-state behavior without solving full time-domain equations.

5 months ago · 9 points
[Paper Analysis] The Free Transformer (and some Variational Autoencoder stuff)
40:10
Yannic Kilcher

The Free Transformer extends decoder architectures by introducing latent variables at the start of generation to capture global sequence decisions (like sentiment), replacing the implicit inference required by standard token-level sampling with explicit conditioning that simplifies learning and improves coherence.

5 months ago · 8 points
But what is a Laplace Transform?
34:41
3Blue1Brown

The Laplace transform decomposes functions into their constituent exponential components by integrating f(t)·e^(-st) from zero to infinity. When the complex frequency s matches an exponential hidden within f(t), the integrand becomes constant and the integral diverges, producing a pole; the transform also converts differential equations into algebraic problems by turning derivatives into multiplications by s.

5 months ago · 7 points
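
The defining integral and its pole can be checked numerically. A minimal sketch (my example, not the video's): for f(t) = e^(at) and real s > a, the integral of f(t)·e^(-st) from 0 to infinity equals 1/(s - a), and it blows up as s approaches a, which is exactly the pole.

```python
import math

def laplace_of_exp(a: float, s: float, T: float = 20.0, dt: float = 1e-3) -> float:
    """Riemann-sum approximation of the Laplace transform of e^(a t) at real s > a."""
    total, t = 0.0, 0.0
    while t < T:
        total += math.exp((a - s) * t) * dt  # f(t) * e^(-s t) * dt
        t += dt
    return total

# a = 2, s = 5: exact value is 1 / (5 - 2) = 1/3.
print(laplace_of_exp(2.0, 5.0))
```

Moving s closer to a (say s = 2.01) makes the value explode toward 1/0.01 = 100, the numerical shadow of the pole at s = a.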
Reinforcement Learning with Neural Networks: Mathematical Details
25:01
StatQuest with Josh Starmer

This video provides a step-by-step mathematical walkthrough of policy gradient reinforcement learning, demonstrating how to derive gradients via the chain rule and use binary reward signals (+1/-1) to correct update directions when training neural networks without labeled data.

12 months ago · 6 points
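
The chain-rule-plus-reward-sign mechanics the summary describes can be sketched with a one-parameter policy. This is an assumed minimal REINFORCE-style setup, not the video's exact network: the policy picks action 1 with probability sigmoid(w), rewards are +1 for action 1 and -1 for action 0, and the log-probability gradient scaled by the reward steers w toward the rewarded action.

```python
import math
import random

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def train(steps: int = 2000, lr: float = 0.1, seed: int = 0) -> float:
    """Train a one-parameter policy with the REINFORCE gradient; return P(action=1)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        p = sigmoid(w)
        action = 1 if rng.random() < p else 0
        reward = 1.0 if action == 1 else -1.0
        # Chain rule on the log-probability of the sampled action:
        #   action 1: d/dw log p     = 1 - p
        #   action 0: d/dw log (1-p) = -p
        grad_logp = (1.0 - p) if action == 1 else -p
        # Reward sign flips the update direction -- the +1/-1 trick
        # that substitutes for labeled data.
        w += lr * reward * grad_logp
    return sigmoid(w)

print(train())  # probability of the rewarded action, driven toward 1
```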