Gordon Bell Winner: Forecasting Tsunamis in Real Time With Digital Twins | NVIDIA GTC
TL;DR
UT Austin researchers demonstrate a real-time tsunami forecasting system built on physics-based digital twins and Bayesian inversion. The work won the Gordon Bell Prize by cutting the time to solve a billion-parameter inverse problem from an estimated decades of computation to milliseconds, using novel GPU-accelerated algorithms.
🌊 The Cascadia Subduction Zone Threat
Overdue magnitude 9 mega-thrust earthquake
The Cascadia subduction zone, stretching from Northern California to British Columbia, has produced 43 earthquakes over the last 10,000 years and is currently overdue for a rupture capable of reaching magnitude 9.
30-meter tsunamis with 15-minute arrival time
When the locked tectonic plates slip, the resulting uplift would generate tsunamis up to 30 meters high that would inundate the Pacific Northwest coast within just 15 minutes.
Sub-minute warning requirement
To provide actionable evacuation warnings, the digital twin must forecast wave impacts and heights in under one minute using real-time data from seafloor acoustic pressure sensors.
🧮 The Computational Challenge
Billion-parameter Bayesian inversion
The inverse problem requires solving for a spatio-temporal seafloor motion field discretized into approximately one billion parameters using sparse measurements from only 600 sensors.
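In the standard linear-Gaussian formulation of such an inverse problem (a textbook sketch; the notation below is mine, not necessarily the team's), the posterior over the seafloor motion field is itself Gaussian, with a closed-form MAP point:

```latex
% Observations d from the linear parameter-to-observable map F, with
% Gaussian noise and a Gaussian prior on the seafloor motion m:
%   d = F m + \eta,  \eta ~ N(0, \Gamma_noise),  m ~ N(m_0, \Gamma_prior)
\[
  m_{\mathrm{MAP}}
    = m_0
    + \bigl(F^{*}\Gamma_{\mathrm{noise}}^{-1}F + \Gamma_{\mathrm{prior}}^{-1}\bigr)^{-1}
      F^{*}\Gamma_{\mathrm{noise}}^{-1}\,(d - F m_0)
\]
```

Applying the data-misfit Hessian F*Γ_noise⁻¹F is the dominant cost: each application ordinarily requires a forward and an adjoint wave-propagation solve.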
Standard methods require 50 years of computation
State-of-the-art conventional algorithms would need roughly 250,000 forward wave-propagation solves, an estimated 50 years of computation on 512 A100 GPUs.
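As a back-of-envelope consistency check (my arithmetic, not a figure from the talk), 50 years of wallclock spread over 250,000 forward solves implies each solve costs on the order of an hour or two, consistent with the roughly one-hour per-solve figure quoted later in this summary:

```python
# Rough consistency check of the quoted cost estimate (illustrative only).
runs = 250_000
hours_total = 50 * 365 * 24          # 50 years in hours (ignoring leap days)
hours_per_run = hours_total / runs
print(f"{hours_total:,} total hours -> ~{hours_per_run:.2f} h per forward solve")
# -> 438,000 total hours -> ~1.75 h per forward solve
```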
Neural network surrogates fail
Because hyperbolic wave equations transport information without dissipation, the map from seafloor motion to sensor data remains intrinsically high-dimensional; this violates the low-dimensional manifold hypothesis that AI-based surrogate models depend on, so neural network surrogates cannot accurately replace the physics here.
⚡ Algorithmic Breakthrough
Time-shift invariance enables Toeplitz structure
Because the wave physics is autonomous (time-shift invariant), the parameter-to-observable map forms a block Toeplitz matrix in which each block column is a time-shifted copy of the first; embedding it in a block-circulant matrix lets the FFT block-diagonalize it.
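A minimal NumPy sketch of this idea (illustrative only, not the team's CUDA implementation; all names here are mine): a lower-triangular Toeplitz matrix, whose columns are time shifts of a single impulse response, can be applied in O(n log n) by zero-padding to a circulant matrix and multiplying in the FFT domain:

```python
import numpy as np

def toeplitz_matvec(h, x):
    """Apply the lower-triangular Toeplitz matrix T[i, j] = h[i - j]
    (zero for i < j) to x via FFT-based causal convolution,
    avoiding the O(n^2) dense matrix-vector product."""
    n = len(h)
    m = 2 * n                        # zero-pad to avoid circular wrap-around
    H = np.fft.rfft(h, m)
    X = np.fft.rfft(x, m)
    return np.fft.irfft(H * X, m)[:n]

# Verify against an explicit dense Toeplitz matvec.
rng = np.random.default_rng(0)
n = 512
h, x = rng.standard_normal(n), rng.standard_normal(n)
T = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
assert np.allclose(toeplitz_matvec(h, x), T @ x)
```

The dense matvec touches n² entries; the FFT route costs a few length-2n transforms, which is what turns repeated Hessian applications from hours into milliseconds at scale.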
FFT acceleration reduces solve time to milliseconds
By replacing expensive wave equation solves with FFTs and GPU-optimized dense linear algebra, the team reduced the computation from about one hour to milliseconds on the same hardware.
Open-source FFTMadVac implementation
The solution is available as an open-source CUDA library, FFTMatvec, built on NVIDIA's cuBLAS and cuFFT, enabling real-time Hessian-vector products for similar extreme-scale inverse problems.
Bottom Line
By exploiting the time-shift invariance of wave physics to obtain a block Toeplitz matrix structure solvable via FFTs on GPUs, researchers can now perform billion-parameter Bayesian tsunami forecasting in real time, enabling life-saving sub-minute early warnings for coastal communities.