Eggheads working on El Capitan, the world’s most powerful publicly known supercomputer, have developed a new tsunami forecasting system that could dramatically improve response times in coastal communities.
In the event of a large-scale earthquake along the US Pacific Northwest, residents may have as little as 10 minutes before the waves strike, leaving little time for evacuation, researchers explained in a blog post on Tuesday.
“Conventional tsunami warning systems often rely on seismic and geodetic data to infer earthquake magnitude and location, but typically use simplistic models that fail to capture the complexity of fault ruptures, which can lead to false alarms or dangerously late warnings,” the authors explained.
The new forecast model, developed as part of a collaboration between Lawrence Livermore National Lab, the Oden Institute at the University of Texas at Austin, and the Scripps Institution of Oceanography at the University of California San Diego, sought to improve on existing tsunami forecast models by enabling real-time tracking of conditions, giving residents more time to get to high ground.
“This framework represents a paradigm shift in how we think about early warning systems,” Omar Ghattas, professor of mechanical engineering and principal faculty in the Oden Institute, explained in the post. “For the first time, we can combine real-time sensor data with full-physics modeling and uncertainty quantification – fast enough to make decisions before a tsunami reaches the shore.”
This involved generating a massive library of physics data linking earthquake-related seafloor motion to tsunamis. And to do it, researchers employed nearly every FLOP of compute the El Capitan supercomputer could muster to solve what’s known as a Bayesian inversion problem.
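To give a rough sense of what a Bayesian inversion involves, here's a toy linear-Gaussian sketch: infer unknown seafloor-motion parameters from noisy pressure-sensor readings, along with the uncertainty in that estimate. The forward model, sensor counts, and noise levels below are made up for illustration; the labs' actual solver works with full acoustic gravity wave physics at vastly larger scale.

```python
import numpy as np

# Toy Bayesian inversion: recover seafloor-motion parameters m from
# noisy sensor data d = G @ m + noise, with a Gaussian prior on m.
# All sizes and values here are illustrative, not from the real system.
rng = np.random.default_rng(0)

n_params, n_sensors = 4, 12
G = rng.normal(size=(n_sensors, n_params))   # stand-in linearized forward model
m_true = rng.normal(size=n_params)           # "true" seafloor motion (unknown in practice)
sigma = 0.05                                 # sensor noise standard deviation
d = G @ m_true + sigma * rng.normal(size=n_sensors)

# With a Gaussian prior m ~ N(0, tau^2 I), the posterior is also Gaussian,
# so the mean and covariance come from one linear solve.
tau = 1.0
H = G.T @ G / sigma**2 + np.eye(n_params) / tau**2   # posterior precision
m_post = np.linalg.solve(H, G.T @ d / sigma**2)      # posterior mean estimate
post_cov = np.linalg.inv(H)                          # uncertainty quantification

print("posterior mean:", m_post)
print("marginal std:  ", np.sqrt(np.diag(post_cov)))
```

The per-parameter standard deviations are the "uncertainty quantification" part: the forecast isn't a single answer but a distribution, which is what lets decision-makers weigh false-alarm risk against late warnings.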
Built by HPE’s Cray division for the Department of Energy (DoE) and National Nuclear Security Administration (NNSA), El Capitan came online last year and features more than 43,500 of AMD’s MI300A accelerated processing units (APUs), totaling 2.79 exaFLOPS of peak performance. In late 2024, the machine claimed the No. 1 spot on the Top500 ranking of publicly known supers with 1.74 exaFLOPS in the LINPACK benchmark.
The machine’s ultimate purpose isn’t public research, but rather ensuring that the US nuclear weapons stockpile actually works, since we can’t exactly set one off to find out.
However, before the system is air-gapped to begin its primary mission, researchers have had the opportunity to put the machine through its paces, running all manner of scientific workloads, including the extreme-scale acoustic gravity wave propagation problems necessary to forecast tsunamis in real time.
By front-loading the computation onto El Capitan, the labs were able to develop a digital twin that could predict tsunamis in a matter of seconds using much more modest GPU clusters.
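The offline/online split behind that digital twin can be sketched in miniature: do the expensive simulation work ahead of time to build a library of scenario signatures, then at forecast time match live sensor data against the library almost instantly. The scenario counts, parameters, and stand-in "physics" below are invented for illustration.

```python
import numpy as np

# Toy digital-twin pattern: precompute sensor signatures for many
# candidate earthquake scenarios (the expensive, supercomputer step),
# then identify the best match to live data cheaply at forecast time.
# Everything here is a made-up stand-in for the real physics library.
rng = np.random.default_rng(1)

# --- Offline (expensive): simulate scenarios -> sensor signatures
n_scenarios, n_sensors = 10_000, 32
scenario_params = rng.uniform(-1, 1, size=(n_scenarios, 3))  # e.g. slip, depth, extent
projection = rng.normal(size=(3, n_sensors))
library = np.tanh(scenario_params @ projection)              # stand-in forward physics

# --- Online (cheap): match live sensor data against the library
truth = 1234
live_data = library[truth] + 0.001 * rng.normal(size=n_sensors)
best = int(np.argmin(np.linalg.norm(library - live_data, axis=1)))
print("matched scenario:", best)  # this scenario's precomputed tsunami drives the forecast
```

The point of front-loading is that the online step is just a search over precomputed results, which is why a modest GPU cluster can return an answer in seconds once El Capitan has built the library.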
“This is the first digital twin with this level of complexity that runs in real time,” LLNL’s Tzanio Kolev, who co-authored the paper, said in the post.
While the model can drastically reduce response times in the event of a tsunami, it relies on streaming data from pressure sensors located along the seafloor. For forecast systems based on the model to be viable, these networks will need to grow, particularly in earthquake-prone regions.
However, as Kolev notes, the Bayesian inversion framework used here isn’t limited to tsunami prediction and can be applied to a number of complex systems, from wildfire tracking to space weather. ®