America’s Sandia National Labs said this week it will study the use of Cerebras’ wafer-sized accelerator chips to determine whether the nation’s nuclear weapons will work as intended, should global annihilation ever be desired.
With support from Lawrence Livermore and Los Alamos National Laboratories, the deployment is overseen by the Department of Energy’s National Nuclear Security Administration (NNSA), which is tasked, among other things, with maintaining the reliability and extending the lifespan of city-obliterating warheads through simulations that run on supercomputers. These simulations assure the agency that any changes to the United States’ nuclear arsenal – such as keeping the physics packages functional by substituting materials or tweaking the designs – will not unacceptably affect their destructive potential.
Since most of us have agreed to no longer conduct real tests of these devices, simulations using data from subcritical experiments are needed instead. And so, Cerebras’ silicon is being tested to see if it can help here.
“This collaboration with Cerebras Systems has great potential to impact future mission applications by enabling artificial intelligence and machine learning technologies, which are an emerging component of our production simulation workloads,” said Simon Hammond, federal program manager for computer systems and software on the Advanced Simulation and Computing (ASC) team at NNSA.
That’s an interesting mention of AI: Cerebras’ chips are designed to speed up this kind of work, and there’s a lot of interest in using machine learning models to predict the outcome of scientific experiments, as opposed to the classic computational approach of modeling physical interactions. Using AI might be faster than pure computation, although accuracy can be sacrificed, and a mix of the two approaches might be best.
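To make the surrogate-model idea concrete, here is a minimal, purely illustrative Python sketch – not anything the labs have described. A stand-in function plays the role of an expensive physics simulation; a handful of precomputed "runs" are cached, and a cheap predictor (simple interpolation here, where a real workload might train a neural network) answers new queries without re-running the expensive code:

```python
import bisect

def expensive_simulation(x):
    # Hypothetical stand-in for a costly physics code; in practice this
    # would be a multiphysics simulation running for hours on a supercomputer.
    return 2.0 * x * x - 3.0 * x + 1.0

# A small batch of precomputed "simulation runs" serves as training data.
xs = [i / 10.0 for i in range(21)]          # inputs 0.0 .. 2.0
ys = [expensive_simulation(x) for x in xs]  # cached outputs

def surrogate(x):
    """Cheap prediction from cached runs via linear interpolation.

    A real surrogate would be a learned model (e.g. a neural network
    trained on the simulation outputs); interpolation keeps the sketch
    self-contained while showing the same replace-the-expensive-call idea.
    """
    i = bisect.bisect_left(xs, x)
    i = max(1, min(i, len(xs) - 1))  # clamp so we always have a bracket
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

The trade-off the article mentions shows up directly: `surrogate` is far cheaper than `expensive_simulation` but only approximates it between sampled points, which is why a hybrid – surrogate for broad sweeps, full simulation for the cases that matter – is often the pragmatic choice.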
Cerebras CS-2 systems feature a large, plate-sized die packed with 2.6 trillion transistors. The startup claims that this oversized “waferscale” chip allows for much faster processing of huge data sets because the information can reside on the processor longer or all of the time, avoiding the shuffling of data in and out of slower system memory.
The newcomer is one of several exploring waferscale computing to accelerate larger AI/ML workloads. Tesla, for example, demonstrated its Dojo supercomputer at Hot Chips this year. For a full breakdown of either Cerebras’ waferscale computing architecture or Tesla’s Dojo platform, visit our sister site The Next Platform.
Speaking with The Register, Sivasankaran Rajamanickam, an engineer involved in deploying the Cerebras technology at Sandia, expressed interest in studying how the architecture handles sparse models and on-chip data flows. “The scale of the hardware makes it really exciting to see what we can do with it,” he said.
Cerebras is just the latest AI startup to provide its hardware under the ASC program. The Department of Energy routinely examines heterogeneous computing platforms using a variety of CPUs, GPUs, NICs, and other accelerators to improve the speed and resolution of these simulations. To date, the agency has deployed systems from Intel, AMD, Graphcore, Fujitsu, Marvell, IBM, and Nvidia, to name a few.
“We anticipate that the technologies developed under the program will be tested on the advanced architecture prototype systems of the Advanced Simulation and Computing program, and will eventually impact the production of advanced and standard technology platforms used by the three labs,” said Robert Hoekstra, senior manager of the Extreme Scale Computing Group at Sandia, in a statement.
We have been told that the results of these trials will influence future investments by the DoE. ®
https://www.theregister.com/2022/10/18/doe_cerebras_waferscale/ DoE tests Cerebras’ AI computing in nuclear weapon simulations • The Register