NNSA


OFFICE OF ADVANCED SIMULATION AND COMPUTING AND INSTITUTIONAL R&D PROGRAMS (NA-114)

 
Banner image: ASC PI meeting attendees, with the ASC and LDRD logos and the words "Quarterly Highlights"
 
The Advanced Simulation and Computing (ASC) program delivers leading-edge computer platforms, sophisticated physics and engineering codes, and uniquely qualified staff to address a wide variety of stockpile issues for design, physics certification, engineering qualification, and production. The Laboratory Directed Research and Development (LDRD) and Site-Directed Research and Development (SDRD) programs fund leading-edge research and development central to the U.S. DOE national laboratories’ core missions.

Quarterly Highlights |  Volume 7, Issue 3 | August 2024

Welcome to the August 2024 issue of the NA-114 newsletter—published quarterly to socialize the impactful work being performed by the National Nuclear Security Administration (NNSA) laboratories and our other partners. This edition begins with a highlight from Kansas City National Security Campus (KCNSC) and Sandia National Laboratories (SNL) on their collaborative development of a technique to compute a full-field system response for part reliability, testing, and failure analysis using a finite element model and sparse vibration test data on a part. Other featured highlights include:

  • Lawrence Livermore National Laboratory (LLNL) ASC program individuals appointed as “Distinguished Members of Technical Staff,” the highest technical staff level achievable by a scientist or engineer at LLNL.
  • Los Alamos National Laboratory’s (LANL’s) newest Fast Sweeping Detonation model, which provides up to 1000x speedup compared to legacy programmed burn implementations; these models support critical high explosives (HE) work.
  • TOP500 ranking of the El Capitan Early Delivery System and two other one-rack systems at LLNL, each achieving roughly 20 petaFLOPS on the High Performance Linpack benchmark.  The full El Capitan system is projected to exceed 2 exaFLOPS.

The banner image above shows the attendees at this year’s ASC Principal Investigators (PI) Meeting, hosted by LANL. The ASC PI meeting is an opportunity for the three NNSA national laboratories to meet with federal program managers and partners at other NNSA sites to share highlights from the past year and continue planning for the future.

Please join me in thanking the professionals who delivered the achievements highlighted in this newsletter and on an ongoing basis, all in support of our national security mission.

Thuc Hoang
NA-114 Office Director


In collaboration with SNL, KCNSC developed a method for evaluating a system’s full-field dynamic response from data taken at discrete, sparse locations on the part.

Intended applications include analysis of components undergoing reliability testing and post-mortem activities following an unintended failure.

KCNSC has developed a technique for augmenting sparse data measured from a physical test to compute a full-field system response by means of a finite element model.  Testing was performed at SNL on a simple physical structure subjected to a vibration load, as shown in Figure 1.  Through the process, a sparse set of a few dozen acceleration measurements was expanded to a larger set of hundreds of thousands of data points across the structure.  Results from the technique allow for the estimation of both the structure’s acceleration and strain response, as shown in Figure 2.  One benefit of this method is that knowledge of the applied forces is not required.  KCNSC is currently exercising this technology as a prototype on a sample circuit board, as shown in Figure 3 (NSC-614-5930 06/2024 UUR).
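
The expansion step can be illustrated with a minimal modal-expansion sketch in Python.  The assumption here is that the finite element model supplies mode shapes at both the sensor locations and all model degrees of freedom; the function and variable names are hypothetical and this is not KCNSC's actual implementation, only an illustration of how a few dozen measurements can be projected onto a model with hundreds of thousands of points without knowing the applied forces.

```python
import numpy as np

def expand_sparse_to_full_field(phi_sensors, phi_full, accel_sensors):
    """Hypothetical modal-expansion sketch (not KCNSC's actual code).

    phi_sensors   : (n_sensors, n_modes) FE mode shapes sampled at the sensor DOFs
    phi_full      : (n_full, n_modes)    FE mode shapes at all model DOFs
    accel_sensors : (n_sensors, n_times) measured accelerations

    Returns an (n_full, n_times) estimate of the full-field acceleration.
    """
    # Least-squares fit of modal coordinates to the sparse measurements;
    # knowledge of the applied forces is not needed for this step.
    q, *_ = np.linalg.lstsq(phi_sensors, accel_sensors, rcond=None)
    # Expand the modal coordinates back out to every model DOF.  Strain could
    # be estimated the same way by substituting strain mode shapes for phi_full.
    return phi_full @ q

# Toy usage with random placeholder data (shapes only, no physics).
rng = np.random.default_rng(0)
phi_full = rng.standard_normal((100_000, 12))              # many model DOFs
sensor_dofs = rng.choice(100_000, size=36, replace=False)  # a few dozen sensors
phi_sensors = phi_full[sensor_dofs]
accel_sensors = rng.standard_normal((36, 64))
accel_full = expand_sparse_to_full_field(phi_sensors, phi_full, accel_sensors)
print(accel_full.shape)  # (100000, 64)
```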

Figure 1: Display of the physical test structure (left), finite element model (middle), and correlated test and model data points (right).
Figure 2: Sparse and full-field data sets for acceleration (left) and strain (right).
Figure 3: Prototype circuit board and finite element model.

 


LLNL ASC program individuals appointed as Distinguished Members of Technical Staff.

Four researchers supporting the ASC program were among the twenty-three LLNL staff members recently named Distinguished Members of Technical Staff (DMTS) for their extraordinary scientific and technical contributions, as acknowledged by their professional peers and the broader scientific community.

DMTS is the highest technical staff level achievable by a scientist or engineer at LLNL and is a prestigious recognition on the personnel ladder.  Appointment to DMTS is reserved for LLNL scientists and engineers who have demonstrated at least one of the following:

  • A sustained history of high-level achievements in programs of importance to LLNL.
  • A sustained history of distinguished scientific and technical achievements, having become a recognized authority in the field.
  • A fundamental and important discovery that has had a sustained, widespread impact.

Congratulations to Nathan Barton, Vasily Bulatov, Richard Hornung, and Kathryn Mohror for this achievement.

Nathan Barton

Nathan Barton, Engineering, is the program group leader for Condensed Matter Physics in the Weapon Physics and Design Program at LLNL.  Barton’s fields of interest center on the computational mechanics of materials, with emphasis on multi-scale methods, crystal plasticity, multi-phase materials and phase transformations, dynamic behavior, large-scale computing, and connections to diffraction-based experiments.  His achievements were recognized in 2017, when he was named a Fellow of the American Physical Society, and through several Defense Programs Award of Excellence certificates, among other honors.

 

Vasily Bulatov

Vasily Bulatov, Physical and Life Sciences, is a leader in achieving some of the largest-scale molecular dynamics simulations ever accomplished, including the innovative analysis methodologies needed to enable them.  He led the first full-scale atomistic simulations of metal hardening, taking advantage of LLNL’s world-leading high-performance computing (HPC); this work represented a breakthrough in generating accurate material strength models by bypassing mesoscale simulations and their uncontrolled uncertainties.  His research interests focus on the physics and mechanics of materials strength and degradation, microstructure and its effects on materials properties, crystal dislocations, efficient mathematical algorithms for computer simulations of complex processes, HPC, uncertainty quantification, and data sciences in engineering.

 

Rich Hornung

Richard Hornung, Computing, is a computational scientist in the Center for Applied Scientific Computing (CASC) at LLNL.  Since joining the Laboratory in 1996, Hornung has performed algorithm research and software development for a wide range of problems in HPC.  He created or co-created several open-source HPC software projects, including SAMRAI (parallel adaptive mesh refinement), RAJA (C++ abstractions for hardware architecture portability), the RAJA Performance Suite, and Axom (HPC application building blocks).

 

Kathryn Mohror

Kathryn Mohror, Computing, is a computer scientist in the Parallel Systems Group in the Center for Applied Scientific Computing at LLNL.  Mohror serves as the deputy director for the LDRD program at LLNL and as the Advanced Scientific Computing Research point of contact for computer science at LLNL.  Mohror’s research on high-end computing systems has contributed greatly to ASC HPC efforts and is currently focused on input-output (I/O) performance and portability.  Her other research interests include scalable performance analysis and tuning, fault tolerance, and parallel programming paradigms.

 


Programmed burn models for high explosives are essential for stockpile stewardship.  LANL developers have completed a parallel version of the Fast Sweeping Detonation model, which provides up to 1000x speedup compared to legacy programmed burn implementations.

The programmed burn class of high explosive (HE) modeling techniques is essential for many stockpile stewardship applications.  These modeling techniques pre-calculate the arrival time of detonation waves in an explosive charge given a user-specified geometry and ignition point.  Standard approaches to programmed burn calculations suffer from inaccurate computation of curvature effects or are constrained by complicated conformal meshing requirements and costly parabolic solution techniques.

The Fast Sweeping Detonation (FSD) model avoids the complications and numerical expense of legacy methods by computing the velocity field from the geometry of a charge and then solving for the arrival time of the detonation with the fast sweeping method, which numerically solves the hyperbolic eikonal equation.  This model is now implemented in parallel and is available in XCAP’s FSD++ library.  Using the message passing interface (MPI), the model can calculate solutions to an eight-billion-zone simulation in under 30 seconds on 40 nodes of the Crossroads machine.  This is on the order of a 1000x speedup over legacy programmed burn implementations, which can take 10 hours on hundreds of nodes to perform the same calculation.  The ASC codes Pagosa and xRage both leverage this library, allowing fast turnaround on experimental designs that require multiple iterations on large 3D geometries with increased fidelity.  The model’s ability to be coupled with a detonation velocity-adjusted equation of state (EOS) makes it amenable to capturing phenomena in both conventional and insensitive high explosives (LA-UR-24-24953).
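
At its core, the method solves the eikonal equation |∇T| = 1/v for the arrival time T, given the local detonation velocity v, using Gauss-Seidel sweeps in alternating directions.  The short Python sketch below illustrates that idea on a 2D grid with a first-order upwind update; it is a serial teaching example with placeholder values, not the parallel FSD++ library.

```python
import numpy as np

def fast_sweep_arrival_time(v, h, source, n_sweeps=8):
    """First-order fast sweeping solver for |grad T| = 1/v on a 2D grid.

    Serial teaching sketch, not the parallel FSD++ library.  `v` is the
    detonation speed field, `h` the grid spacing, `source` the ignition cell.
    """
    ny, nx = v.shape
    big = 1.0e30
    T = np.full((ny, nx), big)
    T[source] = 0.0

    def update(i, j):
        a = min(T[i - 1, j] if i > 0 else big, T[i + 1, j] if i < ny - 1 else big)
        b = min(T[i, j - 1] if j > 0 else big, T[i, j + 1] if j < nx - 1 else big)
        f = h / v[i, j]                      # local slowness times grid spacing
        if abs(a - b) >= f:                  # one-sided (upwind) update
            t_new = min(a, b) + f
        else:                                # two-sided quadratic update
            t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
        T[i, j] = min(T[i, j], t_new)

    # Four alternating sweep orderings cover all characteristic directions.
    orderings = [(range(ny), range(nx)),
                 (range(ny), range(nx - 1, -1, -1)),
                 (range(ny - 1, -1, -1), range(nx)),
                 (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orderings:
            for i in rows:
                for j in cols:
                    update(i, j)
    return T

# Toy usage: a uniform-velocity charge ignited at the center of the grid.
v = np.full((101, 101), 8.0)                 # placeholder detonation speed
T = fast_sweep_arrival_time(v, h=0.1, source=(50, 50))
```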

Figure 4: The FSD-predicted and experimental isochrones in the Cyclops I experiment [1].
Figure 5: The normal detonation velocity is shown as a function of geometry and material in the Cyclops I problem [1].

 

[1] G. Terrones, M. Burkett, and C. Morris, “Burn Front and Reflected Shock Wave Visualization in an Inertially Confined Detonation of High Explosive,” Proceedings of the 2011 American Physical Society Shock Compression of Condensed Matter Conference, Chicago, Illinois, 2011.

 


The El Capitan Early Delivery System and two other one-rack systems at LLNL ranked #46, 47, and 48 on the June 2024 TOP500 supercomputers list, each with performance of roughly 20 petaFLOPS on the High Performance Linpack benchmark.  When complete, the full El Capitan system is projected to exceed 2 exaFLOPS.

Figure 6: Unveiled at the International Supercomputing Conference in Germany, the June 2024 TOP500 lists three systems with identical components — one computing rack each from El Capitan’s “Early Delivery System” (EDS), LLNL’s newest unclassified supercomputer RZAdams and its unclassified “sister” system Tuolumne. All three registered 19.65 petaFLOPS on the High Performance Linpack (HPL) benchmark, ranking them among the world’s 50 fastest (Graphic: Amanda Levasseur/LLNL).

Unveiled at the International Supercomputing Conference in Hamburg, Germany, the June 2024 TOP500 lists three LLNL systems with identical components — one computing rack each from El Capitan’s “Early Delivery System” (EDS), LLNL’s newest unclassified supercomputer RZAdams, and its unclassified “sister” system Tuolumne.  All three registered 19.65 petaFLOPS (nearly 20 quadrillion floating point calculations per second) on the High Performance Linpack (HPL) benchmark used by the TOP500 organization to determine the world’s fastest supercomputers.  The scores ranked them 46th, 47th, and 48th in the world, respectively.

LLNL brought the El Capitan EDS cabinet online as part of the overall installation process of NNSA’s first exascale supercomputer, El Capitan, which is projected to exceed 2 double-precision exaFLOPS (2 quintillion operations per second) of peak performance, making it likely the world’s most powerful supercomputer when fully deployed.  The EDS result constitutes an early test of El Capitan’s performance.  Installation of El Capitan’s compute nodes began in March 2024 and remains ongoing, keeping the machine on schedule for initial use by NNSA Tri-Lab application teams later this year.

“We’re excited to be making significant progress on El Capitan and moving a step closer to harnessing the extraordinary power of NNSA’s first exascale supercomputer here at LLNL,” said LLNL Weapon Simulation and Computing Associate Director, Rob Neely. “This is a tangible sign of advancement towards the promise of groundbreaking achievements in scientific research and national security, and we remain on track for deployment as a critical resource for the NNSA Tri-Labs beginning this fall.”

El Capitan will be used by the Tri-Labs for applications supporting NNSA’s stockpile modernization programs, as well as its stewardship mission to ensure the safety, security, and reliability of the Nation’s enduring nuclear stockpile in the absence of underground nuclear testing.  It also will spur advancements in inertial confinement fusion (ICF) energy, high-energy-density (HED) physics, material discovery, nuclear data, material EOS and conventional weapon design (LLNL-ABS-867126).


SNL developed a next-generation explicit method, running on graphics processing units (GPUs), for modeling laser powder bed fusion (LPBF) additive manufacturing, leading to a 300x speedup.

Rapidly modeling a component’s thermal history can help correct and avoid defects to improve manufacturing outcomes.

The LPBF process is complex and prone to defects, such as pores and warping, that are driven by the thermal response to the laser and are difficult to correct.  Rapidly modeling the part’s thermal history can help correct and avoid defects and aligns with NNSA goals of developing a digital-based enterprise to reduce manufacturing iterations and rapidly incorporate changes to requirements.  Simulation tools are needed to model the complex physics involved in LPBF processes to improve manufacturing outcomes for Nuclear Deterrence (ND) system components.  The speedups realized by this software will enable rapid prediction of manufacturing outcomes and reduce production iteration cycles for future programs considering LPBF (SAND2024-05407M).
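
For readers unfamiliar with the term, an "explicit" thermal-history calculation marches the temperature field forward one small time step at a time from purely local information, which is what makes it map well onto GPUs.  The sketch below is a deliberately simplified, conduction-only illustration of such a step with placeholder material values and a hypothetical moving laser source; it is not SNL's code.

```python
import numpy as np

def explicit_thermal_step(T, dt, dx, alpha, q_laser, rho_cp):
    """One explicit (forward Euler) update of a 2D temperature field.

    Conduction-only sketch of an LPBF thermal-history step with periodic
    boundaries for brevity; melting, powder properties, and surface losses
    are ignored.  Stability requires dt <= dx**2 / (4 * alpha).
    """
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    return T + dt * (alpha * lap + q_laser / rho_cp)

# Toy usage: a Gaussian laser spot scanning across the domain.
nx, dx, dt = 256, 20e-6, 1e-6            # 20-micron cells, 1-microsecond steps
alpha, rho_cp = 5e-6, 4.5e6              # placeholder diffusivity, heat capacity
x = np.arange(nx) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
T = np.full((nx, nx), 300.0)             # start at ambient temperature (K)
for step in range(200):
    x0 = 1.0e-3 + 0.5 * step * dt        # laser spot moving at 0.5 m/s
    q = 2.0e14 * np.exp(-((X - x0) ** 2 + (Y - 2.5e-3) ** 2) / (50e-6) ** 2)
    T = explicit_thermal_step(T, dt, dx, alpha, q, rho_cp)
```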


LANL hosted the Annual ASC Principal Investigators Meeting May 21-23, 2024

The annual ASC PI meeting is an opportunity for the three NNSA national laboratories primarily responsible for executing the ASC mission of science-based stockpile stewardship to meet with federal program managers and partners at other NNSA sites to share highlights from the past year and continue planning for the future.  This year’s meeting included three days of classified sessions with highlighted talks on current ASC HQ activities and updates from the NNSA laboratories, KCNSC, Y-12, Pantex, and SRNL on the Production Simulation Initiative (PSI) effort.

Unlike many of the other topically focused meetings ASC holds throughout the year, this meeting provides a unique chance to view ASC as an integrated program of diverse elements all working in unison to deliver on the critical national security mission of NNSA.  Responsibilities for hosting the PI meeting rotate among the three NNSA laboratories.  LANL hosted this year’s meeting, with active participation from all three labs and other NNSA sites that made for an interactive and illuminating three days in New Mexico at the LANL National Security Sciences Building (NSSB), pictured above.  The NA-114 Office recognizes the dedicated effort of the Tri-lab PI meeting organizing team for arranging this year’s meeting, with a special thanks to LANL for hosting!


The LLNL Plasma Science team implemented a refinement technique in OPUS that generates accurate plasma opacities 1000 times faster. 

Figure 7: Iron opacity at T=185 eV, ρ=0.18 g/cc.  The new method (yellow) refines the initial basis set (purple) and matches converged results (green) 1000 times faster.

Accurate radiative opacities of plasmas over a wide range of densities and temperatures are needed for most applications that rely on radiation-hydrodynamic coupling, as well as for designs of experiments on NIF, AWE facilities, OMEGA, and Z.

Generating opacity tables requires significant computational power due to the extensive number of atomic arrangements (referred to as “super-configurations” or SCs) necessary for each spectrum.  This effectively prevents convergence studies across broader areas of a table.

The LLNL Plasma Science team has developed and implemented an automatic refinement technique in OPUS that can reduce the number of atomic arrangements (SCs) required to generate accurate spectra by a factor of about a thousand.  This will ensure timely delivery of reliable opacity tables while eliminating SC convergence studies during table generation.  During the last quarter, LLNL also extended the formalism to correctly include the response of the system to the excited electron and interfaced it with LLNL’s detailed line accounting model (LLNL-ABS-867141).


The Common Modeling Framework at LANL has developed capabilities to deliver high explosives model recommendations for hydrodynamics simulations.  The framework provides high explosives material characteristics of reactants, products, and burn rate models, along with the source (pedigree) of the data.

Figure 8: Integrated Common Modeling Framework (CMF) illustration of workflow from model recommendations through to simulation outputs.  This example result was obtained by running the LANL Lagrangian code FLAG with input setup by CMF using material data for a high explosive.

The Common Modeling Framework (CMF) at LANL is a collaborative software environment that sets a new standard in the modeling community for knowledge capture, archiving, pedigree, and peer review.  The CMF serves as a workflow used by over a hundred scientists within the LANL simulation community.  In the context of multi-physics modeling, there is a long-standing need to provide streamlined access to the physics and engineering models recommended by subject matter experts (SMEs).  For example, the inputs required by LANL hydrodynamics codes include data that characterizes the behavior of the materials present in any given simulation.  At LANL, these data are provided by the ASC Physics and Engineering Models (PEM) subprogram.  The PEM members are the SMEs responsible for generating and releasing characteristic data to code-user communities.  In the past, it was challenging to coalesce the knowledge base of so many SMEs into one format or paradigm.  However, the general adoption of the CMF at LANL now provides the opportunity to make this data available to all framework users.

Newly developed CMF capabilities can deliver PEM recommendations from the PEM-HE Project to their ultimate users for hydrodynamics simulations.  The CMF project developed a tool that PEM-HE uses to convert data from its own internal format to files that can be read and consumed by CMF.  These released files have a 3+1 format for HE specifics: three segments containing the material characteristics of reactants, products, and burn rate models, plus one segment describing the source (pedigree) of the data.  This new capability puts official PEM-recommended models for HE at the fingertips of code users.  Extension to other PEM recommended models, such as EOS and strength, is planned for the near future (LA-UR-24-24126).
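
As a schematic of what such a 3+1 release might carry, the sketch below groups the three material-characteristics segments and the pedigree segment into one Python object.  All field names and values are hypothetical; the newsletter does not describe the actual CMF/PEM-HE file schema.

```python
from dataclasses import dataclass

@dataclass
class HEModelRelease:
    """Hypothetical sketch of a 3+1 release: three material-characteristics
    segments plus one pedigree segment.  Field names are illustrative only."""
    reactants: dict   # e.g., unreacted-material characteristics
    products: dict    # e.g., detonation-product characteristics
    burn_rate: dict   # e.g., burn rate model form and coefficients
    pedigree: dict    # source of the data: who released it, when, and from what

release = HEModelRelease(
    reactants={"model": "example-reactant-model", "parameters": {}},
    products={"model": "example-product-model", "parameters": {}},
    burn_rate={"model": "example-rate-law", "coefficients": {}},
    pedigree={"released_by": "PEM-HE", "record": "internal release reference"},
)
```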


SNL developed a new fatigue damage margins code for vibration and shock.  The newly developed code expedites time-to-solution and can run where previous codes could not due to memory and computing restrictions.

Environments engineers at SNL review ground test and flight test data to ensure fatigue damage margins provide sufficient qualification evidence as requirements and specifications are updated.  To provide more efficient tools to support the ND modernization programs, SNL researchers have developed a new fatigue damage margins code for vibration and shock.  The newly developed codes can run where previous codes could not due to memory and computing restrictions.  The new environments engineering code will also significantly expedite analysis time-to-solution (SAND2024-06894M).


In a paper published in the Journal of Fluids Engineering, the MARBL team at LLNL demonstrated multi-physics simulations on AMD graphics processing units, including multi-group radiation diffusion and thermonuclear burn, capabilities crucial for high-energy-density physics and fusion modeling.

LLNL researchers have achieved a milestone in accelerating and adding features to complex, multi-physics simulations run on graphics processing units (GPUs), a development that could advance HPC and engineering.  As LLNL readies for El Capitan, the MARBL team is preparing their code for efficient and portable performance on GPUs.  MARBL is a next-generation, multi-physics code which targets mission-relevant HED physics like those involved in ICF experiments and stockpile stewardship applications.  El Capitan is based on AMD’s cutting-edge MI300A accelerated processing units (APUs), which combine central processing units (CPUs) with GPUs and high-bandwidth memory into a single package, allowing for more efficient resource sharing.

Figure 9: A 2D MARBL simulation of the N210808 “Burning Plasma” shot performed at the National Ignition Facility at the onset of ignition. This calculation consists of 19 million high-order quadrature points and ran on El Capitan predecessor system RZAdams (on AMD MI300A GPUs). Animation by Rob Rieben.

 

In a recent paper published in the Journal of Fluids Engineering, researchers described how MARBL supports multi-physics simulations on GPUs — specifically multi-group radiation diffusion and thermonuclear burn, which are involved in fusion reactions — and the coupling of those physics models with the higher-order finite-element moving mesh for simulating fluid motion.  These capabilities, which are crucial for HED physics and fusion modeling, have been demonstrated both on the AMD MI250X GPUs in El Capitan’s early access machines and on RZAdams, which is based on the MI300A APU that will be used in El Capitan.  The simulation in Figure 9 ran on RZAdams.  Read more in the detailed LLNL Newsline article by Jeremy Thomas: https://www.llnl.gov/article/51131/llnl-team-accelerates-multi-physics-simulations-el-capitan-predecessor-systems (LLNL-ABS-867133).

See a full animation of the same calculation run on a CPU system in a YouTube link here


 

SNL developed Spitfire, a new tool to precompute states for abnormal environments (e.g., aircraft crash and burn) and fuel types.  Spitfire integrates with Sierra/Fuego, SNL’s low-Mach number reacting flow code, allowing speedups of an order of magnitude.

Abnormal environments span a range of scenarios (from aircraft crash and burn to inside-the-skin burning of foams and composites) and fuel types (jet fuels, airframe composites, rocket propellants, and plastics) with strongly varying heat flux contributions.  To address these environments, SNL researchers are developing advanced models and bringing research results from the global scientific community directly into Sierra/Fuego simulations to predict a range of abnormal environments.  Spitfire is a new tool developed to precompute states (e.g., density), source terms (e.g., radiative source and sink terms), and other needed quantities across a full spectrum of composition, enthalpy, and subgrid unmixedness.  Spitfire enables direct integration of these quantities with Sierra/Fuego, SNL’s low-Mach number reacting flow code, allowing speedups of roughly an order of magnitude.  Quality assurance can be evaluated using tools developed within the Spitfire package to specifically address SNL’s abnormal environment needs (SAND2024-05407M).
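
The precompute-then-look-up pattern can be sketched generically as below: an expensive thermochemical quantity is evaluated once on a structured grid over the table dimensions named above (composition, enthalpy, and subgrid unmixedness), and the flow solver then only interpolates at run time.  The function and parameter names are placeholders; this is not Spitfire's actual API.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Table dimensions named in the highlight: composition (mixture fraction),
# enthalpy, and subgrid unmixedness.  Grids and the "expensive" model below
# are placeholders standing in for detailed thermochemistry evaluations.
mix_frac = np.linspace(0.0, 1.0, 65)
enthalpy = np.linspace(-2.0e6, 1.0e6, 33)          # J/kg, placeholder range
unmixedness = np.linspace(0.0, 1.0, 17)

def expensive_density_model(z, h, s):
    # Stand-in for an expensive detailed-chemistry state evaluation.
    return 1.2 / (1.0 + 4.0 * z) * (1.0 - 0.1 * s) + 1.0e-7 * (h + 2.0e6)

Z, H, S = np.meshgrid(mix_frac, enthalpy, unmixedness, indexing="ij")
density_table = expensive_density_model(Z, H, S)    # built once, offline

# At run time the flow solver only interpolates, which is cheap.
density_lookup = RegularGridInterpolator((mix_frac, enthalpy, unmixedness),
                                          density_table)
rho = density_lookup([[0.3, -5.0e5, 0.2]])           # query one flow state
```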

Figure 10: This figure shows slices of precomputed tables for gas temperature and soot growth rate coefficient.  Spitfire enables direct integration of these quantities with Sierra/Fuego.

LANL’s ASC program developed a new software tool and workflow for linking simulation codes across multiple physical regimes in support of evaluating hostile environments.

LANL’s tool, Zelda, allows engineers and physicists to couple thermo-mechanical analyses with multi-physics simulations.

The LANL ASC program has developed a new software tool and workflow for linking simulation codes across multiple physical regimes in the Stockpile Stewardship Program.  This tool, called Zelda, allows engineers and physicists to couple thermo-mechanical analyses with multi-physics simulations smoothly to obtain more accurate predictions of stockpile systems’ performance.  As a result, LANL can now study the effects of hostile environments on the performance of the Nation's stockpile.

Figure 11: The top graphic shows the density of materials in a mock liquid-propellant rocket design from an engineering simulation.  The bottom two graphics show a zoomed-in view of the rocket, with deformation and von Mises stress displayed on the 3D engineering mesh (left) and on the finer Eulerian mesh (right).

Traditionally, engineering simulations and multi-physics simulations of systems were conducted independently.  However, as the fidelity of individual simulations has increased, it is important that analysts perform coupled simulations that span multiple physical regimes with the results of one regime feeding the other.  Such inputs may include upstream (engineering) effects, which affect the predicted performance in a downstream (multi-physics) simulation code.  The Zelda software package gives analysts the ability to load engineering simulation results, process the data, and transfer that data accurately to a finer-resolution mesh, which is suitable for a multi-physics code.  Zelda's Python application programming interface (API) allows users to create scriptable, reproducible workflows that can be used in LANL's Common Modeling Framework (CMF).  One aspect of Zelda's linking workflow is the transmission of any geometric deformations introduced by the upstream analysis to the mesh used in the downstream analysis.  The second is the alignment of material models between codes so that any transferred or transformed physical variables remain meaningful during the “handshake.”  The final aspect is the actual accurate transfer of solution data, such as densities and temperatures.  This transfer is performed using the Portage remapping software package, which was developed as part of LANL's ASC Advanced Technology Development and Mitigation (ATDM) efforts.  Portage, and the closely related interface reconstruction package, Tangram, are also being used in the Ristra and Eulerian Application projects within the LANL ASC program.
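
The data-transfer step at the heart of such a link can be illustrated with a minimal first-order conservative remap in 1D: each target cell accumulates source-cell values weighted by the length of overlap, so integrated quantities such as mass are preserved.  The sketch below is a generic illustration of intersection-based remapping, not the Portage API.

```python
import numpy as np

def conservative_remap_1d(src_edges, src_density, tgt_edges):
    """First-order conservative remap of a cell-averaged density field.

    A minimal illustration of intersection-based remapping, not Portage.
    `src_edges` and `tgt_edges` are monotonically increasing cell-edge arrays.
    """
    tgt_density = np.zeros(len(tgt_edges) - 1)
    for t in range(len(tgt_edges) - 1):
        lo, hi = tgt_edges[t], tgt_edges[t + 1]
        mass = 0.0
        for s in range(len(src_edges) - 1):
            # Length of overlap between source cell s and target cell t.
            overlap = min(hi, src_edges[s + 1]) - max(lo, src_edges[s])
            if overlap > 0.0:
                mass += src_density[s] * overlap
        tgt_density[t] = mass / (hi - lo)   # back to a cell average
    return tgt_density

# Toy usage: remap from a coarse engineering mesh to a finer Eulerian mesh.
src_edges = np.linspace(0.0, 1.0, 11)       # 10 coarse cells
tgt_edges = np.linspace(0.0, 1.0, 101)      # 100 fine cells
src_density = 1.0 + np.arange(10) * 0.1
fine_density = conservative_remap_1d(src_edges, src_density, tgt_edges)
# Total mass is preserved: (src_density * np.diff(src_edges)).sum()
#                       == (fine_density * np.diff(tgt_edges)).sum()
```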

In addition to being a software problem, code-to-code linking is also a communication challenge.  Specifically, the difficulty lies in getting domain scientists to speak the same language and agree on the terms of handoff.  To ease this communication challenge, the Puma team has developed a questionnaire for engineers and physicists that helps clarify these issues.

Currently, Zelda can link engineering simulations to LANL's Lagrangian Applications Project and Eulerian Applications Project going either from a 3D-to-3D simulation or a 3D-to-2D simulation.  Both capabilities have been exercised in various capacities to study the effect of hostile environments on system performance.  Future work includes linking aging effects and other defects from upstream simulations and developing a return link from physics to engineering to enable design iterations (LA-UR-24-24126).

Welcome Aboard…

LLNL ASC program

Karlene Maskaly

Karlene Maskaly has been selected as LLNL ASC’s Verification and Validation (V&V) Funding Stream Point of Contact (FSPOC).  Responsibilities include serving as the first POC for stakeholders, coordinating reviews and presentations on ASC V&V funded work, budget planning, Implementation Plan (IP) updates, and representing V&V activities and plans to HQ.  Karlene is currently a design physicist in the Design Physics Division (DPD), contributing as a lead designer for the NIF High-Z Materials Campaign and as an individual contributor under the Nuclear Counterterrorism Counterproliferation (NCTCP) program, and has been a DPD group leader since 2021.  Karlene earned a B.S. in physics and a Ph.D. in materials science from MIT and was a postdoc and staff member at LANL before joining LLNL.


LANL ASC program

Ed Dendy

Ed Dendy is the new LANL ASC Deputy Program Director.  Ed has extensive experience in and around the ASC program, having held program roles for various projects and line responsibility for Eulerian Codes under X-3.  Ed first came to LANL in 1994 as a postdoc, after receiving his M.S. and Ph.D. in Chemical Engineering from the University of Notre Dame.  Over the years he has contributed to many aspects of numerical simulation of multi-physics phenomena and complex fluid flow for a variety of problems and mission challenges.  He is returning from an Intergovernmental Personnel Act position with the Department of Defense (DoD), where he focused on Nuclear Weapons Council activities and the reentry vehicle industrial base.

 

John Patchett

John Patchett is the new LANL Computational Systems and Software Environments (CSSE) Program Manager.  He has led CSSE’s Application Visualization project since 2019 and has worked for the ASC program since 1999, when he joined the Advanced Computing Laboratory’s Systems Team.  His initial task was to explore the development of a Windows-based distributed-memory cluster for parallel rendering, compositing, and delivery to multi-panel displays.  He has a B.A. in anthropology and an M.S. in Computer Science from UNM, and a Dr.-Ing. in Computer Science from TU Kaiserslautern.

 


SNL ASC program

Remi Dingreville

Remi Dingreville has joined the SNL ASC program as the Principal Investigator for the Machine Learning Initiative to develop novel reduced-order models that predict the response of composite materials operating in extreme environments.  This capability opens new possibilities for multi-scale, multi-physics simulations of heterogeneous structures, especially for the assessment of printed parts.  Remi comes to the SNL ASC program with a background and experience in machine learning and computational materials science.  Outside the office, Remi enjoys the outdoors and is an avid trail runner in the summer and fall, and a skier in the winter.

 


NNSA LDRD/SDRD Quarterly Highlights

NNSS SDRD: Cryogenic deuterium pellet injection for enhanced neutron output of a dense plasma focus (DPF).

Principal Investigator Danny Lowe aims to increase DPF neutron output by freezing deuterium gas to the temperature of outer space.

In his FY24 SDRD project “Cryogenic Deuterium Pellet Injection for Enhanced Neutron Output of a Dense Plasma Focus,” Daniel Lowe is developing methods to increase the neutron yield of a DPF using cryogenic deuterium techniques.  By increasing the neutron output of the DPF, this project aims to enable increased accuracy in support of the Neutron Diagnosed Subcritical Experiments (NDSE) program and to offer an alternative neutron irradiation environment to support survivability programs.  This SDRD project is unique in its use of deuterium, an isotope of hydrogen, in solid form.  Daniel and his team freeze deuterium gas into solid ice at temperatures between 4 and 13 Kelvin (roughly -452 to -436 degrees Fahrenheit), approximately the temperature of outer space.  Hydrogen isotopes frozen into solid form are very rare because they do not naturally occur on Earth in the solid phase.

To cryogenically freeze deuterium gas, Daniel and his team have recently begun using liquid helium.  Liquid helium is significantly colder than liquid nitrogen, which has often been used at NNSS for radiation detection and specialized vacuum systems.  Specialists from Oak Ridge National Laboratory (ORNL) helped train five NNSS employees involved with this project on the correct methods to handle and transport liquid helium, and these employees have been successfully implementing ORNL’s strategies.  This is the first training of its kind to have been done at the NNSS, and it enables future R&D efforts that may require cryogenically frozen deuterium targets (This article is featured on the NNSS website here).

 

Figure 12: Fast optical images of DPF plasma sheath. Image courtesy: SDRD

LLNL LDRD: Manufacturing optimized designs for shaped charges.

Figure 13: Project DarkStar leverages artificial intelligence and machine learning to optimize shaped charges—explosive devices used to manipulate metals.  (Image: Carol Le/LLNL and Adobe Stock)

When materials are subjected to extreme environments, they face the risk of mixing together.  This mixing may result in hydrodynamic instabilities, yielding undesirable side effects.  Such instabilities present a grand challenge across multiple disciplines, especially in astrophysics, combustion and shaped charges.  Shaped charges are devices used to focus the energy of a detonating explosive, creating a high velocity jet that is capable of penetrating deep into metal, concrete or other target materials.

To address the challenges in controlling these instabilities, researchers at LLNL are coupling computing capabilities and manufacturing methods to rapidly develop and experimentally validate modifications to a shaped charge.  This work, published in the Journal of Applied Physics, is a part of Project DarkStar, an LDRD-funded strategic initiative aimed at controlling material deformation by investigating the scientific problems of complex hydrodynamics, shockwave physics, and energetic materials (Read more in LLNL’s article available online here).

LANL LDRD: Breakthrough boosts quantum AI.

A groundbreaking theoretical proof shows that a technique called overparametrization enhances performance in quantum machine learning for applications that stymie classical computers.  “We believe our results will be useful in using machine learning to learn the properties of quantum data, such as classifying different phases of matter in quantum materials research, which is very difficult on classical computers,” said Diego García-Martín, a postdoctoral researcher at LANL.  He is a co-author of a new paper on the technique by a LANL team, published in Nature Computational Science.  García-Martín worked on the research in the Laboratory’s Quantum Computing Summer School in 2021 as a graduate student from the Autonomous University of Madrid.

Figure 14: Quantum machine learning algorithms, like their classical counterparts, can get lost in a training landscape.  A new proof by LANL scientists shows that a technique called overparametrization enables quantum machine learning algorithms to find the highest point in the landscape — the solution to a problem — without getting stuck on false peaks.  Credit: Dreamstime

Machine learning, or artificial intelligence (AI), usually involves training neural networks to process information — data — and learn how to solve a given task.  In a nutshell, one can think of the neural network as a box with knobs, or parameters, that takes data as input and produces an output that depends on the configuration of the knobs.  “During the training phase, the algorithm updates these parameters as it learns, trying to find their optimal setting,” García-Martín said.  “Once the optimal parameters are determined, the neural network should be able to extrapolate what it learned from the training instances to new and previously unseen data points.”

Both classical and quantum AI share a challenge when training the parameters, as the algorithm can reach a sub-optimal configuration in its training and stall out.  Overparametrization, a well-known concept in classical machine learning that adds more and more parameters, can prevent that stall-out.  The implications of overparametrization in quantum machine learning models were poorly understood until now.  In the new paper, the LANL team establishes a theoretical framework for predicting the critical number of parameters at which a quantum machine learning model becomes overparametrized.  At a certain critical point, adding parameters prompts a leap in network performance and the model becomes significantly easier to train.  “By establishing the theory that underpins overparametrization in quantum neural networks, our research paves the way for optimizing the training process and achieving enhanced performance in practical quantum applications,” explained Martín Larocca, the lead author of the manuscript and postdoctoral researcher at LANL.  By taking advantage of aspects of quantum mechanics such as entanglement and superposition, quantum machine learning offers the promise of much greater speed, or quantum advantage, than machine learning on classical computers.
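
A flavor of this effect can be seen even in a tiny classical analogue.  The sketch below trains the same small regression task with a narrow and a wide one-hidden-layer network using plain gradient descent; the wider, overparametrized model typically reaches a much lower training loss from the same kind of random start.  This is only an illustrative classical toy, not the quantum neural network setting analyzed in the LANL paper.

```python
import numpy as np

def train_mlp(width, X, y, steps=10_000, lr=0.005, seed=0):
    """Full-batch gradient descent on a one-hidden-layer tanh network.

    Classical toy used only to illustrate overparametrization; it is not the
    quantum neural network setting analyzed in the LANL paper.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((1, width)) * 0.5
    b1 = np.zeros(width)
    W2 = rng.standard_normal((width, 1)) / np.sqrt(width)
    b2 = np.zeros(1)
    n = len(X)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)          # hidden activations
        pred = h @ W2 + b2
        err = pred - y
        loss = float(np.mean(err ** 2))
        # Backpropagation of the mean-squared-error gradient.
        d_pred = 2.0 * err / n
        dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
        d_pre = (d_pred @ W2.T) * (1.0 - h ** 2)
        dW1, db1 = X.T @ d_pre, d_pre.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return loss

# A small non-linear regression task, shared by both models.
rng = np.random.default_rng(1)
X = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)
y = np.sin(2.0 * X) + 0.1 * rng.standard_normal(X.shape)
print("narrow (2 hidden units) final loss:", train_mlp(2, X, y))
print("wide  (64 hidden units) final loss:", train_mlp(64, X, y))
```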

To illustrate the team’s findings, Marco Cerezo, the senior scientist on the paper and a quantum theorist at LANL, described a thought experiment in which a hiker looking for the tallest mountain in a dark landscape represents the training process.  The hiker can step only in certain directions and assesses their progress by measuring altitude using a limited GPS system.  In this analogy, the number of parameters in the model corresponds to the directions available for the hiker to move, Cerezo said.  “One parameter allows movement back and forth, two parameters enable lateral movement and so on,” he said.  A data landscape would likely have more than three dimensions, unlike our hypothetical hiker’s world.

With too few parameters, the hiker can’t thoroughly explore and might mistake a small hill for the tallest mountain or get stuck in a flat region where any step seems futile.  However, as the number of parameters increases, the hiker can move in more directions in higher dimensions.  What initially appeared as a local hill might turn out to be an elevated valley between peaks.  With the additional parameters, the hiker avoids getting trapped and finds the true peak, or the solution to the problem.  (LA-UR-23-26756; this article is available on LANL’s website here).