NNSA


OFFICE OF ADVANCED SIMULATION AND COMPUTING AND INSTITUTIONAL R&D PROGRAMS

 
 
The Advanced Simulation and Computing (ASC) program delivers leading-edge computer platforms, sophisticated physics and engineering codes, and uniquely qualified staff to address a wide variety of stockpile issues for design, physics certification, engineering qualification, and production. The Laboratory-Directed Research and Development (LDRD) and Site-Directed Research and Development (SDRD) programs fund leading-edge research and development central to the U.S. Department of Energy (DOE) national laboratories’ core missions.

Quarterly Highlights |  Volume 8, Issue 2 | May 2025

Welcome to the second 2025 issue of the ASC newsletter, published quarterly to share the impactful work being performed by the National Nuclear Security Administration (NNSA) laboratories and our other partners.  This edition begins with a special thanks to Sandia National Laboratories (SNL), the Trilab planning team, and the speakers who presented during this year’s ASC Principal Investigators (PI) meeting, hosted by SNL-NM in mid-May.  Other featured highlights in this edition include: 

  • Lawrence Livermore National Laboratory (LLNL)-led Multi-Agent Design Assistant (MADA) Trilab development team utilizing artificial intelligence (AI) agents to perform Inertial Confinement Fusion (ICF) physics simulations.
  • Los Alamos National Laboratory’s (LANL’s) breakthrough approach to equation of state (EOS) and opacity modeling producing mutually consistent EOS and opacity values.
  • SNL’s novel simulation-in-the-loop approach to reduce additive manufacturing distortions for W87-1 parts.
  • A warm “welcome” to the new staff at the labs, including new lab leads in the Computational Systems and Software Environment (CSSE) and Facility Operations and User Support (FOUS) subprograms.

Please join me in thanking the professionals who delivered the achievements highlighted in this newsletter and on an ongoing basis, all in support of our national security mission.

Dr. Stephen Rinehart
Assistant Deputy Administrator, ASC


SNL-NM hosted the Annual ASC Principal Investigators Meeting. 

The annual ASC Principal Investigators (PI) meeting is an opportunity for the three NNSA national laboratories primarily responsible for executing the ASC mission of science-based stockpile stewardship to meet with federal program managers and partners at other NNSA sites to share highlights from the past year and continue planning for the future.  This year’s meeting, held on May 13-15, 2025, included three days of classified sessions focused on the theme Accelerating ASC Impacts, with highlighted talks on current ASC HQ activities and updates from the NNSA laboratories, the W93 team, Y-12, Pantex, Kansas City National Security Campus, and Savannah River National Laboratory.  The meeting session topics included:

  • ASC Accomplishments, Challenges, and Opportunities
  • ASC Opportunities to Accelerate Early Phase (1-3) Programs
  • Computing, Simulation, and AI Opportunities and Challenges for the Production Agencies
  • Opportunities for AI to Accelerate the Complex 
  • Digital Engineering, Future Computing Environments, and the Common Modeling Framework (CMF)
  • ASC Impacts/Dependencies on other Agencies/Offices

Figure 1: Attendees at the May 2025 ASC PI meeting at SNL-NM.

Unlike many of the other topically focused meetings ASC holds throughout the year, this meeting provides a unique chance to view ASC as an integrated program of diverse elements all working in unison to deliver on the critical national security mission of NNSA.  Responsibilities for hosting the PI meeting rotate among the three NNSA laboratories.  SNL-NM hosted this year’s meeting, with active participation from all three labs and other NNSA sites, which made for an interactive and illuminating three days in New Mexico.  The ASC HQ office recognizes the dedicated effort of the Trilab PI meeting organizing team for arranging this year’s meeting, with a special thanks to SNL-NM for hosting! 
 

 


An AI agent developed by an LLNL-led Trilab team successfully generated and simulated an ensemble of ICF capsule implosions from a natural language prompt and capsule diagram.

The LLNL-led Multi-Agent Design Assistant (MADA) Trilab development team has taken the first steps toward utilizing AI agents to perform an ICF physics simulation.  An “agent” is an AI system that pairs a large language model (LLM) with tooling (an executable function) to perform a very specific task.  As part of the overall MADA project, an Inverse Design Agent (IDA) is being developed with the goal of accepting unstructured inputs, in the form of natural language and/or images, and generating simulation decks (highly structured files based on the Lua programming language) for the LLNL next-generation multiphysics code MARBL.
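
For readers unfamiliar with the pattern, the minimal sketch below shows the LLM-plus-tool structure of an agent.  All names and the toy deck format are hypothetical illustrations, not the actual MADA, IDA, or MARBL interfaces.

    def call_llm(system: str, user: str) -> dict:
        """Placeholder for a real LLM endpoint; returns a canned tool request."""
        return {"tool": "generate_deck",
                "args": {"shell_thickness_um": 80.0, "fill_density_mg_cc": 0.3}}

    def generate_deck(shell_thickness_um: float, fill_density_mg_cc: float) -> str:
        """Tool: emit a toy, Lua-style simulation deck from a few parameters.
        Illustrative only; this is not MARBL deck syntax."""
        return ("-- toy deck\n"
                f"capsule = {{ shell_thickness = {shell_thickness_um}, "
                f"fill_density = {fill_density_mg_cc} }}\n")

    TOOLS = {"generate_deck": generate_deck}

    def run_agent(user_prompt: str) -> str:
        # The LLM maps the unstructured request to a tool name and arguments...
        request = call_llm(system="Return a JSON tool request.", user=user_prompt)
        # ...and the agent executes that tool, producing structured output.
        return TOOLS[request["tool"]](**request["args"])

    deck = run_agent("Vary shell thickness around the baseline NIF capsule.")

In the IDA described above, the fine-tuned LLM generates the deck content itself; the sketch only illustrates the surrounding agent structure in which a model decides and a tool executes.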

Figure 2: Early demonstration of the Multi-Agent Design Assistant (MADA).

In this demonstration, an open-source LLM received an image of an ICF capsule along with a natural language description from an ICF designer asking for simulations to explore geometry variations around the baseline 3 megajoule National Ignition Facility (NIF) capsule design.  An LLM was fine-tuned (i.e., retrained) with all the MARBL documentation and example decks to provide a custom-trained LLM capable of correctly outputting simulation decks.  An ensemble of 3,000 ICF simulations, using tooling provided by the agent, was run on the LLNL high-performance computing (HPC) system, Tuolumne, to generate a training set from which a machine learning (ML) model (PROFESSOR, previously developed at LLNL) was then trained.  Once trained, the PROFESSOR model generated plots of implosion time histories (i.e., radius as a function of time) that change instantaneously when the human designer changes the input geometry in the ML model.  This provides a powerful new tool for ICF designers, made possible by combining these AI/ML techniques with HPC resources (LLNL-ABS-2007229).
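
As a rough sketch of the surrogate step, the fragment below uses a generic scikit-learn regressor and synthetic data as stand-ins (the actual workflow uses LLNL's PROFESSOR model trained on MARBL outputs): it fits a fast model to an ensemble of implosion histories and then evaluates it instantly for a new geometry.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    # Stand-in for the simulated ensemble: geometry parameters -> radius(t) curves.
    rng = np.random.default_rng(1)
    geometry_params = rng.uniform(0.9, 1.1, size=(3000, 2))   # e.g., thickness and radius scale
    times = np.linspace(0.0, 10.0, 50)
    radius_histories = np.array(
        [1000.0 * p[1] * np.exp(-0.2 * p[0] * times) for p in geometry_params]
    )

    # Fit a fast surrogate on the ensemble (PROFESSOR plays this role in the real workflow).
    surrogate = KNeighborsRegressor(n_neighbors=5).fit(geometry_params, radius_histories)

    # Evaluate instantly when the designer changes the input geometry.
    new_geometry = np.array([[1.02, 0.97]])
    predicted_radius_vs_time = surrogate.predict(new_geometry)[0]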
 

 


SNL approach impacts W87-1 design decisions with distortion compensation for additive manufacturing.

Figure 3: A comparison of distortion predictions for different design iterations of the PEU housing. Distortion magnitude (m) as shown represents the deviation from the as-designed geometry.

Metal additive manufacturing often faces significant challenges, such as distortions that can prevent parts from meeting required geometric specifications.  Researchers within the ASC Program at SNL addressed these challenges by utilizing a novel simulation-in-the-loop approach to predict and reduce distortions during manufacturing.  This method was applied to evaluate multiple designs for the package electronic unit (PEU) housing of the W87-1, identifying the design that experienced the least distortion.  By integrating simulations into the design process, this approach improved the quality of the final product while shortening overall development timelines.  Ultimately, this innovation enables safer and more reliable use of additively manufactured components in critical engineering applications (adapted from SAND2025-05492M).
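
One common form of simulation-in-the-loop distortion compensation, shown below as a generic sketch rather than the specific SNL workflow, iterates a build simulation against a pre-deformed input geometry so the predicted as-built part lands closer to the design intent.

    import numpy as np

    def compensate_geometry(nominal, simulate_build, iterations=3, relax=0.8):
        """Generic pre-compensation loop (illustrative only).
        nominal: (N, 3) array of design surface points.
        simulate_build: maps an input geometry to the predicted as-built geometry."""
        design = nominal.copy()
        for _ in range(iterations):
            predicted = simulate_build(design)      # predicted as-built shape
            error = predicted - nominal             # distortion relative to design intent
            design = design - relax * error         # pre-deform the input the other way
        return design

    # Toy stand-in for a thermomechanical build simulation: uniform shrink plus sag.
    simulate = lambda g: 0.98 * g + np.array([0.0, 0.0, -0.05])
    nominal = np.random.rand(100, 3)
    compensated = compensate_geometry(nominal, simulate)
    residual = np.abs(simulate(compensated) - nominal).max()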


LANL’s breakthrough approach to EOS and opacity modeling produces mutually consistent EOS and opacity values.

Figure 4: The figure shows predictions of the new model (VS) and OPLIB for oxygen at a temperature of ~2,000,000 K, and two different plasma mass densities (0.11 and 1 g/cm^3). Agreement between the methods is encouraging at the lower density, while some differences at high density point to the importance of including self-consistent plasma effects in the new model.

Opacity is used to predict the generation and transport of x-rays in hot matter.  Disagreements between theoretical opacity models and opacity measurements carried out at the Z-pinch facility at SNL, together with preliminary results from the opacity campaign at the NIF at LLNL, have prompted efforts to test and improve theoretical atomic models.  A multi-laboratory opacity analysis is now identifying approaches for improving the accuracy of ASC simulations. 

LANL’s previous plasma EOS models described plasma effects self-consistently, but did not include atomic structure of sufficient fidelity to yield accurate opacity calculations.  LANL’s Physics and Engineering Models (PEM) atomic hybrid EOS-opacity effort has developed a new relativistic, variational formalism and computer code (see Starrett et al., Phys. Rev. E, 2024) that produces consistent and high-fidelity EOS and opacities.  This represents a breakthrough in consistent EOS and opacity modeling and improves both LANL’s EOS and opacity efforts.  Accurate EOS was demonstrated through agreement with trusted simulations based on density functional theory.  Predictions of excitation energies were made and compared to experiments in the dense plasma regime and were found to be in good agreement.  Some differences were observed for highly charged states; however, LANL researchers understand their origin.  The new method is a major step forward, and code development for applying the method to materials of interest is underway.  Very recently, LANL began to compare predictions of the opacity of oxygen (an ongoing target of Z-machine and NIF opacity measurements) from the new model to the Opacity LIBrary (OPLIB) database, as shown in Figure 4 (LA-UR-25-21107).

 


SNL’s SIERRA simulation code suite reduces workload for analysts with automated solver settings.

Figure 5: Time to solve the linear problems improved by changing from the old solver settings to the new, smarter method in a performance test case on CTS-2. Users need only the simple input format provided.

Multiphysics simulations, such as those involving ablation, thermal batteries, and detonators, are crucial for SNL’s nuclear deterrence mission, but efficient solutions rely on optimal solver parameter selections that are very difficult for users to determine.  The SNL SIERRA/Thermal Fluids team has made significant strides by developing solver heuristics that resulted in a remarkable 3.7x speedup on Commodity Technology System-2 (CTS-2) and 9.6x speedup on Advanced Technology System-2 (ATS-2) in linear solver time for ablation simulations, with end-to-end speedups of 1.6x on CTS-2 and 2.8x on ATS-2.  By simplifying the solver settings to just five simple lines in the input deck, this feature not only reduces the complexity and computational time required of users, but also addresses a long-standing request from weapon analysts for more automated solutions in multiphysics problems.  As a result, this accomplishment opens the aperture for development teams to offer more automated meshing and remeshing capabilities for analyses (SAND2025-05492M).
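
As a purely generic illustration of the heuristic idea (not SIERRA's input syntax or actual selection rules), an automated setting might map a few coarse problem characteristics to solver parameters so the analyst never has to choose them directly.

    def pick_solver_settings(num_dofs, is_symmetric, has_strong_coupling):
        """Toy heuristic mapping coarse problem traits to linear-solver parameters.
        Hypothetical rules for illustration only; not SIERRA's interface."""
        settings = {"solver": "cg" if is_symmetric else "gmres"}
        if num_dofs > 5_000_000 or has_strong_coupling:
            settings["preconditioner"] = "algebraic_multigrid"
        else:
            settings["preconditioner"] = "ilu"
        settings["rel_tolerance"] = 1e-8 if has_strong_coupling else 1e-6
        return settings

    print(pick_solver_settings(num_dofs=12_000_000, is_symmetric=True,
                               has_strong_coupling=False))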


LANL developed a programmable data access accelerator that tackles the memory wall in HPC.

Nuclear security applications are distinguished by the complexity of coupled multiphysics and by the space complexity of high-resolution, many-material, multiple-space representations within a single application.  Current state-of-the-art simulations require 1 to 2 petabytes of system memory because of the large spatial domain coupled with the large dynamic range of spatial and state space.  This space complexity requires the use of sparse data structures such as adaptive mesh refinement and sparse material representations.  Physics algorithms adapted to these data structures result in indirect memory access patterns throughout the simulation.
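
The fragment below illustrates the kind of indirect (gather) access such sparse representations produce: each load address depends on a run-time index array, so caches and stride-based hardware prefetchers are largely ineffective.

    import numpy as np

    # Dense field values and an indirection (index) array, as produced by sparse
    # material or AMR bookkeeping. The indices are known only at run time.
    values = np.random.rand(10_000_000)
    indices = np.random.randint(0, values.size, size=1_000_000)

    # Indirect (gather) access: effectively random addresses, the memory-wall
    # bottleneck that an accelerator such as DX100 targets.
    gathered = values[indices]
    partial_sum = gathered.sum()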

Figure 6: DX100 architecture.

With these workloads in mind, LANL and the University of Michigan have designed and prototyped DX100, a programmable data access accelerator that significantly improves performance across a broad set of workloads commonly used in simulation and data analysis.  DX100 addresses the most pressing bottleneck exhibited by indirect-memory-access-bound applications: the memory wall.  Through a deep iterative codesign approach, DX100 delivers improved efficiency and performance for these workloads (Figure 6). 

Figure 7: (a) DDR4 DRAM organization; (b) memory access reordering techniques for improved row-buffer hit rate.

Through reordering, interleaving, and coalescing memory requests, DX100 can provide up to a 17.8 times speedup in memory access performance.  DX100 efficiently offloads indirect memory accesses and associated address calculation operations while simultaneously optimizing memory accesses to maximize the effective bandwidth of the memory system.  Figure 7 illustrates the DX100 reordering technique to improve row-buffer hit rate and therefore achieved memory bandwidth. 
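
A software analogue of the reordering idea, shown below only for illustration (DX100 performs this in hardware near the memory system), is to issue outstanding indirect requests in address order so that accesses to the same DRAM row tend to arrive back-to-back.

    import numpy as np

    values = np.random.rand(10_000_000)
    indices = np.random.randint(0, values.size, size=1_000_000)

    # Issue the gathers in address order (better row-buffer locality), then
    # restore the original result order expected by the caller.
    order = np.argsort(indices)
    gathered_sorted = values[indices[order]]
    gathered = np.empty_like(gathered_sorted)
    gathered[order] = gathered_sorted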

To support the DX100 accelerator without significant programming efforts, LANL developed a set of Multi-Level Intermediate Representation (MLIR) compiler passes that automatically transform legacy code to use the DX100.  Experimental evaluations on 12 benchmarks, spanning scientific computing, database, and graph applications, show that DX100 achieves performance improvements of 2.5 times over a multicore baseline and 1.9 times over a state-of-the-art indirect prefetcher.

Figure 8 illustrates the performance improvements achieved using DX100 across a diverse set of workloads, including access patterns commonly found in both the Eulerian Applications Project (EAP) and the Lagrangian Applications Project (LAP).  Key access patterns and kernels found in LAP, including calculating the gradient of a zone-centered field at mesh points (GZP) and at the zone centers (GZZ), achieve 5.3 times and 4.6 times improvements, respectively.  Across a broad set of data analysis and modeling/simulation workloads, DX100 provides a 2.5 times (geometric mean) speedup. 

Figure 8: DX100 speedup for different workloads.

Hardware/software codesign is critical to the continued improvement in platforms and codes for the ASC program.  The DX100 is an example of the significant improvements in application performance that can be achieved through this approach.  This work, Khadem, Alireza (UMich), Kamalavasan Kamalakkannan (LANL), et al., “DX100: A Programmable Data Access Accelerator for Indirection,” will appear in the 2025 International Symposium on Computer Architecture, the premier forum for new ideas and experimental results in computer architecture (LA-UR-25-23865).

 


 

 

LLNL’s Kull code team deploys new plasma physics capabilities enhancing the study of key phenomena for NIF experiments, accelerating some simulations by a factor of seven.

LLNL’s Kull code team deployed a new load-balancing scheme that improves throughput of non-local thermal equilibrium (NLTE) models to provide much higher fidelity modeling of the x-ray drives used for target packages of interest.  The new scheme distributes the NLTE kinetics problems evenly over all available processors, eliminating the bottleneck that results from some processors having too many NLTE zones while others sit idle with none.  The team leveraged components of LLNL’s Conduit library to implement this capability quickly. 
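
The idea can be sketched simply (a simplified illustration, not the Kull implementation): rather than each processor solving only the NLTE zones it happens to own spatially, the global list of NLTE kinetics problems is dealt out evenly across all ranks.

    def balanced_partition(nlte_zone_ids, num_ranks):
        """Deal NLTE kinetics problems out evenly across ranks, regardless of
        which rank owns each zone spatially. Simplified illustration only."""
        return {rank: nlte_zone_ids[rank::num_ranks] for rank in range(num_ranks)}

    # Example: 10 NLTE zones concentrated on a hohlraum wall owned by 2 of 8 ranks.
    zones_on_wall = list(range(10))
    assignment = balanced_partition(zones_on_wall, num_ranks=8)
    # Every rank now solves 1-2 kinetics problems instead of 2 ranks solving 5 each.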

Many experiments utilize a heated hohlraum made of high-Z material (such as gold or depleted uranium) to drive a target package by creating an x-ray bath.  Accurately modeling x-ray drives requires high-fidelity treatment of the opacity and emissivity of the hohlraum material and is complicated by highly disparate material and radiation temperatures.  NLTE models handle this complexity (see, e.g., Jones et al. 2017).  However, the NLTE treatment solves an expensive atomic kinetics problem in each zone where the NLTE hohlraum material resides, a cost that has made NLTE treatment prohibitive for widespread use.

The Sonoma radiation flow experiment fielded on the NIF illustrates the modeling improvement.  This experiment heated a hohlraum with 96 laser beams to create an x-ray bath.  A speedup factor of approximately 7x was achieved using the new load-balancing capability.  Figure 9 shows this simulation at t=2.7 nanoseconds (ns).  The NLTE workload is concentrated on the hohlraum wall, which would severely impact the processors covering this region without the new load balancing (LLNL-ABS-2002655).
 

Figure 9: Material density is shown on a logarithmic color scale, while the beam power for individual laser ray traces is shown in the yellow-green color map.

 


ASC Tri-Lab Remote Computing Enablement project demonstrates classified cross-site continuous integration capability.

The ASC Tri-Lab Remote Computing Enablement (RCE) project has successfully launched a groundbreaking classified cross-site continuous integration (CI) workflow for HPC users at LANL, LLNL, and SNL.  This innovative solution streamlines code development by automating the testing and integration of code changes, ensuring software quality and compatibility across compute platforms.  It allows teams to initiate remote computing directly from their local GitLab repositories without requiring remote logins or authentication processes.  The first deployment enabled LLNL’s Sierra compute system for LANL and SNL users in late March, followed by LANL’s Crossroads system for LLNL and SNL users in early April.  The upcoming integration of El Capitan will further expand these capabilities.  This advancement is powered by the Jacamar GitLab custom executor module, developed through the Exascale Computing Project CI initiative (LLNL-ABS-2007252).


LANL’s deep learning backpropagation algorithm implemented on a neuromorphic chip uses only mechanisms found in the brain.

It has been argued that most modern ML algorithms are not neurophysiologically plausible.  In the neuroscience community, this has been claimed of the backpropagation (BP) algorithm, the workhorse of modern deep learning.  However, BP is known to solve the optimization problem of how a global objective function can be related to a local synaptic modification in a network.

In a recent paper, “The backpropagation algorithm implemented on spiking neuromorphic hardware,” a team funded by LANL’s ASC Beyond Moore’s Law program (among other funding sources over seven years) implemented the BP algorithm using only mechanisms that are found in the brain.  This study presents a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing.  The team implemented their algorithm on Intel’s Loihi neuromorphic research processor. 
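
For context, the sketch below shows what the conventional BP algorithm computes in its standard textbook form (this is not the spiking, on-chip implementation): the gradient of a single global loss is propagated backward through the layers and applied as local, per-weight updates.

    import numpy as np

    # Tiny two-layer network trained with standard backpropagation.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((32, 4))            # inputs
    y = rng.standard_normal((32, 1))            # targets
    W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 1))
    lr = 0.01

    for _ in range(100):
        # Forward pass
        h = np.tanh(X @ W1)
        pred = h @ W2
        loss = np.mean((pred - y) ** 2)         # global objective

        # Backward pass: the chain rule converts the global loss gradient
        # into local, per-weight updates -- the relationship the LANL work
        # reproduces with purely brain-like (spiking) mechanisms.
        d_pred = 2 * (pred - y) / len(y)
        dW2 = h.T @ d_pred
        d_h = (d_pred @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ d_h
        W1 -= lr * dW1
        W2 -= lr * dW2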

Figure 10: Neuromorphic deep-learning algorithm deployment could herald a low-power processing solution for AI.

This is the first work to show a spiking neural network (SNN) implementation of the exact BP algorithm that is fully on-chip without a computer-in-the-loop in a multi-layer context.  It is competitive in accuracy with off-chip trained SNNs and achieves an energy-delay product suitable for edge computing.  This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency implementation of modern deep learning applications.  This project won a 2022 R&D 100 Award and is highlighted on the Nature Communications “Applied physics and mathematics” Focus page (LA-UR-25-22558).
 


AI Safety Institute red teaming exercise supported by SNL and LLNL. 

A joint SNL and LLNL team successfully performed an AI LLM red teaming pilot.  The pilot, conducted with the Department of Commerce’s AI Safety Institute as requested in an October 2024 National Security Memorandum, focused on biosafety.  Researchers used computing resources at both SNL and LLNL, including an El Capitan supercomputer testbed, to deploy several LLMs and develop user interfaces for real-time interaction with the models.  Despite the compressed timescale, the team successfully overcame several technical challenges and established multiple risk mitigation strategies to ensure a smooth live demonstration.  The exercise, performed at SNL’s office in Washington, DC, allowed the AI Safety Institute to achieve its milestone rapidly.  Carried out with assistance from DOE, NNSA, the Department of Homeland Security, the Defense Intelligence Agency, and subject matter experts from several other national labs, the exercise highlights the capabilities of NNSA’s cross-lab computing environment.  The process developed by the NNSA team can be repeated in the future to quickly deploy and evaluate emerging large language and frontier models as they are released (SAND2025-02747N).

 


 

Welcome Aboard…

LLNL ASC program

Cyrus Harrison

Cyrus Harrison has been selected as the new Weapon Simulation and Computing (WSC)/Computational Physics (CP) Project Coordination Council (PCC) Workflow Project Leader, effective May 16, 2025.  Cyrus has 20 years of experience supporting HPC Scientific Visualization and Data Analysis at LLNL.  He is the Project Leader of the VisIt open-source visualization tool, leading the technical direction of VisIt and related products.  He enjoys building software that enables simulation users to ask and answer any question (small to monumental).  He has also advocated for and supported WSC/CP's modular software strategy, open-source software, and shared software engineering solutions.  Cyrus has over 10 years of line management experience in Computing, including 7 years as Deputy Division Leader of the Applications, Simulations, and Quality (ASQ) Simulation Section.  In this role he helped manage a large portion of WSC/CP's computer science workforce.  Cyrus has a Bachelor of Science and a Master of Engineering in Computer Science and Engineering from the University of Florida.  Outside of work he enjoys his family and being a dad.  LLNL extends its gratitude to Dan Laney for his dedicated service in this role since its creation in 2015, and also thanks Matt Nelms for stepping in as acting Workflow Project Leader over the past four months.


LANL ASC program

Andres Quan

Andres Quan is a Computer Scientist working within the Facility Operations and User Support (FOUS) subprogram at LANL.  He has his Bachelor of Science in Computer Science from New Mexico Institute of Mining and Technology, his Master's in Computer Science from the University of New Mexico, and is currently pursuing a PhD at the University of New Mexico with a focus on the simulation and optimization of schedulers for task-based programming models.  Andres started at LANL in 2014 and has had the opportunity to work on a variety of projects relating to systems monitoring, cluster management, interfaces for x-ray free electron laser (XFEL) crystallography, task-based programming languages, and natural language processing.  Andres previously worked under Dr. Hamdy Soliman in his Machine Learning and Sensor Systems laboratory at New Mexico Tech with a focus on localization of human targets from aerial imagery using convolutional neural networks.  In his free time he enjoys video games, aquaponic gardening, and cooking.

Hank Wikle

Hank Wikle is a scientist working within the FOUS subprogram at LANL. He began at LANL as a student in the 2023 Supercomputing Institute and has been a member of HPC Division’s Programming and Runtime Environments team since January of 2024.  Hank grew up in Santa Fe, NM and holds a Bachelor of Computer Science from the University of New Mexico.  In his free time Hank enjoys reading, hiking, and watching movies.

 


SNL ASC program

Brooke Hejnal

Brooke Hejnal is a postdoctoral appointee and works within the Organic Material Decomposition Team assisting with the development of modeling capabilities for advanced and additive manufacturing at various fidelities.  Specifically, Brooke is working to enrich a low-fidelity physics-based foam decomposition model used in large-scale abnormal thermal analysis.  Before coming to SNL, Brooke was a mathematics PhD student at Purdue University.  In her personal time, Brooke enjoys going on runs by the river, watching movies, herding sheep in Catan, and engaging in trivia night every Wednesday.

 

Brian Phung

Brian Phung is developing and extending methods for simulating contact using the Schwarz Alternating Method for the High & Low Fidelity Models project.  Before coming to SNL, Brian was a postdoc at the University of Utah, performing research involving interpretable ML and simulating fractures in additively manufactured metals.  He earned his PhD at the University of Utah, where he developed tools to simulate and study microstructurally small fatigue crack growth using crystal plasticity finite element models and adaptive remeshing.  In his personal time, Brian enjoys mountain biking, backcountry skiing and splitboarding, and trail running.

 


Spotlight at LLNL: Kathryn Mohror appointed as CSSE Deputy and CASC Division Leader at LLNL.

Kathryn Mohror

Long associated with the ASC Program, Dr. Kathryn Mohror has new formal responsibilities as Livermore Computing’s Deputy Leader for CSSE, and as the Division Leader (DL) for the Center for Applied Scientific Computing (CASC) in the Computing Principal Directorate at LLNL.  As CASC DL, Kathryn is responsible for providing technical leadership and management oversight for approximately 170 PhD-level researchers in the areas of applied mathematics, computer science, and data science for HPC.  The work conducted within CASC ensures that LLNL programs are provided with leading-edge computational innovations and solutions.  Kathryn brings to the position a broad technical background coupled with extensive management experience.  In her CSSE role as Deputy to Matt LeGendre, she will continue to lead Scalable Checkpoint/Restart (SCR) efforts that are critical to the El Capitan input/output (I/O) model.  Kathryn’s technical work has focused primarily on I/O and programming models and tools for advanced computing, including leadership of the SCR and UnifyFS projects at LLNL.  Her expertise has earned her the Distinguished Member of Technical Staff distinction at LLNL as well as many external honors.

 

Spotlight at SNL: Welcome Lance Hutchinson, new FOUS Exec

Lance Hutchinson

On March 22, 2025, Lance Hutchinson transitioned in a lateral move to lead group 9320, HPC and Mission Computing Capabilities.  He will also serve as the FOUS Subprogram Lead within the ASC program.  Lance has been an integral part of the Advanced Solutions Engineering group within SNL’s Cybersecurity and Mission Computing Center, where he has played a vital role in delivering critical capabilities in applied computer science and cybersecurity research across multiple program areas.  He holds degrees in computer engineering with minors in business administration and mathematics from the University of Nevada – Reno, and completed master’s studies in electrical and computer engineering at the University of Florida.

 


NNSA LDRD/SDRD Quarterly Highlights

SNL LDRD: Vanishing atoms can ruin quantum calculations. Scientists have a new plan to locate leaks.

Figure 11: Matthew Chow, center, and Bethany Little discuss with Yuan-Yu Jau, off camera, the first practical way to detect atom loss for neutral-atom quantum computing at SNL (Photo by Craig Fritz).

Atoms carrying information inside quantum computers, known as qubits, sometimes vanish silently from their posts.  This problematic phenomenon, called atom loss, corrupts data and spoils calculations.  But SNL and the University of New Mexico have, for the first time, demonstrated a practical way to detect these “leakage errors” on neutral-atom platforms.  This achievement removes a major roadblock for one branch of quantum computing, bringing scientists closer to realizing the technology’s full potential.  Many experts believe quantum computers will help reveal truths about the universe that are impossible to glean with current technology.

“We can now detect the loss of an atom without disturbing its quantum state,” said Yuan-Yu Jau, SNL Atomic Physicist and PI of the experiment team.  In a paper recently published in the journal PRX Quantum, the team reports its circuit-based method achieved 93.4% accuracy.  The detection method enables researchers to flag and correct errors.  Read more in the SNL LabNews article.

NNSS SDRD: Replacing photomultiplier tubes with avalanche photodiode arrays.

Figure 12: New automated test setup (Image credit: NNSS)

Principal Investigator James Mellott believes that solid state is the future of radiation detection as photomultiplier tubes become increasingly obsolete in commercial industry.  James and his team have partnered with the University of Nevada, Las Vegas, to design an avalanche photodiode array and read-out circuitry that are fast enough to measure prompt radiation (read more in the NNSS article).

 

Dr. Amber Guckes of NNSS receives prestigious presidential honor.

Figure 13: Dr. Amber Guckes (Image credit: NNSS)

NNSS Technical Manager Amber Guckes was recently recognized with a Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the U.S. government on outstanding early career scientists and engineers.  Amber was honored for her research on developing next-generation current-mode radiation detectors for Stockpile Stewardship applications, including her work on a 2019 SDRD project (for more information, see the NNSS article).

 

Pantex PDRD: Pioneering a new approach to additive manufacturing.

Figure 14: An artifact build produced by the Pantex Plant’s Technology Development and Hardin-Simmons University team using AM and promising ML techniques (Image credit: Pantex).

The additive manufacturing (AM) process has presented some inherent challenges, for instance, voids that develop in the laser-melted build of each layer.  These voids go undetected until the part is tested and fails.  With this failure, the part must be discarded, and the process must begin again.  This results in wasted materials, energy to run the equipment, and production time.  The Pantex Plant’s Technology Development team, composed of committed engineers, scientists, and technicians who support production, security, and technical infrastructure improvements, started with a question: what if?  Working in partnership with Hardin-Simmons University, the Pantex team developed an algorithm using ML software to help detect AM defects using data from photodiodes and computed tomography (CT) scans.  Read more in the Pantex article.

 


Questions? Comments? Contact Us.

ASC Assistant Deputy Administrator: stephen.rinehart [at] nnsa.doe.gov (Dr. Stephen Rinehart)

ASC Deputy Assistant Deputy Administrator: thuc.hoang [at] nnsa.doe.gov (Thuc Hoang)

Program Director for Computing: simon.hammond [at] nnsa.doe.gov (Si Hammond)

Program Director for Simulation: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)

  • Integrated Codes: james.peltz [at] nnsa.doe.gov (Dr. James Peltz)
  • Physics and Engineering Models: robert.spencer [at] nnsa.doe.gov (Robert Spencer)
  • Verification and Validation/PSAAP/CSGF: david.etim [at] nnsa.doe.gov (David Etim)
  • Capabilities for Nuclear Intelligence: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)
  • Computational Systems and Software Environment: simon.hammond [at] nnsa.doe.gov (Si Hammond), sara.campbell [at] nnsa.doe.gov (Sara Campbell)
  • Facility Operations and User Support: michael.lang [at] nnsa.doe.gov (K. Mike Lang)
  • LDRD/SDRD: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)