NNSA


OFFICE OF ADVANCED SIMULATION AND COMPUTING AND INSTITUTIONAL R&D PROGRAMS

February 2026 ASC Quarterly Highlights
 
 
The Advanced Simulation and Computing (ASC) program delivers leading-edge computer platforms, sophisticated physics and engineering codes, and uniquely qualified staff to address a wide variety of stockpile issues in design, physics certification, engineering qualification, and production.  The Laboratory-Directed Research and Development (LDRD) and Site-Directed Research and Development (SDRD) programs fund leading-edge research and development central to the U.S. Department of Energy (DOE) national laboratories’ core missions.

Quarterly Highlights | Volume 9, Issue 1 | February 2026

Welcome to the first 2026 issue of the ASC newsletter, published quarterly to socialize the impactful work performed by the National Nuclear Security Administration (NNSA) laboratories and our other partners.  This edition begins with Los Alamos National Laboratory’s (LANL’s) new approach to enhance transparency for artificial intelligence (AI) literature mining performed by large language models (LLMs), allowing researchers to efficiently extract insights from massive scientific corpora that would be impossible to process manually.  Other featured highlights in this edition include:

  • Sandia National Laboratories’ (SNL’s) release of its Next-Generation Workflow tool, which accelerates the data labeling process, facilitates effective recall of historical results, and increases the reliability of the solutions provided, enabling analysts to address critical design and qualification questions.
  • Lawrence Livermore National Laboratory’s (LLNL’s) hosting of the annual High-Performance Storage System (HPSS) User Forum this past November, with emphasis on “disruptive tech” for exascale and AI needs.
  • LANL’s new AI tools to enhance production for national security: AnnoMate and MicroSentryAI.

Please join me in thanking the professionals who delivered the achievements highlighted in this newsletter and on an ongoing basis, all in support of our national security mission.

Dr. Stephen Rinehart
Assistant Deputy Administrator, ASC


LANL citation networks lend transparency to AI literature mining.

LANL has developed a new approach to enhance transparency for literature mining performed by LLMs.  Literature mining with LLMs allows researchers to efficiently extract insights from massive scientific corpora that would be impossible to process manually—discovering hidden patterns, identifying cross-disciplinary connections, and synthesizing findings scattered across thousands of publications in minutes rather than months of human reading.  Although verifying that a citation exists is straightforward, determining whether the citations provided by an LLM are the most relevant and authoritative for a given topic remains a substantial challenge, especially for users without deep prior familiarity with the citation database.  Accuracy is essential for weapons literature mining due to the consequences of using a hallucinated citation to inform an important decision.

Figure 1: A citation network for inertial confinement fusion. Each cluster corresponds to a distinct subfield of research.

To address these transparency challenges, LANL has integrated retrieval-augmented generation (RAG) with citation network visualization.  RAG combines an LLM with a retrieval system, allowing the model to access relevant information from a curated set of scientific articles and generate responses anchored to those sources.  The citation network maps citations into topic-based clusters, visually revealing a citation’s relevance to a particular subject through its position and citation count (Figure 1).  Cluster timelines demonstrate the evolution of technical discourse over time (Figure 2).
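The constraint step described above can be sketched generically.  The following minimal illustration (hypothetical data and function names, not LANL’s implementation) filters an LLM’s recommended citations against a curated citation network, rejecting anything not in the network and attaching cluster and citation-count context:

```python
# Minimal sketch (hypothetical data and names, not LANL's implementation):
# constrain an LLM's recommended citations to those present in a curated
# citation network, reporting each surviving citation's cluster and
# in-network citation count for context.
from collections import defaultdict

# Toy citation network: paper -> papers it cites (illustrative IDs, not real).
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
    "D": ["C", "B"],
}
# Topic clusters, assumed produced by a separate community-detection step.
clusters = {"A": "ignition", "B": "ignition", "C": "ignition", "D": "diagnostics"}

# In-network citation counts (how often each paper is cited).
cited_count = defaultdict(int)
for src, refs in citations.items():
    for ref in refs:
        cited_count[ref] += 1

def vet_recommendations(recommended):
    """Keep only citations that exist in the network; attach context."""
    vetted = []
    for paper in recommended:
        if paper in citations:  # transparency: hallucinated citations are dropped
            vetted.append({
                "paper": paper,
                "cluster": clusters.get(paper, "unclustered"),
                "times_cited": cited_count[paper],
            })
    return vetted

# "E" is not in the network (a hallucination) and is filtered out.
print(vet_recommendations(["C", "E", "D"]))
```

A user can then inspect the surviving citations’ clusters in the network visualization, as the article describes for the “ignition campaign” example.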

Figure 2: Timeline for the ignition campaign cluster. Each cluster in Figure 1 can be visualized as a timeline of publications with areas of focus indicated by color. This enables rapid identification of state-of-the-art, landmark publications, topic trends, and the context surrounding a particular publication.

Figures 1 and 2 illustrate different views of an inertial confinement fusion (ICF) citation network in which seven clusters emerged.  The ICF citation network can be used by a RAG workflow to constrain an LLM’s recommended citations to only those that exist in the network, thereby guiding the LLM’s citation recommendations in a fully transparent manner.  For example, if the user is interested in a particular “ignition campaign” citation recommended by the LLM, then they can easily inspect the citation network to understand the larger context of that citation. 

LANL’s new approach provides researchers with the necessary context to validate LLM outputs, enabling scientists to maintain scientific rigor in high-stakes research environments and ensure the responsible integration of these powerful AI tools into national security applications (LA-UR-25-31498).

 


SNL’s Next-Generation Workflow automates data provenance.

Automated analysis workflows are key to rapid, accurate computational simulation, but creating one can be a complex and error-prone endeavor, leading to inefficiencies, compromised quality, and challenges in collaboration and repeatability.  The SNL ASC program is addressing these challenges by standardizing processes, encapsulating best practices, and providing robust traceability.  The first release of a new automatic provenance recording capability marks a significant advancement, offering a comprehensive pedigree for all workflow data.  This data pedigree, captured in an industry-standard machine-readable format, not only provides essential context regarding the quality, trustworthiness, and history of each data element, but also facilitates effective recall of historical results and will be useful for processing via AI.  SNL analysts addressing critical Nuclear Deterrence (ND) design and qualification questions can now include a detailed pedigree, enabling future verification of how key results were derived.  This enhancement reduces rework while increasing the transparency and reliability of the solutions provided (SAND2025-14268M).
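As an illustration of the idea, the sketch below emits a machine-readable pedigree record for a single workflow step.  The newsletter does not name SNL’s format, so this uses a W3C PROV-inspired JSON structure purely as a stand-in; all names and fields here are hypothetical:

```python
# Illustrative sketch only: a W3C PROV-inspired JSON pedigree record for one
# workflow step. The actual SNL format is not named in the article; every
# name and field below is a hypothetical stand-in.
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(step_name, inputs, outputs, tool, version):
    """Emit a machine-readable pedigree record for one workflow step."""
    def digest(payload):
        # Content hash ties each data element to an immutable identity.
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {
        "activity": step_name,
        "agent": {"tool": tool, "version": version},
        "used": [{"id": k, "sha256": digest(v)} for k, v in inputs.items()],
        "generated": [{"id": k, "sha256": digest(v)} for k, v in outputs.items()],
        "endedAtTime": datetime.now(timezone.utc).isoformat(),
    }

rec = record_provenance(
    "mesh_refine",                                 # hypothetical workflow step
    {"coarse_mesh": [1, 2, 3]},                    # inputs (toy data)
    {"fine_mesh": [1, 1.5, 2, 2.5, 3]},            # outputs (toy data)
    tool="demo_mesher", version="0.1",
)
print(json.dumps(rec, indent=2))
```

Chaining such records across steps is what yields the end-to-end pedigree the article describes, and the hashes allow future verification that a result was derived from the claimed inputs.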

 


LLNL hosts the High-Performance Storage System (HPSS) User Forum with emphasis on “disruptive tech” for exascale and AI needs.

Figure 3: Participants at the HPSS User Forum, hosted at LLNL’s University of California Livermore Collaboration Center.

HPSS remains one of the DOE’s longest running multi-laboratory and industry software development collaborations.  Today, some of the world’s largest data-intensive organizations rely on HPSS software to manage hundreds of petabytes of archival data, and the HPSS User Forum (HUF) plays a key role in aligning deployment sites and development sites on future direction.  This year’s theme, “Disruptive Tech,” highlighted the evolving landscape of exascale and AI-driven data storage, with sessions on DevOps, containerization, AI and machine learning, data-centric computing, digital engineering, and cloud integration.  A keynote by Brian Spears, Director of LLNL’s AI Innovation Incubator and Technical Director of DOE’s Genesis Mission, focused on the intersection of AI and storage.  One of the most valuable outcomes was the candid discussion of feature requests and bug fixes that will further harden the HPSS codebase and adapt it to future challenges, in service of the national interest and the exabytes of DOE data managed by HPSS.

LLNL hosted the HUF in early November, marking the first time in the collaboration’s 33-year history that the global HPSS community convened at LLNL, one of the co-founding HPSS development laboratories.  The event brought together 58 participants from research, government, and academia, with most attending in person and some joining remotely.  The meeting was designed to foster deep technical engagement among HPSS deployment specialists, development teams, and site administrators through interactive polling, technical sessions, and informal networking events (LLNL-ABS-2015977).

 


LANL's AI tools enhance production for national security.

Figure 4: AnnoMate – an inspector-driven image-labeling tool used in production and quality control. The figure illustrates how inspectors interact with the tool to annotate defects and how these annotations feed back into the model-improvement loop.

Bowtie production is a critical component of LANL’s stockpile stewardship mission.  Ensuring the quality and fit-for-purpose integrity of these precision parts is a demanding and time-intensive process, often leading to inspector fatigue and variability across assessments.  To address these challenges, LANL developed two complementary tools that blend AI-driven insights with human expertise.

AnnoMate is an inspector-driven image-labeling platform designed to capture “inspector rationale” for part rejection.  The platform allows inspectors to draw precise masks on images of manufactured parts to mark the location and type of defects (e.g., chips, scratches, gouges).  By capturing the inspector’s rationale directly on the image, AnnoMate creates consistent, high-quality annotations that (1) support decisions about whether parts should be accepted or rejected, (2) enable inspectors to compare analyses and train new personnel against “gold standard” examples, and (3) generate new labeled data that continuously improves the applied machine-learning models used for automated defect detection.  These curated annotations form a high-quality dataset that strengthens model accuracy and supports continuous improvement.

 

Figure 5: MicroSentryAI – an AI-powered inspection assistant integrated within the AnnoMate workflow. In the figure, the original inspector image appears on the left, while the right panel shows the same image overlaid with the model’s heat map and predicted accept/reject label.

MicroSentryAI is an AI-powered inspection assistant featuring built-in classification and localization models with explainability imagery (e.g., Class Activation Maps).  The tool visually highlights regions of concern (e.g., “the model suggests rejection due to this area”) while allowing inspectors to make the final decision.  The model analyzes each image of a manufactured part and produces two outputs: (1) a predicted classification indicating whether the part is likely acceptable or should be rejected, and (2) an explainability heat map that highlights the specific regions the model considers most responsible for that prediction.  This helps inspectors quickly identify potential defects, understand why the model made a particular recommendation, and decide whether they agree with it.  Inspector feedback is captured and used to strengthen the model through an active-learning loop, improving model performance with each use.

Figure 6: Continuous improvement human-in-the-loop workflow: The inspection process combines human expertise with automated AI suggestions. Inspectors label examples using AnnoMate, the AI learns from these examples, and MicroSentryAI provides increasingly accurate guidance in future inspections. Every new decision, whether agreeing or disagreeing with the AI, feeds back into the system, allowing continuous learning and better performance over time.

The AnnoMate tool has been successfully delivered to inspectors, and new data from the Bowtie team are anticipated.  Once received, these data will be integrated to further refine and enhance the underlying model.  The current results are promising, and the system will continue to improve as the team adopts and actively uses the tools developed.
Together, these capabilities form a human-in-the-loop, AI-assisted inspection ecosystem that improves accuracy, consistency, and throughput in Bowtie production.  Beyond inspection, the system also enhances training, enabling performance evaluation, inter-inspector comparison, and fatigue trend analysis—a significant advancement in intelligent, explainable manufacturing quality assurance at LANL (LA-UR-26-20157).

 


Three Laboratories: One AI Model.

Figure 7: One job on three systems — SNL, LANL, and LLNL’s data are kept in three different places. A training run builds one model using those three different systems. Hops (above) is one of the SNL systems in the project (photo by Craig Fritz).

A significant milestone has been not only reached but exceeded in the effort to advance AI for national security.  Over the past year, SNL, LANL, and LLNL have been working to build a federated AI model as a pilot project, and they now have a prototype.  The project used NVIDIA’s NVFlare open-source federated learning software to orchestrate the training process.  At each phase, or epoch, of the training process, the software exchanges the updated weights, but not the data, among the three labs so that the weights can be averaged into a single model.

Figure 8: Collaboration in federated training — Once the data was processed, researchers at all three labs opened remote access for communication and started the client and server processes. The NVFlare software was used for the training, and the final model was then tested by researchers for output quality (graphic by Ray Johnson).

The averaged weights are then sent back to each lab so the next epoch of training can begin.  Each of the Tri-Labs possesses unique datasets that are not easily shareable, yet they hold valuable insights for collaboration in critical mission areas.  Despite the challenges of training a language model on three geographically distributed systems, the research team has successfully demonstrated a prototype that proves a shared model is possible while protecting each laboratory’s unique data.  Through the federated learning approach, the Tri-Labs have proven that they can collaboratively train the model by sharing only the “weights,” the parameters that represent the model’s learning, without exchanging the datasets themselves.  The ASC program intends to use the federated AI model developed here to enable the Genesis Mission, a national initiative led by the DOE and its 17 National Laboratories to build the world’s most powerful AI scientific platform to accelerate discovery, strengthen national security, and drive energy innovation.  Read the full news release (SAND2026-16143M).
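The weight-averaging loop at the heart of federated learning can be sketched in a few lines.  This toy example (not the Tri-Lab NVFlare pipeline) trains a one-parameter model across three private datasets, exchanging only the weights between epochs:

```python
# Minimal federated-averaging sketch (a toy analogue, not the Tri-Lab NVFlare
# pipeline): each site trains locally on private data, then only the weights
# are averaged; raw data never leaves a site. The model is a 1-D linear fit.
def local_update(w, data, lr=0.01, steps=50):
    """One local epoch of gradient descent on y = w*x (data stays on site)."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Three private datasets, all drawn from roughly y = 3x (illustrative numbers).
site_data = {
    "lab_a": [(1.0, 3.1), (2.0, 5.9)],
    "lab_b": [(1.5, 4.4), (2.5, 7.6)],
    "lab_c": [(0.5, 1.6), (3.0, 9.0)],
}

w_global = 0.0
for epoch in range(10):
    # Each site refines the shared weight on its own data ...
    local_ws = [local_update(w_global, d) for d in site_data.values()]
    # ... and only the weights are exchanged and averaged into one model.
    w_global = sum(local_ws) / len(local_ws)

print(round(w_global, 2))  # converges near 3.0
```

The averaged weight reflects all three datasets even though no site ever saw another site’s data, which is the property that makes the approach attractive for sensitive mission data.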

 

Figure 9: Bridging distances — The model is named Chandler, after a city in Arizona and a symbolic center point between all three laboratories. The name channels the spirit that though the data may come from three laboratories miles apart, there is a central meeting place in this model where collaboration and innovation can happen (graphic by Ray Johnson).

 


LANL develops a multi-scale modeling framework for predicting helium bubble evolution in aged plutonium.

Understanding and quantifying materials aging is a significant challenge of nuclear stockpile stewardship.  Among the many physical processes contributing to the aging of plutonium (Pu)-alloy components, the generation, accumulation, and migration of helium (He), a byproduct of radioactive decay, is of particular interest.  Over time, He forms nanometer-scale bubbles that coalesce, leading to swelling, embrittlement, and potential changes in material properties.

Figure 10: (a) Schematic of the change in bubble diffusivity (D) as a function of bubble radius (r) for volume (vol) and surface (surf) diffusion mechanisms implemented in the mesoscale bubble merger code. (b) Representation of these mechanisms at the atomistic scale. He atoms are shown in green, Pu atoms in the bulk in blue, and Pu atoms at the bubble surface in red.

At LANL, the ASC program is developing a multi-scale modeling framework to predict how He bubbles evolve with age and temperature in Pu alloys.  The framework bridges atomistic and mesoscale simulations, allowing researchers to connect fundamental defect structures and diffusion mechanisms to macroscopic material behavior.

He bubble migration in metals occurs through three main pathways: volume diffusion, surface diffusion, and evaporation-condensation.  In Pu alloys, the first two are likely dominant.  Both mechanisms involve diffusion of the metal atoms from one side of the bubble to the other through the metal matrix (volume) or along the metal surface exposed to the gas (surface) in the bubble (Figure 10). LANL’s approach to understand the migration and coalescence of He bubbles uses molecular dynamics (MD) simulations in conjunction with analytical evaluation to quantify the self- and surface-diffusion coefficients for He bubbles as a function of radius.  Because the complex electronic structure of Pu makes it notoriously difficult to model, the team developed specialized interatomic potentials to accurately capture metal-He interactions across multiple phases. These atomistic models then provide the critical input data to the mesoscale Bubble Merger code developed at LANL, enabling a predictive understanding of bubble migration and coalescence over longer timescales.
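For context, classical bubble-migration theory predicts that surface-diffusion-controlled bubble mobility falls off faster with radius (D proportional to r⁻⁴) than volume-diffusion-controlled mobility (D proportional to r⁻³).  The toy sketch below illustrates only these textbook scalings; the prefactors are arbitrary and are not LANL’s fitted values:

```python
# Toy illustration only: classical bubble-migration theory gives
# D_surf(r) ~ r^-4 and D_vol(r) ~ r^-3. Prefactors here are arbitrary
# (not LANL's fitted values) and units are nominal.
def d_surface(r, c_s=1.0):
    return c_s / r**4   # surface diffusion: steeper falloff with radius

def d_volume(r, c_v=1.0):
    return c_v / r**3   # volume diffusion: falls off more slowly

for r in [1.0, 2.0, 5.0, 10.0]:
    print(f"r={r:5.1f}  D_surf={d_surface(r):.2e}  D_vol={d_volume(r):.2e}")

# With equal prefactors the two mechanisms cross at r = 1: below it the
# surface mechanism is faster, above it the volume mechanism dominates.
```

This radius dependence is why a mesoscale code needs the MD-informed D(r) curves of Figure 10(a) rather than a single diffusion constant.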

Figure 11: (a) Diffusivities of He bubbles informed from MD simulations compared to data in the literature [1-3]; (b) Bubble growth predicted by the mesoscale Bubble Merger code informed by diffusivities from MD simulations and literature data.


Preliminary results (Figure 11) reveal that bubble diffusivities span multiple orders of magnitude, with a complex dependence on phase, bubble radius, and morphology, highlighting the intricate interplay of mechanisms driving microstructural evolution in aged Pu alloys.  As Figure 11(a) shows, MD simulations identify multiple plausible mechanisms that can contribute to diffusivity of He bubbles in the δ-phase of a Pu alloy, with volume diffusion and diffusion of faceted bubbles comparing favorably to experimental data.  By integrating atomic-scale physics, the mesoscale Bubble Merger model qualitatively predicts the phase-dependence [4] of He bubble growth (Figure 11(b)) at spatial densities and timescales inaccessible to atomic-scale modeling.  This framework provides an essential predictive tool for assessing long-term performance of Pu alloy components and guiding future surveillance and manufacturing efforts.  LANL’s future work will focus on introducing additional physics into the mesoscale model and on algorithmic developments that enable quantitative comparisons to experimental data (LA-UR-26-20157).

[1]    W.Z. Wade, D.W. Short, J.C. Walden, J.W. Magana, Self-diffusion in plutonium metal, Metallurgical Transactions A 9(7) (1978) 965-972.
[2]    J.D. Kress, J.S. Cohen, D.P. Kilcrease, D.A. Horner, L.A. Collins, Quantum molecular dynamics simulations of transport properties in liquid and dense-plasma plutonium, Physical Review E 83(2) (2011) 026404.
[3]    D.W. Wheeler, P.D. Bayer, Evaluation of the nucleation and growth of helium bubbles in aged plutonium, Journal of Alloys and Compounds 444-445 (2007) 212-216.
[4]    D.W. Wheeler, P. Roussel, Review of the behavior of helium bubbles in aged plutonium and their influence on material properties, Journal of Vacuum Science & Technology A 43(2) (2025) 020802.


LLNL receives four HPCwire awards for exascale supercomputing and open-source leadership.

LLNL was recognized with four prestigious awards from HPCwire at the 2025 International Conference for High Performance Computing, Networking, Storage and Analysis (SC25) in St. Louis.  Among the honors, LLNL received the Editors’ Choice Award for Top Supercomputing Achievement, celebrating the launch of El Capitan, the world’s fastest supercomputer and the first exascale system for the NNSA.  El Capitan set a new standard in computational performance, continuing to hold the No. 1 position on the TOP500 list and achieving a rare “triple crown” by leading three major benchmarks for both traditional and AI workloads.

Figure 12: LLNL’s Spack team received the Editors’ Choice Award for Best HPC Programming Tool or Technology for the third year in a row at SC25. Pictured are Spack team members Alec Scott, Caetano Melone, Kathleen Shea, Todd Gamblin, Tom Tabor (HPCWire presenter), Phil Sakievich (SNL), Greg Becker, John Gouwar (Northeastern University).


In addition to hardware achievements, LLNL and its collaborators earned the Readers’ Choice Award for Best Use of High-Performance Computing (HPC) in Physical Sciences for their groundbreaking real-time tsunami forecasting digital twin. Powered by El Capitan, this simulation processes ocean-floor data in under 0.2 seconds, enabling rapid and accurate tsunami warnings—a feat that is about 10 billion times faster than previous methods. This innovative work is also a finalist for the Gordon Bell Prize, underscoring the life-saving potential of exascale computing in critical applications.

LLNL’s leadership in open-source software was also highlighted, with the Lab’s Spack package manager winning the Editors’ Choice Award for Best HPC Programming Tool or Technology for the third year in a row. The latest Spack 1.0 release marks a major milestone in stability and reproducibility for HPC software environments. Additionally, the Readers’ Choice Award for Best HPC Collaboration recognized the High Performance Software Foundation (HPSF), where LLNL scientists play key roles in advancing open-source HPC projects. These awards underscore LLNL’s commitment to innovation, collaboration, and the advancement of HPC for science and national security (LLNL-ABS-2015975).


SNL improves thermal radiation modeling in participating media with graphics processing unit (GPU)-powered Monte Carlo method.

A new GPU-based Monte Carlo ray-tracing method has been developed to model thermal radiation in participating media, such as fires.  This method significantly speeds up simulations, potentially doubling the efficiency of pool fire modeling compared to traditional methods.  It addresses accuracy issues found in existing techniques, particularly for local heat flux measurements, by eliminating numerical uncertainties.  This capability is crucial for accurately simulating thermal radiation effects in various applications, including combustion rates in pool fires and laser manufacturing.  The method is being integrated into an upcoming ASC project focused on re-entry systems in hostile environments.  Discussions with ASC’s SIERRA team will identify a path forward for making this a stand-alone tool available for integration into the ASC SIERRA framework (SAND2025-14268M).
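The statistical behavior underlying such methods can be illustrated with a toy path-sampling estimate (a generic sketch, not SNL’s production ray tracer): rays are attenuated through a homogeneous absorbing slab, and the Monte Carlo transmissivity is checked against the Beer-Lambert law:

```python
# Generic Monte Carlo sketch for a participating medium, not SNL's method:
# rays crossing a homogeneous absorbing slab survive if their sampled free
# path exceeds the slab depth; the estimate is checked against Beer-Lambert.
import math
import random

def mc_transmissivity(kappa, length, n_rays, seed=0):
    """Fraction of rays crossing a slab of given optical depth."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_rays):
        # Sample a free path from the exponential attenuation law.
        free_path = -math.log(1.0 - rng.random()) / kappa
        if free_path >= length:
            survived += 1
    return survived / n_rays

kappa, length = 0.5, 2.0                  # absorption coefficient, slab depth
exact = math.exp(-kappa * length)         # Beer-Lambert: exp(-1) ~ 0.368
for n in (1_000, 100_000):
    est = mc_transmissivity(kappa, length, n)
    print(f"n={n:>7}: estimate={est:.4f}  exact={exact:.4f}")
```

Statistical noise shrinks as ray counts grow, which is the same convergence behavior the 10M-ray versus 100M-ray comparison in Figure 13 demonstrates at production scale, and it is this embarrassingly parallel ray loop that maps so well onto GPUs.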

 

Figure 13: Contours of incident thermal radiative flux to the pool surface for a 2-m-diameter hydrocarbon pool fire. Top images: the GPU-based Monte Carlo ray-tracing method using 10M (left) and 100M rays (right). Additional rays improve the local convergence, and integrated fluxes indicate good overall convergence. Bottom: the standard Discrete Ordinates (DO) approach using an 8th-order quadrature. For DO, the integrated flux is approximately 16% higher than values computed with the Monte Carlo method. Ray effects can be observed in the DO results and are a known limitation of the method.

 


LANL’s radiation transport project adapts to unique features of El Capitan to improve performance.

Figure 14: Jayenne runtime comparisons on different supercomputing systems and configurations.

The Implicit Monte Carlo (IMC) method, used to accurately model nonlinear thermal radiative transfer in high-energy-density physics simulations, is often the single most expensive component of a multiphysics simulation.  In IMC, particles represent radiation energy, and their interactions with material are modeled stochastically.  Models of inertial confinement fusion (ICF) experiments are especially expensive because they require very high particle counts to adequately resolve the radiation field.

To address this high cost, the LANL IMC package Jayenne was optimized for performance on the El Capitan system.  Jayenne was previously ported to run on GPUs for the Sierra supercomputer around 2020.  The resulting implementation showed excellent speedup over contemporary central processing unit (CPU) technology (up to six times node-for-node speedup on some problems). With the arrival of El Capitan at LLNL, the Jayenne team worked to further improve the package’s GPU capability and take advantage of El Capitan’s unique features. 

Figure 15: Throughput improvement for Jayenne IMC package on El Capitan.

Three notable improvements were made for El Capitan.  At the lowest level, El Capitan’s AMD MI300 GPUs have 64 threads that operate together (twice as many as Sierra GPUs).  With 64 threads, there is a higher chance of thread divergence (different threads executing different code).  To account for this, the team tuned an algorithm parameter in the GPU transport kernel to reduce thread divergence, offering a 15 percent speedup over the Sierra code version.  The MI300 also has a unified memory space, meaning the CPU and GPU can see all the memory without transfer operations.  The Jayenne team eliminated redundant memory allocations and modified GPU functions for single-memory-space execution, reducing copy operations and yielding a slight performance improvement of around 5 percent.  El Capitan further offers different GPU configurations, which were explored with and without oversubscribing the partitioned hardware with more than one message passing interface (MPI) task per logical GPU.  The best configuration yielded another 35 percent improvement (Figure 14).
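The divergence-reduction idea can be illustrated generically (an analogue, not Jayenne’s actual kernel change): when the 64 threads of a wavefront take different branches, execution serializes, so scheduling particles such that each group follows one code path recovers throughput.  The toy sketch below counts mixed-branch groups before and after sorting particles by their next event:

```python
# Illustrative analogue of divergence reduction, not Jayenne's kernel:
# count how many 64-wide thread groups contain a mix of branches when
# particles are scheduled in arrival order versus sorted by next event.
import random

WAVEFRONT = 64  # MI300 wavefront width noted in the article

def divergent_groups(events, width=WAVEFRONT):
    """Count groups whose threads do not all take the same branch."""
    groups = [events[i:i + width] for i in range(0, len(events), width)]
    return sum(1 for g in groups if len(set(g)) > 1)

rng = random.Random(1)
# Toy particle stream: each particle's next event is scatter or absorb.
events = [rng.choice(["scatter", "absorb"]) for _ in range(64 * 100)]

print("unsorted groups diverging:", divergent_groups(events))
print("sorted groups diverging:  ", divergent_groups(sorted(events)))
```

With random scheduling essentially every group mixes both branches; after sorting, at most one boundary group does, so nearly all wavefronts execute a single code path.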

Combining these optimizations improved Jayenne’s node-for-node performance to twice that on Sierra.  Compared to current CPU systems (e.g., Tycho and Crossroads), the performance is four times better node-for-node.  Given the higher memory capacity per node, a more general throughput improvement is observed, as shown in Figure 15.  With these code optimizations and El Capitan’s large memory capacity, Jayenne can solve more unknowns per unit time.  Further, the large, single memory space afforded by El Capitan also allows for the very high particle counts at which the GPUs are most effective.  This combination of speedup and high particle counts will aid ICF experiment designers in performing fast, accurate simulations (LA-UR-26-20157).


SNL advances modeling for reliable reentry systems.

Advanced modeling enhances safety and reliability of hypersonic reentry systems.

Ensuring the survivability, reliability, and accuracy of our nation’s hypersonic reentry weapon systems during explosive events is a key responsibility at SNL.  A new ASC capability to model reentry through blast-perturbed atmospheres has been developed.  This new model will bring more accuracy and credibility to the evaluation of weapon system requirements and the definition of bounding-case scenarios for environmental specifications.  While the effects of a blast-perturbed atmosphere have been modeled and accounted for in the past, the development of new, higher-fidelity modeling and simulation tools enables greater modeling accuracy to better inform design decisions for programs such as the W87-1 and the W93 (SAND2025-14268M).


Welcome Aboard...

LANL ASC program

Rob Aulwes

Rob Aulwes is the new Project Leader for the Performance Engineering team within the Integrated Codes (IC) subprogram at LANL.  Rob received his PhD in Applied Mathematics from the University of Iowa and joined the lab in 2002, working on LA-MPI.  Rob was also one of the original developers of Open MPI.  He served as the project leader for Institutional Computing as well as the team leader for the Future Architectures and Applications Team in CCS-7.  Rob has contributed to GPU porting efforts in the climate codes CICE and MPAS-Ocean, the latter effort being part of the E3SM Exascale Project.  He brings his GPU expertise to the Performance Engineering team to help code teams with their GPU porting efforts.

 

 

SNL ASC program

Adam Stephens and family

Adam Stephens is the sub-element lead for SNL’s Verification and Validation (V&V) Education and Outreach.  As such, his objectives are to discover the gaps in Nuclear Deterrence (ND) analysts’ understanding of V&V and Uncertainty Quantification (UQ) and to create and steward content and resources that effectively bridge those gaps.  Adam’s education includes a BS in Chemical Engineering from Texas Tech University (2002) and a PhD in Chemical Engineering from the University of Texas at Austin (2012).  Outside of work, Adam enjoys reading and roasting coffee, along with hiking with his wife and two children.

 

 

LLNL ASC program

Ramesh Pankajakshan

Ramesh Pankajakshan has been appointed Lead of LLNL’s El Capitan Center of Excellence (COE).  The El Capitan COE prepares users and applications for El Capitan-class hardware by enabling early access, providing technical guidance, and coordinating support.  Ramesh has served as Deputy Lead for the past two and a half years, effectively acting as the COE’s technical leader and working closely with AMD, HPE, Livermore Computing staff, and Tri-Lab code teams.  He brings deep expertise in preparing applications for new architectures and played a key role in porting and optimizing the SW4 code for Sierra and Frontier systems.  Before joining LLNL in 2016, he was a Research Professor whose modeling and simulation work for the Office of Naval Research (ONR), the National Aeronautics and Space Administration (NASA), and DOE covered naval hydrodynamics, Class 8 truck aerodynamics, solid rocket motors, and agent-based modeling.


Spotlight at LLNL: Judith (Judy) Hill Named Associate Program Director for Livermore Computing Systems and Environment and Deputy for High-Performance Computing.

On December 21, 2025, Judy Hill was appointed as LLNL’s Associate Program Director (APD) for Livermore Computing (LC) Systems and Environments in Strategic Deterrence’s Weapon Simulation and Computing (WSC) program and Deputy for High-Performance Computing (HPC) in the Computing Principal Directorate.

Judy Hill

 

In these complementary roles, Judy will lead and manage all aspects of LC’s mission to support HPC environments for the LLNL and NNSA mission, including classified and unclassified operations, services, procurements, and long-term strategy.  She will work closely with customers in the Weapons Program to ensure that programmatic objectives are achieved and that LC is meeting expectations, and she will be responsible for running the Multiprogrammatic and Institutional Computing (M&IC) program.  In addition to assuring that LC supports the vitality of HPC across LLNL, she will interact with federal program managers in the ASC program on relevant issues as well as with colleagues, internal management, and sponsor organizations internal and external to LLNL.  Judy’s role reports programmatically to the WSC Associate Director and organizationally to the Principal Associate Director for Computing.

Judy has successfully taken several line and program roles since joining LLNL as a computational scientist in 2021.  She leads the El Capitan Center of Excellence for LC, manages the Large-Scale Calculations Initiative for WSC, leads the LLNL Grand Challenge Program, and is a group leader in LC.  Prior to joining LLNL, she led the Scientific Computing Group at Oak Ridge National Laboratory (ORNL) and served as Program Manager for the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) Program at the Leadership Computing Facilities at ORNL and Argonne.  While at ORNL, Judy established and led the Exascale Computing Project (ECP) Application Integration effort aimed at leveraging existing application readiness efforts at three computing facilities for the ECP application projects.  Judy’s deep commitment to professional service and extensive engagements with the broader scientific community will help ensure LLNL remains at the forefront of innovation, collaboration, and influence in HPC.

Judy earned her Ph.D. in computational science and engineering from Carnegie Mellon University.  She succeeds Terri Quinn, who retired from LLNL after 40 illustrious years of service (LLNL-ABS-2015976).


NNSA LDRD/SDRD Quarterly Highlights

LANL LDRD: pRad and the future of stockpile stewardship.

LANL’s proton radiography (pRad) facility was created more than 25 years ago through a series of LDRD projects led by LANL physicist Chris Morris.  “It was the rebirth of my career,” says Morris.  “Really, it was part of the rebirth of the Lab.”  Now, after 25 years and nearly 1,000 experiments, pRad, as part of the Los Alamos Neutron Science Center (LANSCE), is the focus of a multiyear signature institutional commitment that will direct funding toward modernizing these key facilities, which have never been more critical to LANL’s mission.  In the wake of 1996’s Comprehensive Nuclear-Test-Ban Treaty, the U.S. was faced with the challenge of maintaining the safety and reliability of its nuclear stockpile without detonating live weapons.  Experimental facilities like pRad allow scientists to conduct multitudes of targeted tests on a smaller scale.  The pRad facility passes high-energy protons through an explosion to capture crucial data on how materials behave under extreme conditions, and the data it creates feeds the models that ensure America’s nuclear weapons remain reliable—without the need for full-scale detonations.  Read more in the LANL news highlight.

Figure 16:  LANL’s proton radiography experiment allows scientists to see how materials behave during detonations.  Watch this video to learn more about how pRad works (Image courtesy: LANL).

 


LLNL LDRD: Lab scientists win four 2025 R&D 100 awards.

Figure 17: Shown here in the primary mirror surface on a monolithic telescope — one of LLNL's four R&D100 awards — are reflections of Brian Bauman (left), the space hardware principal optical engineer and inventor of the monolithic telescope, and Frank Ravizza, the space hardware optical engineering lead (Courtesy image).

LLNL scientists and engineers have earned four awards among the top 100 inventions worldwide.  The trade journal R&D World Magazine recently announced the winners of the awards, often called the “Oscars of innovation,” which recognize new commercial products, technologies, and materials available for sale or license for their technological significance.  With this year’s results, LLNL has now collected a total of 186 R&D 100 awards since 1978.  Submitted through LLNL’s Innovation and Partnerships Office (IPO), these awards recognize the impact that LLNL innovation, in collaboration with industry partners, can have on the U.S. economy and the world.  Read more in the LLNL news highlight.

 


SNL LDRD: Protecting the grid with AI.

Figure 18: SNL cybersecurity expert Adrian Chavez, left, and computer scientist Logan Blakely work to integrate a single-board computer with their neural-network AI into the Public Service Company of New Mexico’s test site. This code monitors the grid for cyberattacks and physical issues (photo by Bret Latter).

Creating new capabilities to protect the electric grid from severe storms and advanced attackers is critical.  SNL researchers have developed brain-inspired, neural-network AI algorithms that detect physical problems, cyberattacks, or both simultaneously, and that can run on inexpensive single-board computers or existing smart grid devices.  “As more disturbances occur, whether from extreme weather or from cyberattacks, the most important thing is that operators maintain the function and reliability of the grid,” said Shamina Hossain-McKenzie, a cybersecurity expert and leader of the project.  Read more in the SNL news highlight.

 


Questions? Comments? Contact Us.

ASC Assistant Deputy Administrator: stephen.rinehart [at] nnsa.doe.gov (Dr. Stephen Rinehart)

ASC Deputy Assistant Deputy Administrator: thuc.hoang [at] nnsa.doe.gov (Thuc Hoang)

Program Director for Computing: simon.hammond [at] nnsa.doe.gov (Dr. Si Hammond)

Program Director for Simulation: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)

  • Integrated Codes: james.peltz [at] nnsa.doe.gov (Dr. Jim Peltz)
  • Physics and Engineering Models: robert.spencer [at] nnsa.doe.gov (Robert Spencer)
  • Verification and Validation/PSAAP/CSGF: david.etim [at] nnsa.doe.gov (Dr. David Etim)
  • Capabilities for Nuclear Intelligence: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)
  • Computational Systems and Software Environment: simon.hammond [at] nnsa.doe.gov (Dr. Si Hammond), sara.campbell [at] nnsa.doe.gov (Sara Campbell), cheri.hautala-bateman [at] nnsa.doe.gov (Dr. Cheri Hautala-Bateman)
  • Facility Operations and User Support: michael.lang [at] nnsa.doe.gov (K. Mike Lang)
  • LDRD/SDRD: anthony.lewis [at] nnsa.doe.gov (Anthony Lewis)