NNSA


OFFICE OF ADVANCED SIMULATION AND COMPUTING AND INSTITUTIONAL R&D PROGRAMS (NA-114)

 
 
The Advanced Simulation and Computing (ASC) program delivers leading-edge computer platforms, sophisticated physics and engineering codes, and uniquely qualified staff to address a wide variety of stockpile issues for design, physics certification, engineering qualification, and production. The Laboratory Directed Research and Development (LDRD) and Site-Directed Research and Development (SDRD) programs fund leading-edge research and development central to the U.S. DOE national laboratories’ core missions.

Quarterly Highlights |  Volume 7, Issue 2 | May 2024

Welcome to the May 2024 issue of the NA-114 newsletter, published quarterly to socialize the impactful work being performed by the National Nuclear Security Administration (NNSA) laboratories and our other partners.  This edition begins with a highlight from Los Alamos National Laboratory (LANL) on their development of a Super-Fibonacci measurement design that lets a coordinate-measuring machine obtain highly accurate measurements of a spheroidal surface and that proved more efficient and cost-effective for classifying defective spherical parts.  Other highlights include: 
 

  • SNL’s improvements to an existing turbulence model (inspired by high-fidelity simulations) provide increased accuracy for high-speed vehicle analysis over the wider design space required for future systems like the W93.
  • LLNL’s recent release of an improved opacity table for iron demonstrates the capabilities of their new opacity code, Opus.  Accurate opacities are crucial for designing and modeling experiments on NIF, OMEGA, and Z.
  • LANL’s new “Enduring Knowledge Base” (EKB) provides workers with rapid access to data and supports rapid innovation in artificial intelligence/machine learning (AI/ML) workflows.

The banner image above shows contractors at LLNL working on a rack of the El Capitan supercomputer in late 2023.  As El Capitan deployment continues, 128 nodes of the same AMD MI300A-based node architecture are now available to Tri-Lab users via LLNL’s “RZAdams” computer, which can be used for testing Tri-Lab applications targeting El Capitan.  You can also read more about the “Road to El Capitan” in LLNL’s multi-part series published online.
Please join me in thanking the professionals who delivered the achievements highlighted in this newsletter and on an ongoing basis, all in support of our national security mission. 
 

Thuc Hoang
NA-114 Office Director


LANL developed a new Super-Fibonacci design to decrease the time required to obtain highly accurate measurements of spheroidal or hemi-spheroidal surfaces with a coordinate-measuring machine.   

Optimized sampling supports improved classification and rigorous quantification of uncertainty and risk. 

Figure 1: A two-layer, Super-Fibonacci design

LANL uses a variety of measurement techniques to qualify manufactured components to meet stringent standards of acceptance.  One such technique uses the coordinate-measuring machine (CMM) to obtain highly accurate measurements of a surface.  These CMM measurements are costly and time consuming, often requiring five or more worker-hours to measure the surface of a relevant part.  The measurement task involves carefully moving the surface and the CMM stylus multiple times and waiting for the CMM stylus to slowly move along a predetermined and potentially lengthy path.  For this reason, optimized sampling plans coupled with statistical models are valuable tools for saving both time and money, while also allowing for improved classification and rigorous quantification of uncertainty and risk.  

LANL’s Super-Fibonacci design (Figure 1) for spheroidal or hemi-spheroidal surfaces has excellent statistical properties, allows for the measurement stylus to be lifted fewer times, and shortens the path length compared to current designs.  In a study of 500 simulated parts, these Super-Fibonacci designs were found to be just as effective as previous designs for classification of defective spherical parts, while being much more efficient and cost-effective.
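
The Super-Fibonacci construction itself is not spelled out here.  As a rough illustration of the underlying idea, the sketch below generates a closely related spherical Fibonacci lattice, which spreads points nearly uniformly over a hemisphere along a single spiral so that a stylus can visit them in order with few lifts and a short path; it is a stand-in under that assumption, not LANL’s design.

```python
import numpy as np

def fibonacci_hemisphere(n_points, radius=1.0):
    """Nearly uniform sampling points on a hemisphere along one spiral path.

    Illustrative stand-in for an optimized CMM sampling plan: the points
    lie on a single spiral, so the stylus can visit them in order with
    few lifts and a short total path length.
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.4 rad between successive points
    i = np.arange(n_points)
    z = 1.0 - (i + 0.5) / n_points                # heights from pole toward equator
    r = np.sqrt(1.0 - z**2)                       # ring radius at each height
    theta = golden_angle * i                      # spiral azimuth
    return radius * np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

# Example: a 200-point plan on a 50 mm hemispherical surface
points = fibonacci_hemisphere(200, radius=50.0)
print(points.shape)   # (200, 3)
```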

In addition to the improved designs, the use of statistical machine learning models (e.g., Gaussian processes) allows for probabilistic assessment of the surface of each part.  This approach could prove useful as an aid for experts in deciding which parts are defective.  This method also allows for sequential design methods that ensure that resources like time and money are spent as efficiently as possible.  (LA-UR-24-22756)
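
As a rough sketch of the statistical side, the example below fits a Gaussian process to simulated surface-deviation measurements and reports the probability that the deviation exceeds a tolerance at unmeasured locations.  The kernel, tolerance, and data are illustrative assumptions, not the LANL model.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Simulated CMM data: (x, y) stylus locations and surface deviations (mm)
X_train = rng.uniform(-1.0, 1.0, size=(40, 2))
y_train = 0.01 * np.sin(3.0 * X_train[:, 0]) + 0.002 * rng.standard_normal(40)

# Gaussian process surrogate of the deviation field (kernel choice is illustrative)
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-6),
                              normalize_y=True).fit(X_train, y_train)

# Probability that the deviation exceeds a 0.008 mm tolerance at candidate sites
X_new = rng.uniform(-1.0, 1.0, size=(500, 2))
mean, std = gp.predict(X_new, return_std=True)
p_exceed = 1.0 - norm.cdf((0.008 - mean) / std)
print("max pointwise exceedance probability:", p_exceed.max())
```

The candidate site with the highest exceedance probability (or largest predictive uncertainty) is a natural choice for the next measurement, which is the sequential-design idea described above.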

 

Figure 2: A statistical model is used to evaluate the probability that the tolerance threshold is exceeded somewhere on the surface. The model can also determine the optimal placement of future measurements.

 


SNL modifications to an existing turbulence model inspired by high-fidelity simulations offer increased accuracy for high-speed vehicle analysis.  These improvements support predictions over the wider design space required for future systems such as the W93.

Figure 3: The improved model predicts the boundary layer separation ahead of the flare observed in the experiment and also gives better estimates of the peak wall heat flux.

Current modernization programs such as W87-1 require accurate prediction of aerodynamic loadings to support development of component specifications.  Future systems, such as the W93, will require new models capable of predictions over a wider design space than legacy systems.  A newly modified version of an existing turbulence model offers increased accuracy when applied to high-speed vehicle analysis.  The new model was inspired by data gathered from recent high-fidelity simulations of such flows.  Reduction in aerodynamic prediction errors due to turbulence modeling will allow mod/sim to play a more prominent role in determining aero-mechanical environments, reducing the quantity of flight tests needed to qualify components. (SAND2024-02436M)

 


LLNL released an improved opacity table for iron demonstrating the capabilities of their new opacity code, Opus.  Accurate opacities are needed to model experiments on NIF, OMEGA, and Z.

Figure 4: Differences in the Rosseland mean opacities (weighted averages over photon energies) for iron between Opus and the older code, Tycho.

Radiative opacities are a key ingredient in radiation-hydrodynamics simulations, including those for designing and modeling experiments at NIF, OMEGA, and Z.  At the high temperatures reached in these experiments, radiation dominates the energy transport.  Accurate opacity calculations for these conditions are too computationally expensive to execute on-the-fly during simulations, so precalculated opacity tables are employed. 

The LLNL PEM team has systematically improved the table production workflow.  The new opacity code, Opus, and the scripts that run it work in tandem to improve LLNL’s table outputs and are under version control, allowing comparisons with historical models.  The parallelized scripts exercise the code over thousands of different inputs, identify errors and bugs (which are highlighted by new plotting tools), and converge the Opus calculations at each density and temperature point.  
LLNL recently released a new iron opacity table, following this prescription.  Figure 4 shows the differences in the Rosseland means (weighted averages over photon energies) between Opus and the older code, Tycho.  Atomic structure refinement changed the opacity by up to 30% at low densities and low temperatures, while a better numerical treatment of resonances changed the opacity by as much as 50% at high densities.  This analysis also provides greater temperature-density resolution (100x81 instead of 41x41) to reduce interpolation errors (not shown).  (LLNL-ABS-863296)
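
For reference, the Rosseland mean shown in Figure 4 is a harmonic average of the spectral opacity weighted by the temperature derivative of the Planck function.  The sketch below evaluates it numerically for a tabulated spectral opacity; the opacity values themselves are placeholders, not Opus or Tycho output.

```python
import numpy as np
from scipy.integrate import trapezoid

def rosseland_mean(nu, kappa_nu, T):
    """Rosseland mean: harmonic average of kappa_nu weighted by dB_nu/dT,
    the temperature derivative of the Planck function."""
    h = 6.62607015e-34   # Planck constant, J s
    k = 1.380649e-23     # Boltzmann constant, J / K
    x = h * nu / (k * T)
    weight = x**4 * np.exp(x) / np.expm1(x)**2   # dB_nu/dT up to factors that cancel
    return trapezoid(weight, nu) / trapezoid(weight / kappa_nu, nu)

# Placeholder spectral opacity on a frequency grid (illustrative shape only)
nu = np.linspace(1e15, 1e18, 4000)           # Hz
kappa_nu = 1.0e3 / (1.0 + (nu / 1e17)**3)    # cm^2 / g
print(rosseland_mean(nu, kappa_nu, T=1.0e6), "cm^2/g")
```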

 


LANL’s Weapons Program Data Governance Council created the “Enduring Knowledge Base” need-to-know category for workers and data.  

Enduring Knowledge Base (EKB) workers gain rapid access to EKB data, supporting rapid innovation in AI/ML workflows.

LANL ASC verification and validation efforts are predicated on access to accurate datasets generated by a wide variety of sources.  Need-to-know access to these datasets is typically set by a data source gatekeeper and often requires LANL ASC workers to contact each source to gain access.  This requires workers to (1) know that a source exists and (2) know who to ask for access to complete their mission-critical work.  

These data have remained difficult to access due to historical momentum and lack of data governance.  Until recently, there was no unified effort to create need-to-know categories based on worker roles and to connect those roles to the datasets required for ASC workflows.  

With efforts like the Common Modeling Framework (CMF) in mind, LANL’s Weapons Program Data Governance Council (WPDGC) has created the EKB need-to-know category for workers and data.  Workers granted EKB need-to-know automatically gain access to data that WPDGC has determined meet the EKB criteria.  The EKB framework then allows LANL ASC employees with EKB access to use application programming interface (API) connections to pull data into their workflows without requesting access based on the specific data source.  EKB allows for faster development of workflows, especially as larger and more diverse datasets become necessary for AI/ML workflows.  Weapons Research Services is working with WPDGC and subject matter experts to add more data into the EKB archive so that innovation in the AI/ML age can continue to strengthen America’s strategic stockpile efforts.  (LA-UR-24-21782)
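
The EKB interfaces are internal to LANL and not described further in the source.  Purely as a schematic of the workflow pattern, pulling a named dataset through one authenticated API call instead of negotiating access with each data owner, a hypothetical client might look like the sketch below; the endpoint, dataset name, and token handling are all invented for illustration.

```python
import os
import requests  # generic HTTP client; the endpoint below is hypothetical

EKB_URL = "https://ekb.example.gov/api/v1/datasets"   # placeholder, not a real URL
TOKEN = os.environ.get("EKB_TOKEN", "")               # credential supplied by the environment

def fetch_dataset(name: str) -> dict:
    """Pull a dataset by name; need-to-know is enforced server-side, so the
    workflow itself never requests access from an individual data source."""
    resp = requests.get(f"{EKB_URL}/{name}",
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=60)
    resp.raise_for_status()
    return resp.json()

# records = fetch_dataset("example-vv-dataset")   # feed directly into a V&V or ML pipeline
```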


SNL derived an analytical solution for calculating the intensity parameter describing the random vibration of a reentry vehicle.  This improves model fidelity and provides faster, more accurate simulations for environmental specifications of the W93 and W87-1. 

Figure 5: The new intensity parameter formulation accurately predicts the optimal solution for five example problems; this drastically reduces the amount of time to compute transitional loading.

During flight, a reentry vehicle will undergo a period in which the boundary layer transitions from laminar to turbulent behavior.  This transition region moves along the vehicle, producing pressure loads that can yield some of the most significant vehicle excitations.  Currently, a transition pressure loading model, SPOTS, exists for determining the random vibration of a reentry vehicle, but it is missing important physical phenomena, such as the moving transition front and the change in length of the transition region during flight.  These phenomena are not accounted for due to the computational cost associated with obtaining the intensity parameter (the expectation of the number of turbulent spots born per second) for these simulations.  An analytical solution has been derived for calculating the intensity parameter in the SPOTS model, which unlocks the ability to include the moving transition front and change in length in SPOTS.  This improves model fidelity and will provide a faster, more accurate predictive capability that can aid in the development of environmental specifications for reentry systems in the W93 and W87-1.  (SAND2024-00738M)
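
For background on where such an intensity parameter enters, the classical concentrated-breakdown intermittency model (shown below as a standard reference form, not the SPOTS formulation) expresses the fraction of time the flow is turbulent at a station x downstream of transition onset x_t in terms of the spot production rate:

```latex
% Narasimha-style intermittency distribution (background form only, not SPOTS):
% n is the spot production rate per unit span per unit time, sigma is the
% dimensionless spot propagation parameter, and U is the edge velocity.
\gamma(x) \;=\; 1 - \exp\!\left[-\,\frac{n\,\sigma\,(x - x_t)^{2}}{U}\right],
\qquad x \ge x_t .
```

An analytical expression for the spot birth rate removes the expensive numerical step of estimating it, which is what allows SPOTS to track a moving, length-varying transition front.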

 


The 128-node RZAdams computer is now available to the Tri-lab ASC program.  Users can now test their codes on a system with the same AMD MI300A-based node architecture as El Capitan.

Figure 6: The HPE/Cray system RZAdams serves as a highly valuable code development resource for applications preparing for the El Capitan exascale system being assembled at LLNL by Livermore Computing.

LLNL took delivery of the RZAdams computer hardware in January and released the system to early users in late February.  This system is the largest and best platform for code porting and initial optimization for El Capitan prior to the full system becoming available to users.  The system is significant: RZAdams has the second highest peak capability of any current Livermore Computing system and is the most capable unclassified system.  Overall, RZAdams edges out Lassen in performance capability but trails Sierra.  The system has 128 HPE/Cray Parry Peak nodes, each with 4 AMD MI300A production-release processors and 512 GB of HBM3 memory.  The system is currently configured with 2 login and 126 debug nodes plus associated infrastructure.  The nodes and infrastructure are essentially identical to the El Capitan compute nodes.  While purchased by LLNL, the system is currently delivering significant value to the Tri-Lab ASC program as a software development and testing resource for Tri-Lab applications targeting El Capitan.  RZAdams was available to users for the recent Center of Excellence (CoE) hackathon sessions held in New Mexico on March 19-21, and at LLNL on April 9-11.  
(LLNL-ABS-863293)

 


To aid ongoing B83 full-system solid mechanics model updates and verification efforts, SNL analysts developed tools to streamline model assembly and analysis execution operations.  This capability will increase analyst efficiency.

Several new features were included in this update: automated Sierra/Solid Mechanics (SM) input deck generation with enhanced syntax for multiple material models, nonlinear contact, and drop loading; and automated output options, such as image generation to confirm contact enforcement.  This capability streamlines the system model construction by efficiently verifying and joining various subassembly models in a hierarchical approach.  This process was previously used to update B83 Sierra/Structural Dynamics (SD) models and the expanded toolset will standardize model organization and facilitate sharing model information between the B83 SD and SM models.  The capability will increase analyst efficiency and support verification of SM modeling setup and analysis execution.  The new capabilities, which extend existing tools actively being used to manage other nuclear deterrence-related SD modeling and simulation analyses, are generalizable and may be leveraged to aid other SM modeling and simulation efforts.  (SAND2024-02436M)
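
Sierra/SM input syntax is not reproduced in the source.  The sketch below only illustrates the general pattern of assembling a system-level deck from verified per-subassembly templates; the block names, fields, and file names are invented for illustration.

```python
from string import Template

# Hypothetical subassembly block; real Sierra/SM syntax differs and is not shown here.
BLOCK = Template("""\
begin subassembly $name
  mesh file = $mesh
  material  = $material
  contact   = $contact
end subassembly $name
""")

def build_deck(subassemblies, loading="drop"):
    """Join independently verified subassembly blocks into one system-level deck."""
    blocks = [BLOCK.substitute(**sub) for sub in subassemblies]
    return f"# loading case: {loading}\n" + "\n".join(blocks)

deck = build_deck([
    {"name": "case",  "mesh": "case.g",  "material": "aluminum", "contact": "on"},
    {"name": "spine", "mesh": "spine.g", "material": "steel",    "contact": "on"},
])
print(deck)
```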

Figure 7: Automatically generated images of contact enforcement in example drop impact analysis, aiding subassembly-level model verification before integration into system-level model.

 

 


LANL developed two new algorithms to accurately simulate large deformations in mesoscale grain structures.  Accurate simulations of polycrystalline materials under extreme dynamic load are central to predicting the safety of plastic bonded explosives in a wide variety of accident scenarios.

Figure 8: A mesoscale simulation view of deformation occurring in the microstructure of a polycrystalline material being crushed by a relatively low-velocity impact. The simulation uses enhanced algorithmic methods developed at LANL to accurately represent large deformation of anisotropic solid materials in concert with arbitrary Lagrangian Eulerian (ALE) hydrodynamics.

Understanding the deformation behavior of polycrystalline materials (e.g., explosives, metals) under dynamic loading conditions is important to stockpile stewardship in many respects.  Changes in material microstructure can affect a broad range of deformation behavior characteristics.  For plastic bonded explosives (PBXs), modeling these microstructure effects is central to predicting the safety of new materials in a wide variety of accident scenarios. 

Figure 8 illustrates how a polycrystalline material could respond to very large dynamic compression: with severe and heterogeneous plastic deformation.  The localized extreme values of plastic deformation cause localized heating and large temperature fluctuations and could affect the sensitivity of an explosive.  Understanding and quantifying such effects requires mesoscale simulations, which explicitly resolve microstructure features such as individual grains and voids in the material. 

One particular challenge confronting the advancement of mesoscale simulations is the interplay between material physics (constitutive) model complexity and the numerical robustness of the accompanying algorithms.  More complex, accurate models of the underlying physics typically lead to less reliable simulation algorithms.  Extreme deformation of PBXs around collapsing voids and within shear bands is difficult to simulate at the mesoscale owing to severe mesh-entanglement in Lagrangian hydrodynamic schemes and poorly controlled advection errors of critical material model variables in Eulerian (or arbitrary Lagrangian Eulerian, ALE) approaches. 
LANL is developing new physics models and accompanying solution algorithms for improved accuracy and robustness in mesoscale simulations of polycrystalline materials undergoing extremely high deformation.  In this work, robust algorithms for solving the grain-scale material model equations within an Eulerian (or ALE) framework were developed.  These algorithms have small and well-controlled advection errors and are compatible with the needed complex constitutive models.  The two new solution algorithms can accommodate extremely large deformation of individual grains and allow for a more accurate representation of material properties at the mesoscale under extreme conditions.  These algorithms were recently published in the International Journal of Solids and Structures and are now being implemented into ASC-developed multiphysics codes.  (LA-UR-24-20824)
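
The LANL algorithms themselves are given in the reference below.  Purely to illustrate the advection/remap step whose errors they are designed to control, the sketch shows a textbook first-order conservative remap of a cell-averaged material state variable on a 1D periodic mesh.

```python
import numpy as np

def donor_cell_remap(q, u, dx, dt):
    """Textbook first-order conservative remap of a cell-averaged state
    variable q advected with velocity u on a periodic 1D mesh.  Shown only
    to illustrate the advection step whose errors the new LANL algorithms
    control for history-dependent material model variables."""
    q_left = np.roll(q, 1)                        # donor cell to the left of each face
    flux = u * np.where(u >= 0.0, q_left, q)      # upwind flux through each left face
    return q - dt / dx * (np.roll(flux, -1) - flux)

# Example: advect a step profile of a material state variable at CFL = 0.5
n = 200
q = np.where(np.arange(n) < n // 2, 1.0, 0.0)
for _ in range(100):
    q = donor_cell_remap(q, u=1.0, dx=1.0, dt=0.5)
print(q.sum())   # the total is conserved by construction
```

First-order remaps like this one diffuse the state strongly; controlling that error for complex constitutive variables without sacrificing robustness is the crux of the new work.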

Additional reference: M. Zecevic, M. J. Cawkwell, and D. J. Luscher, Eulerian finite element implementation of a dislocation density-based continuum model, International Journal of Solids and Structures 288, 1125920 (2024) https://www.sciencedirect.com/science/article/pii/S0020768323004870
 

 


SNL improved the modeling of PMDI-4, -6, and -10, a family of foam encapsulants used to protect sensitive electronic components.  The models improve encapsulation processes by optimizing vent and gate positions and predicting defects.

PMDI-4, -6, and -10 are a family of foam encapsulants used to protect sensitive electronic components against shock and vibration.  Researchers from SNL Engineering Science and Material Science are developing high-fidelity foam encapsulation models that can predict defects in encapsulation processes such as knit lines and voids.  These models can improve encapsulation processes for nuclear deterrence (ND) components by optimizing vent and gate positions and predicting defects.  They can also be used to mitigate defects by changing processing parameters such as tilt angle, temperature, oven time, and degree of overpacking.  (SAND2024-04029M)

Figure 9: A high-fidelity foam filling model has been developed for PMDI-4, which can capture the non-isothermal foam expansion and curing behavior of the material.

 


LANL scientists have shown that in fluid simulations with ejecta particles the formulation of artificial viscosity can alter quantities of interest by as much as 90%.    

Improved understanding of this effect will help LANL accurately track ejecta in important national security calculations.

Fluid dynamics simulations are central to many national defense applications, and artificial viscosity is an important technique for making these simulations practical and robust.  Some LANL applications require simulating the motion of “ejecta” particles through a fluid.  LANL discovered that simulated particle motion depends on choices made when setting artificial viscosity. 

Artificial viscosity is used to prevent numerical artifacts, namely unphysical oscillations in fluid fields, and to allow for stable calculations of shocked fluids.  Artificial viscosity smooths out discontinuities in pressure, density, and other fields, but does not affect the values of those fields far from the discontinuities.  In reality, physical viscosity smooths out these fields.  However, the length scale of shock discontinuities is often several orders of magnitude (or more) smaller than the length scale of interest for a given problem, which makes it impractical to resolve discontinuities with a fluid’s physical viscosity. 

Because artificial viscosity smooths out discontinuities, its formulation affects the modeled behavior of a particle in the fluid.  The primary interaction forces between a particle and a fluid are the drag force, which depends on the relative velocity between the fluid and the particle, and the buoyant force, which depends on the pressure gradient of the fluid around the particle.  Although drag is often the dominant force in unshocked systems, when a shock passes over a particle the buoyant force overtakes drag.  The simulated buoyant force on a shocked particle depends on the artificial viscosity formulation because artificial viscosity smooths out the pressure gradient. 
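
In a standard point-particle force model (shown as background notation, not the specific LANL formulation), these two forces take the form:

```latex
% Standard point-particle drag and pressure-gradient ("buoyant") forces;
% background form only.  C_D: drag coefficient, rho_f: fluid density,
% A_p: particle cross-section, V_p: particle volume.
\mathbf{F}_{\mathrm{drag}}
  = \tfrac{1}{2}\,C_D\,\rho_f\,A_p\,
    \lVert \mathbf{u}_f - \mathbf{u}_p \rVert\,(\mathbf{u}_f - \mathbf{u}_p),
\qquad
\mathbf{F}_{\mathrm{buoy}} = -\,V_p\,\nabla p .
```

Because the artificial viscosity sets how steeply the pressure varies across the numerically smeared shock, it directly sets the pressure gradient seen by a particle inside the shock, and hence the buoyant force.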

LANL studied this effect in several systems and found that when the density ratio between a particle and the fluid is on the order of 10–100, the artificial viscosity formulation can alter quantities of interest by as much as 90 percent.  As the density ratio increases, the effect becomes less prevalent, and for a density ratio on the order of 1000 or higher, variations due to the artificial viscosity formulation become comparable to other numerical errors.  Understanding the artificial viscosity and particle motion relationship will help LANL accurately track ejecta in important national security calculations.  (LA-UR-24-20824)
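
As a concrete reference point for what "choices made when setting artificial viscosity" means, the classic von Neumann-Richtmyer form (a common baseline, not necessarily the formulation studied here) adds linear and quadratic terms in the velocity jump across a compressing cell; the coefficients c1 and c2 control how many cells the shock, and therefore the pressure gradient, is spread over.

```python
import numpy as np

def artificial_viscosity(rho, u, sound_speed, c1=0.5, c2=2.0 / 3.0):
    """Von Neumann-Richtmyer style artificial viscosity per cell (a common
    baseline form).  rho and sound_speed are cell-centered; u holds node
    velocities (one more entry than cells).  The returned q is added to the
    cell pressure in the momentum and energy updates; c1 (linear) and c2
    (quadratic) are the tunable coefficients discussed above."""
    du = np.diff(u)                  # velocity jump across each cell
    compressing = du < 0.0           # q acts only where the cell is compressing
    return np.where(compressing,
                    rho * (c1 * sound_speed * np.abs(du) + c2 * du**2),
                    0.0)

# Example: a single compressing cell in an otherwise uniform flow
rho = np.ones(5)
cs = np.ones(5)
u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # jump across the middle cell
print(artificial_viscosity(rho, u, sound_speed=cs))
```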

Figure 10: Effects of the choice of artificial viscosity coefficients on (left) simulated pressure profile at a shock and (right) a particle cloud passed over by the shock.

LANL scientists developed a new model for the mixing of unstable interfaces during transition from laminar to turbulent flow by adding important physics.  This improves modeling of thermonuclear burn yield of the capsule for inertial confinement fusion.

Figure 11: Heavy fluid (red) mixing into light fluid (blue). Small differences between the interface shown in the top and bottom figures at early time (left) can strongly affect the width of the mixing region (middle), whereas once the fluid becomes turbulent (right), the behavior becomes universal, and the two cases look indistinguishable.

Mixing at the interfaces between two materials plays an important role in many stockpile stewardship applications.  Interfaces that are initially at rest but undergo rapid acceleration will transition from laminar to turbulent flow, as shown in Figure 11.  The details of this laminar-turbulent transition process determine the rate of mixing at the interface, which in turn drives other important physics.  For example, in the context of ICF, the rate of thermonuclear burn, and therefore the yield of the capsule, is strongly coupled to the level of mixing.

LANL has developed high-quality models for fully turbulent flow, such as the Besnard-Harlow-Rauenzahn (BHR) model family.  These models are successful because turbulence is universal; that is, different cases all look and act similarly.  Transition, in contrast, is much harder to model because its behavior is highly sensitive to small differences in flow (Figure 11).  Unfortunately, while the details of the differences disappear with time, fingerprints of the flow history remain that can affect predictions.  Conventional turbulence models need to be adjusted in a case-specific manner to match transition.

A new approach for transition modeling developed by LANL scientists uses methods from stability theory to extend the BHR turbulence model by adding important physics that are only relevant in transition.  For example, terms involving fluctuations in pressure play a key role in the transition process.  These terms behave completely differently during transition than they do in turbulent flow.  The pressure contribution during transition is non-local, resulting in a transport of energy.  This non-local behavior is not considered in standard turbulence models such as BHR, but new modeling approaches that include these effects provide a much better match to data, as shown in Figure 12.
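
For orientation on where those terms appear (standard second-moment notation, not the BHR-specific closure), the velocity-pressure-gradient correlation in the Reynolds-stress transport equation splits into a redistributive pressure-strain part and a transport-like pressure-diffusion part (density factor omitted):

```latex
% Standard split of the velocity-pressure-gradient correlation
% (background notation, not the BHR-specific closure; density factor omitted):
-\,\overline{u_i'\,\partial_j p'} \;-\; \overline{u_j'\,\partial_i p'}
  \;=\;
  \underbrace{\overline{p'\,(\partial_j u_i' + \partial_i u_j')}}_{\text{pressure-strain (redistribution)}}
  \;-\;
  \underbrace{\partial_k\!\left(\overline{p' u_i'}\,\delta_{jk} + \overline{p' u_j'}\,\delta_{ik}\right)}_{\text{pressure-diffusion (transport)}}.
```

The pressure-diffusion piece is the non-local transport contribution noted above: it moves energy through space rather than merely redistributing it among components, and neglecting it (right panel of Figure 12) is what the new approach corrects.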

Transition models of this kind can improve the prediction of how quickly two materials will become mixed across an unstable interface.  This is the first of a new class of models that will allow LANL simulations with strong transition effects to become more predictive.  Using high-resolution simulations for validation, future iterations of the model will continue to improve our predictive capability.  (LA-UR-24-21782)

Figure 12: Comparison of new models for the critical pressure terms with exact solution. Existing turbulence models for the pressure-strain show qualitatively incorrect behavior when used in transition (left, red line) and neglect pressure-diffusion entirely (right, no red line).

SNL developed reduced order models of long-duration, wear-based failure modes that are 1000x faster than traditional finite element analysis.  The improved performance enables engineers to rely on simulation-based engineering to inform design decisions.

Nonlinear reduced order models have been developed and utilized to predict the long-duration vibration response of critical safety mechanisms when exposed to normal random vibration environments.  Traditional finite element analysis (FEA) using state-of-the-art high-performance computing resources can simulate components with complex physics, such as contact interactions between piece-parts, but it requires thousands of processors for many days to simulate milliseconds of response.  Reduced order modeling approaches preserve the fidelity of these models and simulate the response in a fraction of the time, enabling a new computational capability to assess a component’s performance under long-duration environments and mitigate the risk of wear-based failure modes.  A recent analysis has shown significant computational speedups, with the reduced order models running nearly 1000x faster in wall time than the FEA counterpart, all on a single laptop processor.  As a result, the model was able to simulate 5 seconds of response, which would have been practically impossible with FEA.  Maturation of the nonlinear reduced order modeling technology thus enables engineers to rely on simulation-based engineering to inform design decisions and predict vibration performance in long-sustained mechanical environments.  (SAND2024-00738M)
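
The SNL model details are not given in the source.  The generic linear Galerkin-projection sketch below only shows where the speedup comes from: the full finite element matrices are projected onto a small basis Phi, so time integration works with r-by-r matrices instead of the full system (the nonlinear contact treatment that makes the real problem hard is not represented).

```python
import numpy as np

def project_rom(M, K, Phi):
    """Galerkin projection of an n-DOF linear structure onto an r-column
    basis Phi (r << n); subsequent time integration costs O(r^2) per step
    instead of operating on the full finite element matrices."""
    return Phi.T @ M @ Phi, Phi.T @ K @ Phi

# Toy example: a 1000-DOF chain reduced to 10 generalized coordinates
n, r = 1000, 10
rng = np.random.default_rng(1)
M = np.eye(n)
K = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))           # simple spring-chain stiffness
Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal reduction basis
Mr, Kr = project_rom(M, K, Phi)
print(Mr.shape, Kr.shape)   # (10, 10) (10, 10)
```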

Figure 13: Schematic of a finite element mesh of a ratcheting mechanism with many nonlinear contact interactions between bearings, gears, and pawls.

WELCOME ABOARD...

LLNL ASC program

Jamie Bramwell

Jamie Bramwell has been selected as the Deputy Associate Program Director in the Weapon Simulation and Computing Computational Physics Program (WSC/CP).  In this role, Jamie will work closely with the WSC/CP senior management team to develop and execute a strategy for delivering simulation tools in support of national security applications for NNSA.  Jamie has held many roles during her career at LLNL, including developer for the ALE3D multiphysics simulation code, project lead for the Smith engineering simulation code framework, and chair of the Tri-Lab Sponsor Team (TST) for the University of Colorado Boulder PSAAP III research center.  She is also Director of the Center for Computational Engineering at LLNL.  Jamie received her B.S. in Mechanical Engineering from Northwestern University and her Ph.D. in Computational Science, Engineering, and Mathematics from the University of Texas at Austin.
 

LANL ASC program

Gabe Rockefeller

Gabe Rockefeller is the new LANL acting CSSE Program Manager.  As a research software engineer, he previously helped manage LANL's Common Modeling Framework under ASC V&V and the Lagrangian Applications Project under ASC Integrated Codes (IC), where he championed DevOps for scientific software.  As an astrophysicist, he studied the collapse of massive stars and the environment around the black hole at the center of the Milky Way.  He also serves as a member of the TST for the PSAAP III Center for Exascale-enabled Scramjet Design at the University of Illinois Urbana-Champaign (UIUC).

Daniel O'Malley

Daniel O’Malley is a scientist at LANL, where he is leading an ASC CSSE project on Large Language Models for Code and the Tri-lab Investigation of Large Language Models milestone.  He earned a B.S. in computer science and mathematics, an M.S. in mathematics, and a Ph.D. in applied mathematics from Purdue University.  His work has been recognized with a LANL Early Career Research award, a LANL Director's Postdoctoral Fellowship, the InterPore Award for Young Researchers, and a Purdue University Chappelle Fellowship.  He lives in Los Alamos, NM with his wife and two children and briefly appeared in the film Oppenheimer.

Quincy Wofford

Quincy Wofford is the new lead of the LANL DevOps Project under the ASC IC program.  Quincy received a BS in Interdisciplinary Computing – Physics from the University of Kansas.  Quincy joined LANL in 2017 as a Post-Bacc and then completed an MS in Computer Science at the University of New Mexico as a National Physical Science Consortium Fellow before returning to LANL as a staff scientist in 2020.  Quincy has been part of the cross-cutting DevOps team since the project began in 2021, and he has been involved in development workflows for LAP, Puma, and CMF.  He enjoys beekeeping and competing in Olympic-distance triathlons.

 


NNSA LDRD/SDRD Quarterly Highlights

LLNL SDRD: Revolutionary tool speeds up graphics processing unit (GPU) programs for scientific applications.

Figure 15: LLNL computer scientist Konstantinos Parasyris presents his team’s paper on the Record-Replay technique at SC’23.

An LLNL-led team has developed a method for optimizing application performance on large-scale GPU systems, providing a useful tool for developers running on GPU-based massively parallel and distributed machines.  A recent paper, which features four LLNL co-authors, describes a mechanism called Record-Replay (RR), which speeds up applications on GPUs by recording how a program runs on a GPU and then replaying that recording to test different settings and find the fastest way to run the program.  The paper was a finalist for the Best Paper award at the 2023 International Conference for High Performance Computing, Networking, Storage and Analysis (SC’23).  

“We developed a tool that automatically picks up part of the application, moves it as an independent piece so you can start independently and then optimize it,” said lead author and LLNL computer scientist Konstantinos Parasyris.  “Once it is optimized, you can plug it into the original application.  By doing so you can reduce the execution time of the entire application and do science faster.”  In the paper, the authors describe how RR can be used to improve the performance of OpenMP GPU applications.  Parasyris said the mechanism helps “autotune” large offload applications, thus overcoming a major bottleneck for speed in scientific applications.  (For more information, see the full LLNL article)
 


SNL LDRD: New color-changing materials could safeguard nuclear containers.

Figure 16: SNL postdoctoral researcher Stephanie White holds up a prototype tamper-indicating device.

The International Atomic Energy Agency (IAEA) relies on tamper-indicating devices to indicate if containers of nuclear material have been opened.  Now, SNL has developed a groundbreaking prototype using “bruising” materials that were created through LDRD research.  This innovation doesn’t just detect tampering; the new device boldly displays the evidence and could be used for international nuclear safeguards.  Using commercially available colored water beads, a color-changing chemical reaction, and 3D-printed cases, the team made puck-shaped devices that turn dark brown when they are damaged or when the wire loop threaded through them is pulled out.  The important part of the color-changing solution is a chemical called L-DOPA, which the body uses to make several vital neurotransmitters.  This chemical can react with oxygen to make melanin, the brown pigment that gives human skin, hair, and eyes their color.

The SNL prototype devices are about the size of a stack of seven U.S. half-dollar coins, the same size as the metal cup seals the agency has used since the 1960s.  The IAEA relies on tamper-indicating devices looped around the openings of cabinets containing vital monitoring equipment.  The devices also go on the openings of containers of spent nuclear fuel to make any diversion obvious.  The team is now testing dozens of pucks under numerous conditions that mimic the various environments in which they could be used.  (See the full SNL article)


LANL LDRD: Harnessing light-powered nanoscale electrical currents to propel emerging technologies.

With integrated circuits offering diminishing returns in terms of speed and adaptability, LANL is developing nanometer-scale light-based systems that could deliver breakthroughs for ultrafast microelectronics, room-temperature infrared detection (for example, night vision), and a wide variety of technological applications.  As described in an article published in Nature, the LANL research team designed and fabricated asymmetric, nano-sized gold structures on an atomically thin layer of graphene.  The gold structures are dubbed “nanoantennas” based on the way they capture and focus light waves, forming optical “hot spots” that excite the electrons within the graphene.  

The conceptual demonstration of these optoelectronic metasurfaces has a number of promising applications.  The generated charge current can be naturally utilized as the signal for photodetection, which is particularly important in the long-wavelength infrared region.  The system can serve as a source of terahertz radiation, useful in a range of applications from ultra-high-speed wireless communications to spectroscopic characterization of materials.  The system could also offer new opportunities for controlling nanomagnetism, in which the specialized currents may be designed to produce adaptable, nanoscale magnetic fields.  The new capability may also prove important for ultrafast information processing, including computation and microelectronics.  (For more information, see the LANL article)


 

 

 

SNL LDRD partners with Alabama A&M University (AAMU) to open AI cage.

Figure 17: START HBCU partners, AAMU faculty, and students cut the ribbon on the new AI cage in November.

On the Alabama A&M University campus lies a state-of-the-art facility that allows faculty and students to conduct groundbreaking research and analyze the intelligence of artificial entities.  On November 28th, AAMU and SNL cut the ribbon on the university’s new AI cage to conduct drone and robotic AI research.  This project is the latest in a partnership with the historically Black university.  The Labs began working with talent at the university in 2022 through SNL’s Securing Top Academic Research and Talent at Historically Black Colleges and Universities program, known as START HBCU.  Steve Gianoulakis, senior manager of the autonomy and unmanned systems department, serves as the deputy campus executive on behalf of SNL for AAMU.  “AAMU has built an AI cage as a tool to help with the development of autonomous algorithms that are based on AI for the control and application of small, uncrewed systems.  The cage was based on a similar model at SNL.  The SNL unmanned aircraft systems team provided AAMU the full design package to facilitate their recreating the system,” Gianoulakis said.  “The AI cage will help the development of more effective ways to steer and use drones to perform a variety of tasks, like help develop technologies to defend against drones.  Drones have become the largest threats to sensitive facilities.”  The facility allows students to address real-world challenges and develop adaptable, automated drone systems and vehicles.  (See the full SNL article for more information)
 


Questions? Comments? Contact Us.

NA-114 Office Director: Thuc Hoang, 202-586-7050

  • Integrated Codes: Jim Peltz, 202-586-7564
  • Physics and Engineering Models/LDRD/SDRD: Anthony Lewis, 202-287-6367
  • Verification and Validation/PSAAP/CSGF: David Etim, 202-586-8081
  • Computational Systems and Software Environment: Si Hammond, 202-586-5748
  • Facility Operations and User Support: K. Mike Lang, 301-903-0240

Advanced Simulation and Computing