The Mission of the Advanced Simulation and Computing Program (ASC)
On October 2, 1992, President Bush signed into law the FY1993 Energy and Water Authorization Bill that established a moratorium on U.S. nuclear testing. President Clinton extended the moratorium on July 3, 1993. These decisions ushered in a new era in which the U.S. ensures confidence in the safety, performance, and reliability of its nuclear stockpile.
The U.S. also decided to halt new nuclear weapon production. This decision meant that the nation's stockpile of nuclear weapons would need to be maintained far beyond its original design lifetime.
The United States Department of Energy/National Nuclear Security Administration (NNSA) oversees our nation's Stockpile Stewardship effort. Without underground testing, computer simulations are needed to ensure that the nuclear weapons stockpile remains safe, reliable, and operational. NNSA's supercomputers, such as ASC White at Lawrence Livermore National Laboratory (LLNL), will compute the factors involved in a nuclear detonation, including a weapon's age and design, and eventually allow the NNSA to manage its entire stockpile of nuclear weapons without any real nuclear tests.
To implement these pivotal policy decisions, the Stockpile Stewardship Program was established. The goal of this program is to provide scientists and engineers with the technical capabilities to maintain a credible nuclear deterrent without the two key tools used to do that job over the past 50 years: (1) underground nuclear testing and (2) modernization through development of new weapon systems (see Fig. 1).
One Program/Three Laboratories
The computational problems that the Advanced Simulation and Computing (ASC) program will solve for the science-based Stockpile Stewardship Program come from the activities and responsibilities of the three Defense Programs laboratories: LLNL, Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL). Advanced computational simulations using ASC's 3D modeling capability and the latest visualization technologies are vital components of Stockpile Stewardship. Together, these unprecedented achievements in hardware and software technology make possible a much clearer understanding of the issues involved in supporting the nation's nuclear weapon stockpile and the scientific judgments required to fulfill our Stockpile Stewardship responsibilities.
In addition, the capabilities developed through this terascale computing partnership between government and industry will provide a commercial platform for medical simulations, genetic computing, global climate modeling, aerospace and automotive design, financial models, and other domestic applications.
Historically, U.S. policymakers' confidence in the stockpile was assured through regular nuclear tests. They never had to rely on weapon systems that had exceeded their design lifetimes because older weapons were routinely replaced with new designs. With the cessation of these two practices, the U.S. committed itself to maintaining its existing weapon systems indefinitely, well beyond their intended lifetimes. Implementing this policy with credibility requires new scientific tools. The responsibility to develop those tools resides with the Stockpile Stewardship Program.
To meet this challenge, a new set of aboveground, nonnuclear experimental capabilities was required, along with environmentally benign fabrication capabilities and access for weapon scientists and engineers to archived data from decades of nuclear tests. An unprecedented level of computational capability was needed to serve as the integrating force to make effective use of this collective scientific understanding (see Fig. 2). This reality meant that a new and powerful role for modeling and simulation was required. The Advanced Simulation and Computing Program (formerly known as the Accelerated Strategic Computing Initiative, or ASCI) was established to create this capability.
Test-Based Stockpile Stewardship
The U.S. has designed and maintained a stockpile of nuclear weapons for more than 50 years. Over that time, the U.S. government, through its national laboratories and production facilities, developed approaches to maintaining confidence in the performance, safety, and reliability of nuclear weapons. These approaches, both nuclear and nonnuclear, were generally test-centric. Scientists and engineers would apply the most complete physics understanding possible to designs or questions about the stockpile. Many times this would result in extensive mathematical predictions of a weapon's performance. As computer power increased, these predictions were incorporated into computer programs, which provided a higher level of information to weapon experts.
Because the physics understanding of the weapons was not complete and the existing computing systems were not up to the task, many empirically derived factors were incorporated into the computer codes to improve their fit to the test data. This led to a strong interdependence between computational simulation and testing. In the early days of the weapons program, the national laboratories consistently purchased the highest-performance supercomputers in the world. These supercomputers were needed to improve the designs of nuclear weapons, making the weapons smaller and lighter, while improving safety and reliability (see Fig. 3).
The computational power of these early supercomputer systems was limited; codes, therefore, continued to be one or two dimensional, requiring the use of many empirical factors in predicting weapon behavior. The computing systems also limited the level of geometric and physics detail that could be used. That was sufficient in an era when extensive testing was conducted. The computer codes would predict the test results, and then the test results would be used to make specific calibrations to the codes for each weapon. In this situation, code limitations were mitigated by the use of tests, which could serve as the final integrating factor.
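The calibration practice described above, fitting empirical correction factors so that code predictions match test data, can be sketched in miniature. The sketch below is purely illustrative and is not drawn from any actual weapon code: it fits a single scalar correction factor to paired prediction/measurement data by least squares, and all function names and numbers are invented.

```python
# Illustrative sketch of empirical code calibration (not an actual
# weapon-code method): a scalar correction factor k is chosen so that
# calibrated predictions k*p best match test measurements m in the
# least-squares sense. All names and numbers here are invented.

def fit_calibration_factor(predicted, measured):
    """Return the scalar k minimizing sum((k*p - m)**2) over paired data."""
    numerator = sum(p * m for p, m in zip(predicted, measured))
    denominator = sum(p * p for p in predicted)
    return numerator / denominator

def calibrated(predicted, k):
    """Apply the fitted correction factor to raw code output."""
    return [k * p for p in predicted]

if __name__ == "__main__":
    predicted = [1.0, 2.0, 3.0]   # raw code output (arbitrary units)
    measured = [1.1, 2.2, 3.3]    # corresponding test data
    k = fit_calibration_factor(predicted, measured)
    print(round(k, 3))
```

Once fitted against test data, such a factor travels with the code, which is why each weapon's codes were calibrated against its own tests, and why the loss of testing forces the physics models themselves to carry more of the predictive burden.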
A wide array of aboveground test facilities and laboratory-scale experiments supported this work. The decision to pursue a stockpile stewardship program without nuclear tests has now stimulated a change in the mix of scientific capabilities required to maintain the nuclear deterrent. While ASC is intended to replace test-centric approaches with computation-centric approaches, this does not imply that nonnuclear experimental data will cease to be important to weapon scientists and engineers. Aboveground and laboratory-scale experiments will continue to be conducted in the Campaigns, as permitted by laboratory budgets. In fact, ASC anticipates an increase in these types of tests. Experimental facilities, such as hydrodynamic testing facilities, pulsed-power accelerators, and the National Ignition Facility (NIF), will produce data that will increase in importance as weapon scientists and engineers begin the essential process of validating and verifying the physics models in the codes. Subsequently, these codes will predict the behavior of weapons in the enduring stockpile.
Realizing the Vision
Established in 1995 as a critical element of the Stockpile Stewardship Program, ASC is developing the computational capabilities to allow a smooth transition from nuclear test-based certification to science- and simulation-based certification. ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality, far exceeding what might have been achieved in the absence of a focused initiative. These capabilities will be used directly in support of the Directed Stockpile Work (DSW) Program.
To realize its vision, ASC is creating simulation capabilities using advanced weapon codes and high-performance computing that incorporate more complete scientific models based on experimental results from the Campaigns, past tests, and theory. The expected outcomes will be predictive simulations that enable assessment and certification of the safety, performance, and reliability of nuclear weapon systems. These simulation capabilities will also help scientists understand weapons aging, predict when components will have to be replaced, and evaluate the implications of changes in materials and fabrication processes to the design life of the aging weapon systems (see Fig. 4). This science-based understanding is essential to ensure that changes brought about through aging or remanufacturing will not adversely affect the enduring stockpile.
To meet the needs of stockpile stewardship in the year 2005 and beyond, ASC is solving progressively more difficult problems as we move away from nuclear testing. Applications must achieve higher resolution, higher fidelity, three-dimensional physics, and full-system modeling capabilities to reduce reliance on empirical judgments. This level of simulation requires high-performance computing far beyond our current level of performance. Therefore, ASC collaborates with industry to accelerate development of more powerful computing hardware and invests in creating the necessary software environment. A powerful problem-solving environment must be established to support application development and enable efficient and productive use of the new computing systems. ASC is responsible for delivering these capabilities by 2005.
ASC recognizes that the creation of simulation capabilities needed for performance simulation and virtual prototyping is a significant challenge. This challenge requires the science and technology resources available at the national laboratories, and it will require close cooperation with the computer industry to accelerate their business plans to provide the computational platforms needed to support ASC applications. Universities will also play an important role in developing new computational approaches and increasing scientific understanding needed for this unprecedented level of simulation.
The creation of sophisticated simulation capabilities also supports our need to maintain readiness to resume nuclear testing. The capability provided by ASC significantly enhances our ability to design and understand tests in far greater detail than has been possible in the past. Consequently, we could pursue new testing strategies, thus obtaining more useful information from each test.
ASC Collaborations Place Four Computers in the Top 10
At the Supercomputing 2001 (SC01) conference in Denver, the Advanced Simulation and Computing Program's "ASC White" retained the top place on the Top500 list of the world's most powerful and fastest computers. LLNL ASC Program Leader David Nowak called ASC White "a very large team effort. This is the first truly Tri-lab machine benefiting Stockpile Stewardship research at the Lawrence Livermore, Los Alamos, and Sandia national laboratories."
The Tri-lab ASC sites also host three other supercomputers among the top ten on the Top500 list: Sandia's ASC Red (4), LLNL's Blue-Pacific (5), and Los Alamos' Blue-Mountain (8), as shown in the table below. Pittsburgh Supercomputing Center's 6.5-TF Compaq AlphaServer and NERSC's 4.9-TF IBM SP Power3 system at the Lawrence Berkeley National Laboratory placed second and third, respectively (see Fig. 6).
The "Top500 Supercomputer Sites" list is maintained by Hans Meuer's group at the University of Mannheim in Germany and by researchers at the University of Tennessee. Their goal is "to provide a better basis for statistics on high-performance computers." The Top500 list ranks computers by the best Linpack benchmark performance each has achieved and has been updated twice a year since June 1993. For the complete list of Top500 supercomputers, visit the website at http://www.top500.org/.
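The ranking rule described above amounts to a simple sort on each system's best achieved Linpack number (commonly called Rmax). As a rough sketch, not the actual Top500 tooling: the machine names below echo those in the text, but the Rmax values are invented placeholders, not the published figures.

```python
# Sketch of a Top500-style ranking: sort systems by their best achieved
# Linpack benchmark performance (Rmax). Machine names echo those in the
# text; the Rmax values (teraflop/s) are invented placeholders.

systems = [
    ("Blue-Pacific (LLNL)", 2.1),
    ("ASC White (LLNL)", 7.2),
    ("ASC Red (SNL)", 2.4),
    ("Blue-Mountain (LANL)", 1.6),
]

# Highest Rmax first, exactly as the list orders its entries.
ranked = sorted(systems, key=lambda entry: entry[1], reverse=True)
for rank, (name, rmax) in enumerate(ranked, start=1):
    print(f"{rank:>2}. {name}  {rmax} TF")
```

Because the measure is the best achieved run rather than theoretical peak, a system's position can improve between lists as its operators tune the benchmark.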
The Essential ASC Drivers
To crystallize planning for ASC to support the Stockpile Stewardship Program, several essential drivers have emerged. The primary driver is that, with the cessation of underground nuclear testing in 1992, the experiment-based experience and expertise of the program is declining because of the inevitable retirements of test-experienced weapons experts and the increasing length of time since the last nuclear test. This driver imposes a crucial target period of 2004 to 2010 for having usable ASC supercomputer systems and codes available to support a smooth transition from test-based to simulation-based certification and assessment. A second key driver is the need for full, three-dimensional simulation codes that incorporate the complex physics required to model physical phenomena such as weapon performance, aging, and accident simulation. An additional important driver is the computer system speed and software required for effective use of this three-dimensional, high-fidelity physics and engineering modeling.
Major ASC Objectives
The program has at its core the overarching objective to meet the science and simulation requirements of the Stockpile Stewardship Program. That relationship is described in more detail in "The Role of ASC in Stockpile Stewardship."
By 2005, ASC plans to have working three-dimensional simulation codes running on a 100-teraOPS computer system to facilitate the transition to full operational capability by 2010 (see Fig. 7).
In addition, ASC will demonstrate engineering simulations of the weapon response to the full stockpile-to-target-sequence (STS) environment by 2005. Along with the capacity to store very large data sets, the program must develop the capability to transfer high volumes of data at high speeds and to provide the scientific visualization of the results of ASC calculations to weapon scientists and engineers. Achievement of these milestones will require the integrated success of the program elements described in this document.
As part of our planning, intermediate milestones have been established. ASC has made remarkable progress in its first five years, and all major milestones for this period have been met.
ASC Program Structure
In response to the drivers and to achieve its objectives, ASC is organized around major program elements and the strategies employed within those elements. As the program has matured, the program elements have been restructured to reflect the changes in the challenges we face. The result is the following list of integrated program elements organized into the categories shown in Fig. 8.
ASC is a large, complex, multifaceted, and highly integrated R&D effort. Managing such an effort, planning and implementing interrelated milestones while pursuing new developments in simulation science and technology, is a great challenge. Our approach to that challenge involves the coordinated use of multiple management structures: the staff at DP Headquarters, the Tri-Lab Executive Committee, and technical teams staffed from the three national laboratories.