Understanding turbulence and mixing in compressible flows is of fundamental importance to real-world applications such as chemical combustion and supernova evolution. Running in three dimensions and at very high resolution is required for a simulation to accurately represent the interaction of the various length scales and, consequently, the reactivity of the intermixing species. Toward this end, we have carried out a very high resolution (over 8 billion zones) 3-D simulation of the Richtmyer-Meshkov instability and turbulent mixing on the IBM Sustained Stewardship TeraOp (SST) system, developed under the auspices of the Department of Energy (DOE) Accelerated Strategic Computing Initiative (ASCI) and located at Lawrence Livermore National Laboratory. We have also undertaken an even higher resolution proof-of-principle calculation (over 24 billion zones) on 5832 processors of the IBM system, which executed for over an hour at a sustained rate of 1.05 Tflop/s, as well as a short calculation with a modified algorithm that achieved a sustained rate of 1.18 Tflop/s. The full production scientific simulation, using a further modified algorithm, ran for 27,000 timesteps in slightly over a week of wall time on 3840 processors of the IBM system, sustaining a throughput of roughly 0.6 Tflop/s (32-bit arithmetic). Nearly 300,000 graphics files comprising over three terabytes of data were produced and post-processed.
The capability of running in 3-D at high resolution enabled us to obtain a more accurate and detailed picture of the fluid-flow structure - in particular, to simulate the development of fine-scale structures from the interactions of long- and short-wavelength phenomena, to elucidate differences between two-dimensional and three-dimensional turbulence, to explore a conjecture regarding the transition from unstable flow to fully developed turbulence with increasing Reynolds number, and to ascertain convergence of the computed solution with respect to mesh resolution.
Original language: English (US)
Title of host publication: ACM/IEEE SC 1999 Conference, SC 1999
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 1
ISBN (Electronic): 1581130910, 9781581130911
State: Published - 1999
Event: 1999 ACM/IEEE Conference on Supercomputing, SC 1999 - Portland, United States
Duration: Nov 13 1999 → Nov 19 1999
Name: ACM/IEEE SC 1999 Conference, SC 1999
Other: 1999 ACM/IEEE Conference on Supercomputing, SC 1999
Period: 11/13/99 → 11/19/99
Bibliographical note: Funding Information
This is LLNL Report UCRL-JC-134237. We acknowledge Terry Heidelberg, Charles Athey, David Fox, James Garlick and Robin Goldstone of LLNL, and David Moffatt and Paul Herb of IBM for their assistance in system administration matters pertaining to the scientific simulation reported here. We acknowledge Roch Archambault of IBM for implementing sPPM-related compiler optimizations, Catherine Crawford of IBM for troubleshooting last-minute problems in the proof-of-principle calculations, and Andrew Wack of IBM for testing, debugging and support for the demonstration runs. We acknowledge the ASCI program both for its support of the scientific research and for providing the necessary computational resources. This work was performed under the auspices of the U.S. DOE by Lawrence Livermore National Laboratory under contract No. W-7405-ENG-48. The University of Minnesota team acknowledges support from the DOE ASCI program, through contracts from both LLNL and LANL, from the DOE Office of Science through grants DE-FG02-87ER25035 and DE-FG02-94ER25207, from the NSF PACI program through subcontracts from NCSA, and from the University of Minnesota's Supercomputing Institute.
The sPPM code solves the compressible Euler equations using a simplified implementation of the Piecewise Parabolic Method (PPM), which is a high-order accurate Godunov method developed by Colella and Woodward. In a study that helped drive the development of the PPM scheme, Woodward and Colella compared PPM to several other difference methods for problems involving strong shocks. Use of the Godunov approach in PPM makes this numerical scheme upstream-centered in each Riemann invariant separately. Together with nonlinear solutions of Riemann's shock tube problem at grid cell interfaces when strong waves are present, this upstream centering produces sharp numerical representations of shocks on the computational grid. Monotonicity constraints inspired by the work of van Leer with the MUSCL scheme, together with a numerical diffusion term that adapts to the conditions of the local flow, keep the sharp shock fronts of the PPM scheme from generating unwanted and unphysical noise in the solution. PPM also includes an interpolation scheme that is fourth-order accurate for small timesteps. This interpolation scheme detects contact discontinuities in the flow solution and, when they are present, it employs an alternative interpolation technique that helps to keep the numerical representation of these discontinuities sharp. A library of PPM code modules, suitable for use in parallel computation, is being made available by the Laboratory for Computational Science and Engineering at the University of Minnesota under support from the DOE Office of Science.
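To make the reconstruction step described above concrete, the following is a minimal 1-D sketch (not the sPPM code itself) of the piecewise-parabolic interface interpolation with the monotonicity constraints of the Colella-Woodward PPM scheme: interface values are obtained from a fourth-order-accurate formula over cell averages, then each cell's left and right edge values are limited so the parabolic profile introduces no new extrema. The function name `ppm_reconstruct` and the NumPy setting are illustrative assumptions; the Riemann solver, adaptive diffusion, and contact-discontinuity detection mentioned in the text are omitted.

```python
import numpy as np

def ppm_reconstruct(a):
    """Sketch of PPM edge-value reconstruction for cell averages a[j].

    Returns (aL, aR): limited left and right edge values of the
    parabolic profile in each interior cell (boundary cells are
    left at the cell-average value for simplicity).
    """
    n = len(a)
    aL = a.astype(float).copy()
    aR = a.astype(float).copy()

    # Fourth-order-accurate interface interpolation (interior cells).
    for j in range(2, n - 2):
        aL[j] = (7.0 / 12.0) * (a[j - 1] + a[j]) - (1.0 / 12.0) * (a[j - 2] + a[j + 1])
        aR[j] = (7.0 / 12.0) * (a[j] + a[j + 1]) - (1.0 / 12.0) * (a[j - 1] + a[j + 2])

    # Monotonicity constraints: no new extrema inside a cell.
    for j in range(2, n - 2):
        if (aR[j] - a[j]) * (a[j] - aL[j]) <= 0.0:
            # Cell average is a local extremum: flatten to a constant profile.
            aL[j] = a[j]
            aR[j] = a[j]
        else:
            da = aR[j] - aL[j]
            s = da * (a[j] - 0.5 * (aL[j] + aR[j]))
            if s > da * da / 6.0:
                # Parabola overshoots near the left edge: pull aL back.
                aL[j] = 3.0 * a[j] - 2.0 * aR[j]
            elif s < -da * da / 6.0:
                # Parabola overshoots near the right edge: pull aR back.
                aR[j] = 3.0 * a[j] - 2.0 * aL[j]
    return aL, aR
```

On smooth monotone data the limiter leaves the fourth-order interface values untouched; at a discontinuity it clips the parabola so the reconstructed profile stays within neighboring cell averages, which is what keeps PPM's sharp shock fronts from ringing.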
© 1999 IEEE.