Researchers from different backgrounds met to share information about large‐scale numerical simulations in mantle convection, algorithmic developments, scientific visualization, data processing, and seismic wave propagation at a special session devoted to high‐performance computing in geosciences at the 1994 AGU Spring Meeting. The session sparked discussions among diverse segments of the community and was especially timely in view of the high‐performance computing activities recently sponsored by the National Science Foundation, the Department of Energy, and NASA.

As S. Balachandar pointed out, there are three main issues for high‐performance computing: efficient and accurate computation, data compression and storage, and interpretation and post‐processing of large data sets. The first issue naturally comes to mind when the phrase "high‐performance computing" is heard, but data compression and post‐processing present equally important challenges. Storing the output of a three‐dimensional computation with 256 nodes along each side and four variables at each node—be it for mantle convection, ocean or atmospheric circulation, or seismic wave modeling—for 5000 time steps requires about 2.6 terabytes of storage. While computers and algorithms capable of handling such a grid may exist, storing the results of such a computation is not feasible given the current networking and disk storage infrastructure.
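The storage figure above can be reproduced with a back‐of‐the‐envelope calculation. A minimal sketch in Python, assuming each value is stored as an 8‐byte double‐precision float (an assumption; the word size is not stated in the text):

```python
# Back-of-the-envelope storage estimate for the simulation described above:
# a 256^3 grid, four variables per node, 5000 saved time steps.
nodes = 256 ** 3         # grid points (256 along each side)
variables = 4            # fields stored at each grid point
time_steps = 5000        # snapshots written out
bytes_per_value = 8      # assumed double precision (8 bytes)

total_bytes = nodes * variables * time_steps * bytes_per_value
print(total_bytes)           # → 2684354560000
print(total_bytes / 1e12)    # → 2.68435456, i.e. roughly 2.6-2.7 TB
```

The exact arithmetic gives about 2.68 × 10¹² bytes, consistent with the figure of roughly 2.6 terabytes quoted in the session.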