OPINION
November/December 2001

Dawn of a New Era in Computational Global Seismology

Global seismology is a science driven by observations, and most major advances in our discipline can be linked to advances in the quality and quantity of seismic instrumentation. Normal-mode seismology was born in the early Sixties, when Caltech and UCLA seismologists identified fundamental toroidal and spheroidal modes in spectra obtained from time series recorded after the great 22 May 1960 Chilean earthquake. In the Eighties, the Harvard centroid-moment tensor project was initiated, and global tomography became feasible as a result of high-quality data recorded by the International Deployment of Accelerometers (IDA) and the Global Digital Seismographic Network (GDSN). Today, the Global Seismographic Network (GSN) contains more than 120 stations, the first permanent ocean-bottom seismometer (H2O) has been installed, and in the near future USArray and the Advanced National Seismic System (ANSS) may blanket the contiguous United States with hundreds of broadband instruments. As in the past, these advances in instrumentation must be complemented by advances in theory and computation.

Because the Earth is well approximated by a set of concentric spherical shells, synthetic seismograms based upon normal-mode summation or discrete-wavenumber/reflectivity methods are remarkably accurate. Spherically symmetric Earth models, such as PREM and IASP91, generally predict the arrival times of seismic waves to within seconds, which reflects the fact that lateral variations in wave speed in the mantle are relatively small. In normal-mode seismology, Rayleigh's principle has served us well, and first-order perturbation theory can accurately account for the effects of rotation, ellipticity, and lateral heterogeneity on free-oscillation spectra. In some cases the theory is still well ahead of the observations. For example, the theoretical effects of general anisotropy and nonhydrostatic prestress on normal-mode spectra can be accounted for but have not yet been unraveled from observations. In travel-time tomography, we can rely heavily on Fermat's principle, which has led to spectacular pictures of the large-scale structure of the Earth and detailed images of subduction zones.
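
As a schematic reminder of why these two principles are so powerful (neglecting rotation, self-gravitation, and boundary perturbations, and written in generic notation rather than that of any particular study): Rayleigh's principle makes the first-order shift of a mode's eigenfrequency depend only on the unperturbed eigenfunction s and its strain e_ij, and Fermat's principle makes the first-order travel-time shift an integral along the unperturbed ray path,

\[
\frac{\delta\omega}{\omega}
  = \frac{\int_V \left( \delta c_{ijkl}\, e_{ij} e_{kl} - \omega^2\, \delta\rho\, s_i s_i \right) dV}
         {2\, \omega^2 \int_V \rho\, s_i s_i\, dV},
\qquad
\delta T = -\int_{\mathrm{ray}} \frac{\delta c}{c^2}\, ds,
\]

so the perturbations in density, elastic parameters, and wave speed enter linearly. That linearity is what makes normal-mode and travel-time tomography tractable.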

Because of the success of simple spherically symmetric Earth models, first-order perturbation theory, and ray theory, global seismologists have been reluctant to embrace fully numerical approaches. At this time, however, there is a pressing need for more accurate forward modeling techniques. Lateral variations in shear-wave speed exceed the five percent level in the upper- and lowermost mantle, ultralow velocity zones at the core-mantle boundary have been reported, lateral variations in short-period surface-wave phase speed can be in excess of ten percent, and anisotropy is invoked to explain shear-wave splitting and the directional dependence of oceanic Rayleigh-wave and continental Pn propagation. The crust exhibits strong lateral variations and varies in thickness by an order of magnitude from about seven kilometers underneath the oceans to more than seventy kilometers underneath Tibet. Clearly, linear perturbation theory is inappropriate in this context, and yet all "crustal corrections" rely on the validity of this approach. We are barely able to simulate wave propagation in general anisotropic models, and inversions for anisotropic Earth structure are in their infancy.

When first presented with problems in global seismology, computational fluid dynamicists tend to scoff and snicker. How can solving a linear wave equation pose any serious challenges? Of course, the answer is manifold. One challenge lies in the boundary conditions: one needs to resolve a thin, slow crust of highly variable thickness. Because wave speed generally increases with depth, one needs a mesh or grid that is fine near the surface and becomes coarser with depth. Another challenge is posed by the fact that dispersion and anisotropy are important in global seismology, and thus the mesh or grid should not introduce significant numerical dispersion or anisotropy. Attenuation also plays an important role in global seismology, and modeling it requires the introduction of memory into the system, which is numerically expensive. There are sharp fluid-solid discontinuities at the inner-core and core-mantle boundaries and at the ocean floor that need to be accommodated. Finally, at long periods, the effects of self-gravitation and rotation become relevant.
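
To make the cost of memory concrete, here is a minimal Python sketch of a single standard-linear-solid (Zener) attenuation mechanism; the moduli and relaxation times are illustrative numbers, not a published Earth rheology, and a production code would carry several such memory variables at every grid point:

    import numpy as np

    # One Zener (standard linear solid) mechanism:
    #   sigma(t) = M_R * (eps(t) + r(t))
    #   dr/dt    = -r/tau_sigma + (tau_eps/tau_sigma - 1) * deps/dt
    M_R = 60.0e9                      # relaxed modulus (Pa), illustrative
    tau_sigma, tau_eps = 1.0, 1.1     # relaxation times (s), illustrative
    M_U = M_R * tau_eps / tau_sigma   # unrelaxed (instantaneous) modulus

    dt, nt = 1.0e-3, 10000
    eps0 = 1.0e-6                             # step in strain applied at t = 0
    r = (tau_eps / tau_sigma - 1.0) * eps0    # memory variable just after the step

    stress = np.empty(nt)
    for it in range(nt):
        r += -dt * r / tau_sigma              # after the step, deps/dt = 0
        stress[it] = M_R * (eps0 + r)

    print("instantaneous response, M_U*eps0:", M_U * eps0)
    print("fully relaxed response, M_R*eps0:", M_R * eps0)
    print("computed stress, first and last: ", stress[0], stress[-1])

Marching one such differential equation per mechanism, at every grid point and every time step, is what makes attenuation expensive in both memory and operations.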

The dawn of a new era in computational global seismology is precipitated by two recent developments. First, major advances in computer hardware have facilitated simulations of seismic wave propagation in fully three-dimensional Earth models at all periods of interest to long-period global seismologists. In particular, advances in the PC industry and readily available open-source system software have enabled the construction of powerful PC clusters, so-called Beowulf machines, which can have hundreds of processors, tens of gigabytes of memory, and terabytes of disk space for the cost of a traditional shared-memory server. These machines employ commodity off-the-shelf hardware and typically use the open-source operating system Linux, an open-source implementation of the message-passing standard MPI, and standard 100 Mbit/s Fast Ethernet.
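
For readers who have not used message passing, the toy script below shows the pattern such cluster codes follow; it uses the mpi4py Python bindings to MPI purely for illustration, the names are made up, and in a real simulation each processor would own a slab of the global mesh rather than a single number:

    # Run with, e.g.:  mpirun -np 4 python ring.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()        # which processor am I?
    size = comm.Get_size()        # how many processors in total?

    local_result = float(rank)    # stand-in for a locally computed field

    # Exchange "boundary" values with neighbors arranged in a ring.
    right = (rank + 1) % size
    left = (rank - 1) % size
    from_left = comm.sendrecv(local_result, dest=right, source=left)

    print("rank %d of %d received %s from rank %d" % (rank, size, from_left, left))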

Second, by taking advantage of new numerical algorithms developed in computational fluid dynamics, all the numerical challenges posed by the globe can be met. For example, the spectral-element method, which combines the accuracy of a spectral or pseudospectral technique with the flexibility of a finite-element method, can accurately simulate global wave propagation. The finite-element aspect of this method lends itself very well to parallel computations and makes the implementation of the boundary conditions straightforward, and its spectral aspect makes the calculations sufficiently accurate.
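
As a small illustration of the spectral ingredient (a sketch of the standard textbook construction, not of any particular production code): within each element the wave field is expanded in high-degree Lagrange polynomials defined on Gauss-Lobatto-Legendre (GLL) points, and the same points serve as quadrature nodes, which is well known to make the mass matrix diagonal.

    import numpy as np
    from numpy.polynomial import legendre

    def gll_points_and_weights(n):
        """Gauss-Lobatto-Legendre nodes and weights on [-1, 1] for degree n."""
        Pn = legendre.Legendre.basis(n)
        interior = np.sort(Pn.deriv().roots().real)   # roots of P_n'(x)
        x = np.concatenate(([-1.0], interior, [1.0]))
        w = 2.0 / (n * (n + 1) * Pn(x) ** 2)          # classical GLL weights
        return x, w

    # Degree-4 polynomials per element are a common choice in global simulations.
    x, w = gll_points_and_weights(4)
    print("GLL nodes:  ", np.round(x, 4))
    print("GLL weights:", np.round(w, 4))

    # The Lagrange interpolants l_i on these nodes satisfy l_i(x_j) = delta_ij,
    # so GLL quadrature of the mass-matrix integral rho*l_i*l_j gives a diagonal
    # matrix with entries w_i*rho(x_i) (times the Jacobian): no large linear
    # system has to be solved at each time step, and neighboring elements
    # communicate only through the grid points they share.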

The future for computational science and engineering in general, and computational global seismology in particular, looks very bright. The combined advances in microprocessor technology, memory density, and network speed will continue to double the capacity of parallel computers roughly every two years well into the foreseeable future, in accordance with Moore's law. As a result of these hardware advances, parallel computers reaching 1,000 teraflops (1 petaflop) are expected to become available in the next ten years. Thus, simulations of global seismic wave propagation that currently take tens of hours on state-of-the-art Beowulfs will run in a matter of seconds, which will open the door to source and tomographic inversions based entirely upon numerical methods. Imagine that.

Jeroen Tromp
Pasadena, California
May 2001


