OPINION

September/October 2010

CTBT Verifiability and the SSA Position Statement

doi:10.1785/gssrl.81.5.685

At some point, President Obama may ask the Senate to vote on ratification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Despite steady advances in seismological monitoring methods, CTBT opponents continue to assert—in opinion columns in the Wall Street Journal and elsewhere—that there are significant doubts about “verifiability.” To help policy makers make good use of their science, seismologists could write opinion columns for newspapers in their own cities or states, or send letters to their senators, perhaps making some of the points in the letter that I have written to mine.

Dear Senator,

President Obama has promised to reach out to the Senate to secure the ratification of the Comprehensive Nuclear-Test-Ban Treaty. As a seismologist, I would like to volunteer an informed opinion on the United States’ ability to verify worldwide compliance with a comprehensive test ban. In particular, I want to explain why I endorse the position statement of the Seismological Society of America on our capability to monitor the treaty, which states “no nation could rely upon successfully concealing a program of nuclear testing, even at low yields.”

To begin with, any attempt at a covert nuclear test is virtually sure to be conducted underground. Nuclear explosions in the atmosphere emit characteristic electromagnetic signatures that can be detected easily by sensors on U.S. satellites. Unlike the situation decades ago, these satellites are now so numerous that several are overhead at any point on Earth at all times. Underwater nuclear explosions emit acoustic waves that are hundreds of times “louder” than necessary to be transmitted across the world’s oceans with high fidelity. Questioning the detectability of an underwater nuclear explosion invites laughter or derision among hydroacoustic engineers who have designed systems that track ultra-quiet submarines.

Seismology—the study of vibration waves within the solid earth—has progressed to a state where conducting an undetected underground test also verges on impossible, at least for explosions with a yield of one kiloton or more. I would like to describe to you some of the progress in seismology in recent years, focusing on three distinct and important areas. First, high-capability sensors have been deployed ubiquitously around the world. Second, newly developed methods of data analysis can be calibrated for the geology of any region. Finally, computational capabilities are near to fulfilling long-sought goals in seismology.

The capability of a seismographic system is measured in several ways, including the “band” of frequencies that it can measure, the “sensitivity,” or smallest vibration amplitude to which it responds, and the “range” from the smallest to the largest amplitudes that it can measure. As long as twenty years ago, the most critical and expensive seismographic stations in the world had already achieved the full set of desirable capabilities—systems at these few stations could record the entire frequency band and amplitude range of useful seismic signals. But in recent years, these systems have become so common that they are almost regarded as commodities. Commercial manufacturers sell modestly priced components that can easily be shipped around the world. On arrival, the components can be assembled and installed with now widely known techniques that have been refined during numerous deployments in diverse environments. It is becoming nearly routine even for modestly funded academic researchers to deploy networks of dozens of stations that operate reliably at remote sites, far from technical support and under harsh conditions. Challenges posed by the heights of Tibet and the Andes, by the tropics of Indonesia, Central America, and Africa, and even by Antarctic winters have been met with carefully selected commercial system components and modest adaptations. Today, the only continental regions that are not well monitored by high-capability systems are those where low income and low risk of earthquakes combine to limit investment in seismology, or where political instability and pervasive crime preclude undertaking almost any sort of technologically advanced activity.
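
To make the “range” measure concrete, here is a brief illustrative sketch in Python. The bit depths are typical values I have assumed for illustration, not the specifications of any particular station: an idealized 24-bit digitizer spans roughly 144 dB, wide enough to record both the faintest usable signals and the strongest.

import math

def dynamic_range_db(bits):
    """Dynamic range, in dB, of an idealized digitizer with the given resolution."""
    return 20.0 * math.log10(2 ** bits)

# Assumed-typical values: a modern 24-bit system spans about 144 dB,
# versus about 96 dB for older 16-bit systems.
for bits in (16, 24):
    print(f"{bits}-bit digitizer: ~{dynamic_range_db(bits):.0f} dB dynamic range")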

Taking advantage of these high-quality data and of the ongoing rapid growth in computing power, seismologists have developed new analysis methods that would have seemed impractical, or at least problematic, not so long ago. In the 20th century, analysis of seismic data concentrated on identifying and measuring the waves that traveled most directly from a source—such as an earthquake or an explosion—to a seismometer. The goal was to minimize the effect of waves that had been reflected or scattered en route. Today that approach is often reversed: when scattered waves are included, a single seismometer records waves that were radiated from the source in many directions. Now that scattering is well characterized, more inclusive data processing can be used to evaluate the size and other properties of a source much more reliably than extrapolation from waves radiated in only one direction. Waves scattered near a source by a unique distribution of irregularities in the surrounding rocks create a seismic fingerprint that now makes it possible to identify “repeating earthquakes,” which occur when exactly the same spot on a fault slips repeatedly. Such seismic fingerprints are also used to reliably sort explosions from different mines—and to spot explosions that occur away from known mining sites or deeper than typical mining explosions. Seismologists even extract information from waves scattered near the end of their paths, just below the seismometers—in this case to make 3-D images of the Earth’s interior, much as medical scans image the interior of our own bodies.
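
The “fingerprint” matching behind repeating-event and mine-blast identification is, at heart, waveform cross-correlation. The minimal Python sketch below illustrates the principle; the synthetic data and the 0.9 decision threshold are assumptions I have made for this example, not operational monitoring parameters.

import numpy as np

def peak_normalized_xcorr(template, record):
    """Slide `template` along `record`; return the peak normalized
    correlation coefficient (bounded by 1 in absolute value)."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    best = 0.0
    for i in range(len(record) - n + 1):
        w = record[i:i + n]
        s = w.std()
        if s > 0:
            best = max(best, abs(np.dot(t, (w - w.mean()) / s)))
    return best

# Illustrative use with synthetic waveforms: a noisy repeat of a stored
# template, buried in unrelated noise, should correlate near 1.
rng = np.random.default_rng(0)
template = rng.standard_normal(200)                   # stands in for a recorded fingerprint
repeat = template + 0.2 * rng.standard_normal(200)    # the same source, observed again
record = np.concatenate([rng.standard_normal(500), repeat, rng.standard_normal(500)])
cc = peak_normalized_xcorr(template, record)
print(f"peak correlation {cc:.2f}; flagged as a repeat if above an assumed 0.9 threshold")

Real monitoring pipelines apply the same comparison to continuous data from many stations at once, but the underlying idea is no more complicated than this.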

Images of the Earth’s interior are not only of academic interest; they are required to simulate wave propagation accurately. Mathematical simulation has been an indispensable tool of seismology since the late 19th century, and it has become progressively more effective as computers and Earth images have both improved. The earliest mysteries about seismograms were resolved by assuming radically simplified Earth images to facilitate manipulating wave equations with pencil and paper. The results were good enough for many applications that relied on analysis of slow vibrations with periods of one second or longer. They were used, for example, to understand how differences between “magnitudes” measured on different scales can distinguish a multi-kiloton nuclear explosion from significant earthquakes. But verifying a test ban requires monitoring for smaller explosions, which must be identified among smaller earthquakes. Since explosions and small earthquakes both generate primarily high-frequency waves, mathematical simulation of high frequencies and of the influence of small-scale structure is required to give us full confidence in newer identification methods—which means that the old pencil-and-paper manipulations no longer suffice. Seismologists have been using “numerical” simulations for decades, but always pushing against the limits of computer capabilities. Remarkably, computers are on the verge of meeting the most demanding needs of seismologists working to improve nuclear explosion monitoring. Modern numerical methods have been tested and work well together to simulate the generation of high-frequency waves and their propagation for thousands of miles to seismic stations. Achieving the best resolution at the farthest distances requires modeling the Earth as smaller, more numerous virtual elements, and thus more powerful computers. U.S. national laboratories already have such computers, and in just a few years so will many universities.
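
To see why high frequencies strain computers, consider a deliberately tiny one-dimensional sketch (all parameter values below are illustrative assumptions; real monitoring simulations are three-dimensional and enormously larger). Resolving shorter wavelengths forces a finer grid and a shorter time step, so in three dimensions each doubling of the highest frequency multiplies the computational cost roughly sixteen-fold.

import numpy as np

# Toy 1-D wave simulation with an explicit finite-difference scheme.
c = 3000.0           # assumed wave speed, m/s
f_max = 10.0         # highest frequency to resolve, Hz
ppw = 10             # grid points per shortest wavelength (a common rule of thumb)
L = 60000.0          # assumed model length, m

dx = c / f_max / ppw          # grid spacing set by the shortest wavelength
dt = 0.5 * dx / c             # time step satisfying the CFL stability condition
n = int(L / dx)
print(f"{n} grid points at {f_max} Hz; doubling f_max doubles the points per "
      f"dimension and halves dt, hence ~16x the cost in 3-D")

u_prev = np.zeros(n)
u = np.zeros(n)
u[n // 2] = 1.0               # impulsive "source" at the center of the model
r2 = (c * dt / dx) ** 2
for _ in range(500):          # march the wave equation u_tt = c^2 u_xx forward
    u_next = 2 * u - u_prev
    u_next[1:-1] += r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u, u_prev = u_next, u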

Sustaining our verification ability will depend on continued investment to maintain and operate seismic networks and other monitoring technologies, to educate new generations of seismologists and other scientists, and even to conduct further research to test and improve today’s cutting-edge techniques. In addition, I realize that you will consult with other technical experts about the maintainability of the U.S. stockpile of nuclear weapons without testing and about the military significance of tests much smaller than one kiloton, which might not be clearly identified under some circumstances. With seismology as the most important technology for monitoring underground nuclear tests, however, I am confident that the United States can verify compliance down to a low threshold and with a level of assurance that is already high and will continue to improve.

Raymond J. Willemann
IRIS Consortium
ray [at] iris [dot] edu

 


To send a letter to the editor regarding this opinion or to write your own opinion, you may contact the SRL editor by sending e-mail to <lastiz [at] ucsd [dot] edu>.




Posted: 30 August 2010