ELECTRONIC SEISMOLOGIST
March/April 1999

Steve Malone
E-mail: steve@geophys.washington.edu
Geophysics AK-50
University of Washington
Seattle, WA 98195
Phone: (206) 685-3811
Fax: (206) 543-0489

SEISMIC NETWORK RECORDING AND PROCESSING SYSTEMS I
(Where are we and how did we get here?)

The Electronic Seismologist (ES) has been involved with the specification, modification, construction, adaptation, evaluation, and even use of seismic network recording and processing software since the early days of real-time digital seismology, a topic of (or perhaps only of) great interest to other techno-weenies like the ES; but so be it. It is a timely topic, as a major new software product has come on the scene, increasing interest in reviewing the "state of the art" of such systems. This new major player is the Antelope system, developed by Boulder Real Time Technologies, Inc. and marketed by Kinemetrics, Inc. A review of Antelope was the original intent of this column, but in starting to write it the ES realizes that it is only appropriate to place Antelope in its historical context as well as to contrast it with other current systems. A demonstration version of Antelope has been made available to many seismic network operators in the U.S., who are (one hopes) evaluating it as this column is being written, with an aim to deciding whether to adopt it for their respective networks. The ES is currently trying to run it alongside another software system, called Earthworm, on identical data streams for his own evaluation. While it is too early to report results of this comparison here (stay tuned for the next ES column), some comments on these systems and others are appropriate.

The reader should bear in mind that the Electronic Seismologist is not totally unbiased in this review. If one refers to a previous column (Malone, 1998) one detects a certain prejudice for "open source" software, i.e., the bazaar model of development. Be that as it may, the ES tries to separate his review of the technical aspects of the systems under consideration from their development and marketing style.

DEFINING THE BEAST

What is the essence of a "Seismic Network Recording and Processing System?" It could be as simple as a computer system to record continuous, multichannel waveform data and to allow for the display and timing of any subset of it. On the other hand, it could, in addition, do any or all of the following: merge widely different forms of input data; do automatic event detection, phase picking, and location; provide for easy, fast manual review and updating of automatically detected events or other data; generate and distribute automatic notification of significant events; maintain catalogs, maps, and other information for publication and Web access; archive the trace data and reduced data in several ways (and pour beer and write student dissertations on the side).

Of course, each network operator will have his own list of requirements and priorities for his network computer system. Some operators may feel that the speed and reliability of notification of significant events is the top priority. For some it is the ease and efficiency of manually reviewing selected events. For others it is the ability to attach special real-time processing modules to data streams, and some may place the highest importance on simplicity and a minimum of effort to produce a basic catalog. Longevity and adaptability to new computer hardware and types of incoming data are important to most operators, since it is almost always easier to evolve to a new version of what one is used to rather than to start over with brand new hardware, operating system, and special software.

Most seismic network operators will say the cost of their systems is a major issue, and thus that the software should be free and run on the cheapest hardware there is. Many still don't realize that the major cost of any computer system is the ongoing operational cost, not the hardware/software purchase price. The cost of their and their staff's time configuring, adjusting, adapting, connecting, diagnosing, debugging, and using the system can rapidly exceed the original purchase price of the components if they value their time at more than $1.98/hour. Computer hardware costing several thousand dollars more than an alternative will more than pay for itself if it proves more robust and easier to use, thus saving significant personnel time. Similarly, commercial software which "runs out of the box" can be much cheaper in the long run than do-it-yourself or fly-by-night software, even with a high initial purchase price.

All of the above issues should be considered by seismic network operators when obtaining or changing their recording and processing system. But in the real world these are rarely evaluated in a systematic, nonemotional way. Too often where we have come from dictates where we are going.

A LITTLE HISTORICAL CONTEXT

The origin of the modern digital seismic network recording and processing system can probably be traced to the CEDAR system of the mid-1970's, developed by Carl Johnson when he was a graduate student at Caltech. In Caltech's Seismolab Carl found a little-used Data General Nova computer on which he was allowed to develop a digitizing and event-detector program to save to magnetic tape time slices of waveform data from the Southern California Seismic Net. An offline analysis system on a Data General Eclipse computer was developed to read these tapes and provide graphical analysis capability for timing arrivals. Carl's event-triggering system was a clever use of crude single-channel short-term-average over long-term-average (STA/LTA) ratios, combined with a requirement of coincidence over a subnet of stations within a time window appropriate for seismic velocities and station separation distances (the idea is sketched in code below). Given the speed of computers then and the number of channels to process, great care was needed to eke out all available machine cycles and remain load-independent. Besides writing and improving this software, Carl actually did a little research and graduated in time to start all over again writing similar software for DEC computers in the early 1980's, first on DEC-11/34's and then on VAXes. One version was implemented on the RSX-11 operating system with Alex Bittenbinder's help. Along with some analysis software running on UNIX, Carl's system spread to a number of university-operated regional seismic networks in the 1980's. Another version was implemented on the VMS operating system and became what is now called CUSP, which is still used today at most USGS-operated regional networks.
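For readers who have never coded one, here is a minimal sketch of such a coincidence trigger in Python. This is emphatically not Carl Johnson's code; the function names, window lengths, threshold, and station count are all illustrative, and a production trigger would use a trailing LTA window and much more careful bookkeeping.

import numpy as np

def sta_lta(trace, dt, sta_win=1.0, lta_win=30.0):
    # Ratio of short-term to long-term average signal energy for one channel.
    nsta = int(sta_win / dt)
    nlta = int(lta_win / dt)
    sq = np.asarray(trace, float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(sq)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta     # running short-term mean
    lta = (csum[nlta:] - csum[:-nlta]) / nlta     # running long-term mean
    # Align the two so ratio[j] uses windows ending at the same sample.
    return sta[nlta - nsta:] / np.maximum(lta, 1e-12)

def coincidence_trigger(traces, dt, threshold=3.0, min_stations=4, window=5.0):
    # Declare an event when at least min_stations channels first exceed
    # the STA/LTA threshold within `window` seconds of one another.
    nlta = int(30.0 / dt)                         # matches lta_win above
    trigger_times = []
    for tr in traces:
        ratio = sta_lta(tr, dt)
        hits = np.nonzero(ratio > threshold)[0]
        if hits.size:
            # Convert the ratio index back to a time in seconds.
            trigger_times.append((hits[0] + nlta - 1) * dt)
    trigger_times.sort()
    for i in range(len(trigger_times) - min_stations + 1):
        if trigger_times[i + min_stations - 1] - trigger_times[i] <= window:
            return trigger_times[i]               # event declared
    return None                                   # no coincidence found

The window argument plays the role of the "time window appropriate for seismic velocities and station separation distances" mentioned above; in practice it would be derived from the subnet geometry.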

At about the same time that Carl Johnson was procrastinating on his Ph.D. research by writing software, Sam Stewart and Rex Allen were developing automatic real-time P-wave picking software at the Northern California Seismic Network center in Menlo Park (not to be confused with the single-channel Murdock-Hutt phase picker primarily used on broadband signals for teleseismic recordings). Rather than only detect and trigger the saving of waveform data for later offline analysis, Stewart and Allen wanted to detect and time individual P waves rapidly, accurately, and reliably enough to do rapid, automatic location and magnitude determinations. Training a computer to recognize the sometimes subtle differences between a real seismic P wave and a noise glitch required some clever code and many more machine cycles than an STA/LTA trigger. The result was that a single computer could handle only a small subset of stations in the larger seismic networks. To work on a large network of stations required the use of a fleet of specialized microcomputers programmed in their native assembler code, each working on a few channels. These individual computers were hooked together in a special version of a multiprocessor computer using a master microcomputer to associate the picks from the individual microcomputers into events. Several of these "Rex-Allen-RTP" systems were built and used quite successfully from the early 1980's until very recently.
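The heart of an Allen-style picker is a characteristic function that weights the first difference of the signal as well as its amplitude, so that abrupt onsets stand out even when their amplitude is modest (Allen, 1982). A hedged sketch follows; the fixed weight, threshold, and function names are illustrative inventions for this column, since the published picker adapts its weighting to the noise and follows any candidate pick with a battery of validation tests.

import numpy as np

def allen_cf(y, weight=3.0):
    # Characteristic function: squared amplitude plus a weighted squared
    # first difference, emphasizing sharp onsets over slow noise swells.
    # The fixed weight is illustrative; the real picker adapts it.
    y = np.asarray(y, float)
    dy = np.diff(y, prepend=y[0])
    return y ** 2 + weight * dy ** 2

def rough_pick(y, dt, threshold=5.0, noise_win=10.0):
    # Very rough onset estimate: first sample where the characteristic
    # function exceeds `threshold` times its mean over a leading noise
    # window. The real picker refines the time and vets the pick heavily.
    cf = allen_cf(y)
    n = max(1, int(noise_win / dt))
    noise_level = cf[:n].mean() + 1e-12
    hits = np.nonzero(cf[n:] > threshold * noise_level)[0]
    return (hits[0] + n) * dt if hits.size else None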

As digital computers became more powerful and cheaper during the 1980's they began taking over the recording duties at many seismic network centers. Derivatives of Carl Johnson's early systems were used in a variety of spin-offs. Some were simple extensions and enhancements made by seismologists and computer support people at the seismic recording centers. CUSP, in particular, evolved dramatically over the years but maintained much of its basic look and feel. Code from the early DEC-11 systems has been ported to a variety of other platforms and has changed extensively in character. In the commercial world, Sierra Geophysics, Inc. developed a processing system to run on a Prime computer which had very nice, user-friendly analysis software. Another commercial system, developed by Newt, Inc., specialized in low-maintenance, real-time automatic triggering and recording. Some of these systems are still in use at small networks today, but, because the software companies no longer exist and the hardware is no longer being built, these systems are now orphans.

During the 1980's several different network groups experimented with different ways of seismic recording and processing, putting their own twists on the procedures or emphasizing certain characteristics. One which stood out from the others in some respects was the ANZA network operated by UCSD. This was probably the first regional network to be totally digital. Rather than using a multiplexed digitizer at the central site, each remote station was equipped with its own digitizer, and real-time communication with the central site was by digital telemetry. This technique provided much better fidelity of recorded data than analog telemetry; however, it used more telemetry bandwidth and more expensive components and thus was not appropriate for large networks with limited funding. Nevertheless, this technology advanced and was used in a number of small to medium-sized specialized networks through the later 1980's and 1990's. Today the ANZA network still uses (a more modern version of) digital telemetry and the Antelope software as its processing system.

In the late 1980's the development of seismic data-acquisition software for the IBM personal computer (PC) by Willie Lee of the USGS made digital seismology affordable to many more groups than had previously been possible. The publication of the software by IASPEI on distributable computer media made this simple but powerful system available all over the world. While this system is still being used at many small seismic networks and is particularly well suited to such applications as volcano monitoring in developing countries, as distributed it is not well suited to a large, complex modern network. However, parts of it have been adapted to play important roles in the very complex Taiwanese Rapid Earthquake Information Release System.

With the rapid increase in digital seismic instrumentation and telemetry in the past five-plus years, the complexity of running a seismic network has increased considerably. While seismic instrument manufacturers usually provide free software with their instruments for the basic acquisition and control functions, most do not provide complete processing systems, at least not without significant additional cost. Typically, medium-sized regional seismic network processing software must now not only acquire data from traditional analog sources but also integrate data from several different types of digital instruments using different communications protocols, digitization rates, and formats, and process all of these data together, producing automatic location and magnitude information within seconds to minutes of an earthquake. Of course, follow-on manual analysis for verification, quality control, and research purposes is necessary, as are the usual reports and interpretations for the many clients served by a network.

Partly because of the complexity of data sources to and products from a seismic network and partly because networks are typically operated by research institutions, most networks in the U.S. have developed at least significant parts of their own recording and processing software. There are several "baseline" sets of programs which provide some degree of commonality, but even in cases where a certain package, such as CUSP, forms the bulk of the processing system, there are many local modifications and enhancements needed for the local mission. This "home brew" anarchy is certainly an inefficient use of limited network operating funds but may be somewhat inherent in the multijurisdictional, multipurpose missions of the networks. A review of the recording and processing systems reported by members of the Council of the National Seismic System (http://www.cnss.org/NETS) illustrates the diverse and nonuniform sets of software in use. The following table attempts to summarize these individual network reports (modified by information from other sources) into categories of processing software. No two networks use exactly the same software, but many report using some common parts. Some networks use significant parts of more than one basic system or have duplicate systems running at the same time, thus negating a one-to-one match of network to processing system in this table. While all networks have at least some local modifications or enhancements, those networks which have developed their own processing systems, either from scratch or as a large combination of other parts, are combined under the category of home-grown in this table.

Table of Seismic Network-processing Software
# Networks   System Name        Comments
    14       Home-grown         Ranges from small networks with only a PC-based digitizing unit to the USNSN
     6       IASPEI             Mostly small to medium-sized networks with no automatic notification needs
     6       Earthworm          Only the automated part is included in Earthworm; manual analysis is done using other systems
     6       No Processing      These small networks either don't record digitally or report they do no processing
     5       CUSP               Used by the largest networks (mostly USGS)
     3       Antelope           Several others are currently testing this system
     3       Other Commercial   All are orphans

Even as network processing software entropy seems to be unstoppable, there are several signs of possible change. The struggling efforts of the Earthworm project are generating some common parts of processing systems at some networks. Currently fourteen networks report they are using some part of the Earthworm system, and it plays a major role at six networks. The U.S. National Seismic Network provides an umbrella for networks around the country. The USNSN (using its own home-brew processing system) can rapidly locate significant earthquakes (M > 3) and provide basic information to the whole country, both in support of and as back-up to the regional networks. It can provide some data communication services on its VSAT system between networks and in some cases within a regional network. It has also recently begun providing consulting services to regional networks wanting to use the Earthworm software. This baseline service of the USNSN, while still minimal, may help provide more commonality of processing procedures and software. Finally, there is the new player on the block, the Antelope system.

Antelope is really not that new. It derives fairly directly from the IRIS Joint Seismic Program (JSP) effort, which acted as the data collection and processing center for several regional-style networks operated in central Asia. Danny Harvey and Dan Quinlan, while at the University of Colorado, developed a very nice set of libraries, utilities, and applications called the Datascope Application Package (DSAP) for use in processing the IRIS-operated array data. As IRIS developed a portable broadband array, they needed to expand the capabilities of the JSP-supported software to the real-time world of such an array. Danny and Dan then added to their original package a software utility called ORB (Object Ring Buffer, not to be confused with the more common use of the acronym for Object Request Broker), which provided the real-time connection between field digitizers and the DSAP processing software. In 1997 Danny and Dan resigned from the University of Colorado to work full-time at the new company, Boulder Real Time Technologies, Inc. (BRTT), where they rewrote the ORB software and parts of the DSAP and added many new utilities and features to the combined system, which they called Antelope. Kinemetrics, Inc. now markets this software, which is being used at the new and very modern Saudi Arabia national network. IRIS still supports Dan and Danny through a contract with BRTT.
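The ES has no claim to know the ORB internals, but the underlying ring-buffer idea is simple enough to sketch. The toy Python class below is purely illustrative (its name and methods are invented for this column and bear no relation to the actual ORB API): producers append packets to a bounded buffer, the oldest packets are overwritten rather than blocking acquisition, and each consumer reads forward from its own sequence number, so a slow client cannot stall the real-time system.

from collections import deque

class ToyRingBuffer:
    # Illustrative only; not the BRTT ORB API. A bounded packet store:
    # writers never block, and each reader keeps its own position.

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.packets = deque()            # holds (sequence_number, payload)
        self.next_seq = 0

    def put(self, payload):
        # Producer side: append a packet, silently dropping the oldest
        # packet when the buffer is full.
        if len(self.packets) == self.capacity:
            self.packets.popleft()
        self.packets.append((self.next_seq, payload))
        self.next_seq += 1

    def get_after(self, seq):
        # Consumer side: return all packets newer than `seq`. A reader
        # that falls off the back simply resumes from the oldest packet.
        return [(s, p) for s, p in self.packets if s > seq]

if __name__ == "__main__":
    rb = ToyRingBuffer(capacity=4)
    for i in range(6):
        rb.put("waveform packet %d" % i)
    print(rb.get_after(3))   # a reader last at seq 3 gets packets 4 and 5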

As of the writing of this column (December 1998) the ES has spent some time "playing" with the Antelope demonstration suite made available on CD-ROM by Kinemetrics. The completely functional distribution includes all the new stuff (which needs a license key to run), the old JSP stuff, and many contributed packages written by others, such as converters to go between Antelope and Earthworm. One's first impression is that it is a very nice distribution with great documentation and a nice user interface. The demo installation goes pretty smoothly, though an important environment variable had not been set correctly, and it took a bit of puzzling to figure out what to do. Once the demo is running, watching traces march across a computer screen is a favorite pastime of the Electronic Seismologist, and the Antelope display is lovely. As events occur (real waveform data from the Alaska regional network replayed from files with current time tags) one can see detections occur, event associations form, and earthquakes locate. One can pop up a very nice map facility to see catalog and epicenter plots, and there is an offline waveform analysis interface for repicking or editing phases and then relocating events. Those familiar with the public-domain DSAP will recognize many of the analysis facilities, but the way the whole thing is hooked together with nice graphical control modules is new. After spending only a couple of hours with the demo and rarely needing to use any of the detailed documents, the ES could figure out how to do most of the common routine things, though he did get tangled up in how to reassociate or disassociate arrivals with different versions of an event. The most annoying aspect of Antelope during this playtime was the seemingly very slow response (compared to the analysis software he is used to on the same computer) of the analyst review windows, such as "dbloc2" and friends. Perhaps the ES is drinking too much espresso and needs to slow down.

After playing with the demo for a while the ES wanted to see how it would work with his own data. Since the ES is running a complete Earthworm system and there is an "eworm2orb" module available, it should be easy to run Antelope in parallel. In practice this has not been trivial. Perhaps because of unfamiliarity with Datascope/CSS database tables and their interrelations, but also because of holes in the otherwise fine documentation, it has taken more time than hoped to get things going. At the end of December 1998, not all parts of it are yet operating, but the light at the end of the tunnel can be seen.

No matter how good the documentation and facilities of any network processing system are, learning a new one and configuring its operation from scratch can be time-consuming. More time is needed for the ES to test Antelope fully, evaluate it properly, and compare it to other systems. The net result is that the reader can look forward to more excruciating details of network recording and processing systems in the next Electronic Seismologist column.

REFERENCES

Allen, R. (1982). Automatic phase pickers: Their present use and future prospects, Bull. Seism. Soc. Am. 72, 225-242.

Lee, W. H. K. (1992). PC-based Seismic Systems, Open-File Report, U.S. Geological Survey, January 1, 1992.

Lee, W. H. K. (1994). Realtime Seismic Data Acquisition and Processing, IASPEI software library, Vol. 1 & 2, IASPEI Working Group on Personal Computers.

Malone, S. (1998). Of cathedrals, bazaars, and worms, Seism. Res. Lett. 69, 407-409.

Murdock, J. N. and C. R. Hutt (1983). A New Event Detector Designed for the Seismic Research Observatories, U.S. Geological Survey Open File Report 83-785, 42 pp.

Stewart, S. W. (1977). Real-time detection and location of local seismic events in central California, Bull. Seism. Soc. Am. 67, 433-452.

http://www.brtt.com/ (Web page for Antelope system)

http://gldbrick.cr.usgs.gov/ (Web page for Earthworm system)


SRL encourages guest columnists to contribute to the "Electronic Seismologist." Please contact Steve Malone with your ideas. His e-mail address is steve@geophys.washington.edu.
