Electronic Seismologist
July/August 2000

Steve Malone
E-mail: steve@geophys.washington.edu
Geophysics, Box 351650
University of Washington
Seattle, WA 98195
Phone: (206) 685-3811
Fax: (206) 543-0489


Seismological data collectors face a challenge any time they want or need to acquire new data or use data in new ways. The inevitable march of technology also provides new opportunities to get more and better data. Changing from familiar data collection and processing techniques to new ones is a challenge in every case, but never more so than when the organization operates a permanent seismic network whose products must continue through the transition without break and with some degree of uniformity and quality. The Electronic Seismologist (ES) has been known to participate in such transitions in the past. Having orchestrated, assisted with, or just gone along with about half a dozen such transitions in the past 30+ years, he is aware that it is not an activity for the faint of heart. A recent such transition, watched with interest by the ES, took place at the University of Nevada at Reno. There, not only were the computer hardware and software changed, but two fairly independent networks were combined into one, and the waveform archiving policy was greatly expanded. Having a soft spot in his heart for Nevada earthquakes in general and UNR in particular (the ES helped to install the first analog telemetry station as a student there many moons ago), the ES is happy to host a guest column by David von Seggern, Glenn Biasi, and Ken Smith. These seismologists spearheaded and seem to have survived this transition with flying colors. With two parts of their network operating independently because of incompatible legacy software, and with Y2K fears and a need to improve efficiency, there appeared to be no evolutionary solution; a major change was needed. It would seem that adopting the Antelope system (see the ES columns on this system in SRL Vol. 70, numbers 2 and 4) saved the day. Their report paints a glowing picture of the success of Antelope in their situation and might be of interest to those who will face or have faced similar transitions in other places.


D. H. von Seggern, G. P. Biasi, and K. D. Smith
Nevada Seismological Laboratory
University of Nevada
Reno, Nevada

On 1 January 2000, the Nevada Seismological Laboratory (NSL) officially began operation of an integrated, 145-station, digital and analog network within a single processing software system. This event marked the culmination of a long period of planning, evaluating, testing, and minor software development to combine the data streams from two fairly independent networks into a unified system. Extensive software development was avoided by our adopting the Antelope real-time acquisition and processing system, a product of Boulder Real-Time Technologies, Inc. (BRTT). This report briefly summarizes the history of network operations at NSL that led to this transition, some of the issues we encountered in implementing a new software system, and some of the benefits we have realized as a result.


Primary seismic network operations at the University of Nevada at Reno since 1989 have been based on the CUSP network processing system developed at the USGS in the early 1980s. CUSP at NSL mainly processes vertical-component signals delivered by analog telemetry. In-house picking and location programs are used to refine CUSP automatic locations. Through the end of 1999 we used this system to locate over 37,000 earthquakes and catalog several significant aftershock sequences. The primary VAX computer of the CUSP system logged seven years without a failure. However, pressures to modernize began in the mid-1990s. The ease and power of the Unix operating environment prompted us to look for Unix-based solutions to the challenges of integrating analog and digital instruments.

In February of 1995 modernization at NSL got underway when we began operating a separate network of Reftek digital recorders in southern Nevada for DOE's Yucca Mountain Project. Lacking the means to integrate packetized three-component data within CUSP, we developed an ad hoc processing system around available tools. At the time we considered an Earthworm-based system, but the real-time picking and location programs did not seem designed to accommodate asynchronous packetized data. Earthworm had no capability for interactive event relocation, database interfaces, or data archiving. On the other hand, we found that our Yucca Mountain post-processing, routine analysis, and catalog requirements could be met by importing our data into a Datascope CSS 3.0 database and using the picking and location tools available from the Joint Seismic Program Center. The acquisition system was composed of in-house software that managed the high-speed serial output from a set of Reftek 112 modem racks and utilized basic PASSCAL routines for unpacking compressed data. It was a relatively simple task to adapt to the Datascope environment. Our "data server" provided an interface to clients for two-way communication to Reftek in the field.

We maintained our near-real-time notification needs through CUSP. By the fall of 1999, the digital network had expanded to a total of 38 stations, recorded independently from the 107-station CUSP network (Figure 1). This digital station configuration includes dense local coverage in southern Nevada around Yucca Mountain, with other more recent stations installed from south of Las Vegas to north of Reno. Prior to 1 January of this year, only digital sites around Yucca Mountain were actually being used in routine earthquake hypocenter determinations; this catalog was part of our work-scope for the DOE and was independent of the CUSP catalog. For programmatic reasons, the separation of the networks had some merit, but it was always clear that a joint network would be more efficient. Moreover, we were underutilizing the digital stations outside of the Yucca Mountain area. The main constraints from the beginning were software suited to the needs of integrated network operations and the costs of implementation.

  Figure 1. Seismic stations in the network of the Nevada Seismological Laboratory. The circle around the Nevada Test Site shows roughly the extent of the Yucca Mountain digital monitoring network.


Creeping hardware failures, station losses within the analog network, and uncertainty about CUSP performance through Y2K prompted a decision to integrate operations on 1 January 2000. Our choice of Antelope followed an extensive review of network functional requirements and of NSL resources. A Unix operating system was required because of the power and breadth of available scripting, networking, and file management tools. Our positive experiences with the CSS 3.0 database made it clear that database management must be maintained through acquisition, post-processing, catalog manipulations, and archiving. Implementation of the tables in ASCII format was not a strict requirement but was strongly preferred because it simplifies administration and allows tables to be viewed and accessed with text-based tools. As with all modern networks, we required the system to include general data-viewing facilities and the capability to pick and relocate events, plot origins in meaningful ways, facilitate archiving, and support a range of notification capabilities. We required that data formatting and exchange with other networks be simple and straightforward. Options were evaluated in terms of what would be required to achieve such a system, recognizing that with any system we would need to add some functionality from in-house programming resources. Considering the costs for suitable software programming talent to meet these requirements, Antelope was seen as the least expensive, least risky, most comprehensive, and most schedule-responsive choice among available alternatives.
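The appeal of ASCII tables is easy to demonstrate: flat text rows can be read and filtered with ordinary scripting tools, no database server required. The Python sketch below parses a simplified, whitespace-separated origin-style table; the column layout and values are invented for illustration, and a real CSS 3.0 origin table uses a fixed-width schema with many more fields.

```python
# Illustrative sketch only: a simplified, whitespace-separated "origin" table.
# Real CSS 3.0 tables are fixed-width with many more columns; this shows the
# general idea of working on the tables with plain text tools.

sample_origin = """\
39.5234 -118.7311  8.42  946684800.000  1001  2.1
38.9120 -119.0045  5.10  946688400.000  1002  1.4
"""

def parse_origin_rows(text):
    """Parse whitespace-separated rows into dictionaries of origin fields."""
    rows = []
    for line in text.splitlines():
        lat, lon, depth, time, orid, ml = line.split()
        rows.append({
            "lat": float(lat), "lon": float(lon), "depth": float(depth),
            "time": float(time), "orid": int(orid), "ml": float(ml),
        })
    return rows

rows = parse_origin_rows(sample_origin)
# Text-based filtering, e.g. origin IDs of all events with ML >= 2.0:
larger = [r["orid"] for r in rows if r["ml"] >= 2.0]
print(larger)  # [1001]
```

The same kind of filtering is equally natural with grep, awk, or Perl one-liners, which is precisely why plain-text tables simplify day-to-day administration.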

Transition to the Integrated Network

With the transition to Antelope on 1 January 2000, the resulting network became greater than the sum of its parts, for both better and worse. On the worse side, where CUSP waveforms were limited to relatively short segments around triggered events, the Antelope system absorbed the entire, unedited, continuous stream of data. We decided to archive the entire stream, although the possibility exists to apply event cuts and discard the nonevent data. With all the stations, the raw volume comes to over 5 Gb/day (1.3 Gb/day compressed), enough to tax even fast disks and large memory caches. The volume of data is such that it took some time to develop strategies for efficient data review and archiving. Routinely looking at all 300+ channels (including 6-channel Reftek data and channels from adjacent networks) is out of the question, and viewing even subsets of the data, as indicated by automatic locations, can be slow. Our inexperience with operational details led to a serious backlog in analysis. By March, even a 50-day disk FIFO of complete data was barely deep enough as the routine picking and locating activities fell far behind. We were also hurt by some critical "operator" errors in the first month or two.
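The storage pressure is easy to quantify from the figures quoted above (roughly 5 Gb/day raw, 1.3 Gb/day compressed, and a 50-day disk FIFO). A back-of-the-envelope sketch in Python:

```python
# Back-of-the-envelope disk budget for the continuous data stream,
# using only the figures quoted in the text.

RAW_GB_PER_DAY = 5.0          # raw continuous stream, all channels
COMPRESSED_GB_PER_DAY = 1.3   # after compression
FIFO_DAYS = 50                # on-disk FIFO depth mentioned above

raw_fifo_gb = RAW_GB_PER_DAY * FIFO_DAYS
compressed_fifo_gb = COMPRESSED_GB_PER_DAY * FIFO_DAYS
annual_archive_gb = round(COMPRESSED_GB_PER_DAY * 365, 1)

print(raw_fifo_gb, compressed_fifo_gb)  # 250.0 65.0
print(annual_archive_gb)                # 474.5
```

A 50-day FIFO of compressed data thus needs on the order of 65 Gb, and archiving the full compressed stream approaches half a terabyte per year, which puts the 108 Gb disk purchase described later in perspective.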

In retrospect, our idea of simply scaling up from our Yucca Mountain network experience to the entire Nevada network was overly optimistic. The heterogeneity of the station coverage and the large number of stations required numerous trial-and-error adaptations. For example, the Antelope automatic location algorithm requires a single, uniformly spaced grid over the entire network, but our network is anything but uniform. We were able to solve this problem by thinning the station coverage in densely covered places like Long Valley and Yucca Mountain and by operating a separate Antelope system for notification. This may seem inefficient, but Antelope handles these types of solutions well.

After the dust settled, we wanted to check the quality of our earthquake bulletin. CUSP processing was maintained into the new year as a back-up and to facilitate comparison with Antelope. Seismicity plots in Figure 2 show events for the month of January after analyst review and relocation. In collating the two catalogs, we found that 76 of 81 CUSP events matched Antelope events. Of the five not produced via Antelope, two were apparently genuine misses for the Antelope automatic event detector and three occurred when the Antelope system was down. On the other hand, 544 of 620 Antelope earthquakes did not match any CUSP earthquake. Because 132 of these were within the circular region around Yucca Mountain on Figure 2, they would have been reported by our pre-2000 operation of the dense network there. The figures show that the additional Antelope earthquakes are distributed over most of the network but are especially notable in the central and southern part of Nevada. The additional earthquakes predictably include many below the CUSP detection threshold but also include some larger events. For the earthquakes that matched, we found excellent agreement between CUSP and Antelope hypocenters, as relocated by analysts with independent tools.

  Figure 2. Epicenters of earthquakes reported by (A) CUSP and (B) Antelope processing in January 2000.


The fact that our previous processing schemes would have recovered 213 earthquakes (81 CUSP + 132 YM) in the same period that Antelope processing netted 620 implies a threefold increase in analysis load. The increase is due to the greater effective station density (i.e., digital stations add to the analog coverage, and vice versa), to the more generalized association algorithm in the Antelope software, and to the intrinsically lower noise of the digital stations. For example, Figure 3 compares the recordings of collocated digital and analog stations near Yucca Mountain for a local earthquake. The S/N ratio of the digital record is a factor of ~5 better than that of the analog site; this ratio corresponds to a decrease in the estimated detection threshold of about 0.7 magnitude units at this site. Thus, a distributed digital network reduces the detection threshold more than might be assumed from just an increase in station density. To test whether the increased network sensitivity is merely an artifact of the greater density of digital stations in southern Nevada, we also compared the analysts' catalogs north of 38°N, where there are fewer digital stations. CUSP processing produced 37 events versus 107 for Antelope. This still implies a tripling of the analysis effort. We were not prepared for the increase in workload attending this increase in network performance.
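The quoted 0.7-magnitude figure follows directly from the logarithmic definition of amplitude-based magnitudes: a factor-of-5 improvement in S/N lowers the detection threshold by log10(5) magnitude units. A one-line check in Python:

```python
import math

# Magnitude scales are logarithmic in amplitude, so a factor-of-5 S/N gain
# translates to a detection-threshold change of log10(5) magnitude units.
sn_gain = 5.0
delta_magnitude = math.log10(sn_gain)
print(round(delta_magnitude, 2))  # 0.7
```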

  Figure 3. Comparison of vertical-component traces from collocated analog (WCT) and digital (WLD) stations. Sensors are of comparable quality and response.



What are some of the key gains that we have made by making the transition to integrated operations? Clearly the integrated network is providing a more complete view of seismicity in western Nevada. It is reasonable to expect improved estimates of fault activity, perhaps identification of new faults, and a better sense for interseismic processes. We now have real-time displays for data viewing, quality control, and communications confirmation. The system is approachable by every person in the lab, from the technicians to the director. Auxiliary displays for public outreach or "helicorder" applications are readily configured. Locations are improved over CUSP due to the greater effective station density and the availability of horizontal-component S-wave picks. Because the digital stations stay on scale, local magnitude Ml can be estimated where only duration magnitude Md was previously available. Both analog and digital data now reside in a single CSS 3.0 database, and the data are accessible by a variety of standard Unix tools. The database is readily accessed and manipulated by powerful application toolboxes; we have extensively used those in the Perl, C, and Unix shell environments. Diagnostic and development tools include programs to clone data streams and to test computers, and the ability to replay recorded data through an independent processing stream to test parameters and program settings. With the CSS 3.0 database and Antelope tools it is possible to have a "dynamic network" with temporary additions or modifications at any time. Thus, portable sites set out for an aftershock sequence can be incorporated seamlessly, after the fact, into the processing of the sequence. We have already done this in two cases: (1) the Frenchman Lake earthquake on the Nevada Test Site (01/27/1999) and (2) the Scotty's Junction earthquake near Beatty, Nevada (08/01/1999).

Importing data from neighboring networks into the new system deserves special mention. This ability solved a nagging problem with signals associated with the Hector Mine, California, aftershocks. Mispicking of emergent regional phases caused numerous spurious auto-locations in southern Nevada near Las Vegas and adjoining California. By importing a few channels from the Anza array in southern California, Hector Mine events were effectively removed from southern Nevada. Because Anza also runs Antelope, it was necessary only to add the station references to the database and include one additional process in a start-up file. Being able to import data from eastern and northern California stations would have similar beneficial effects. Two-way sharing of data with the NSN has been implemented with USGS help using a local Earthworm process that interfaces easily with the Antelope system. We expect NSN, northern California, and southern California stations to help constrain location and magnitude estimates of regional events on all sides of the NSL network, and we expect our stations to help neighboring arrays similarly. We find it easy to interface with Earthworm, and although aspects of the Antelope system are proprietary, data exchange capabilities are open and implementation is fairly straightforward.


The cost of the transition? Because it was actually carried out over a long period, culminating on 1 January 2000, and because it involved so many of the lab's personnel, the cost is not easy to estimate. The intense preparations in late 1999 and the concentrated problem-solving phase of early 2000 may have taken roughly five man-months. Our experiences should reduce the learning curve for other networks, however. This implementation occurred without the benefit of a single full-time programmer or system administrator. The transition occurred in the context of our other job responsibilities, with no special funding for this effort. We profited greatly from a positive working relationship with the University of Alaska at Fairbanks, which also operates an Antelope system.

For hardware we have substantially dedicated a 360 MHz Sun Ultra 60 with two CPUs to the real-time acquisition system. A 20-station subset of the NSL network is running for a special project on a six-year-old Sparc 20, so a high-end data collection machine is not a strict prerequisite. Three Sun Ultra 10s were purchased for analysis work but are rarely taxed. A total of 108 Gb of disk space was also purchased for Antelope operations. Total cost of these items was roughly $38,000. Not included in this estimate is substantial support provided by the USGS in the form of an NT digitizing computer, a GPS-IRIG timing system, and funds for the Antelope license. These latter items add another $65,000.

Although some additional kinks need to be worked out, integrated analog-digital operations have brought tremendous advantages. Statewide detection thresholds are clearly lower, fault activity is much better delineated, and we have much more flexible notification and reporting tools. Data exchanges are already improving regional event locations and providing a basis for cooperation with all neighboring networks. We are now able to make waveform contributions to data centers at NSN, the Berkeley Data Center, and the IRIS DMC. Internally, the current, easy accessibility of data, both waveform and parametric, should enable an expanded usage of our own data for network and research purposes. We now have a state-of-the-art, Unix-based training ground for students interested in network seismology. The Unix base also allows us to perform more easily our outreach functions to the seismological community and to the public. Our experiences during this transition taught us a number of lessons and have provided us with new tools and insight which we are happy to share with other networks anticipating a transition.


Kent Lindquist of the University of Alaska has shared crucial software modules with us, chiefly the interfaces with Earthworm. Roger Hansen, also of the University of Alaska, has provided important assistance in our transition efforts. Diane dePolo, chief analyst of NSL, has shouldered the increased workload and contributed much practical expertise in operations. Wally Nicks, also of NSL, has provided superlative engineering design and implementation during the entire transition. Alex Bittenbinder and Barbara Bogaert, both of USGS, have helped with the Earthworm digitizer installation and with our NSN interface.

SRL encourages guest columnists to contribute to the "Electronic Seismologist." Please contact Steve Malone with your ideas. His e-mail address is steve@geophys.washington.edu.

Posted: 4 August 2000