
ELECTRONIC SEISMOLOGIST

March/April 2005

Thomas J. Owens
E-mail: owens@sc.edu
Department of Geological Sciences
University of South Carolina
Columbia, SC 29208
Phone: +1-803-777-4530
Fax: +1-803-777-0906

Decisions, Decisions, Decisions

Anyone who has followed the information technology world for the last decade is either exhausted or has learned to take the continuing hype of the "next big thing" with a healthy grain of salt. New technologies abound; even more common are their proponents, who declare with great confidence that all of our IT problems can be solved with this new technology. Guess what ... these folks, sincere as they may be, are wrong! Each and every "next big thing" has big limitations. New technologies most often arise from the need to solve a specific problem. As a result, they solve that problem well. Then they are often extended and applied to other problems, and the compromising begins. So, faced with a plethora of possible technologies and our own little problem to solve, what can we do? In most cases, the ES's eyes glaze over and he yearns for the days when FORTRAN and sh-scripts could do everything that needed to be done. Actually, it's possible that they still could ... but, since it's clear that the IT world is unlikely to embrace this retro solution, maybe we should explore how others handle decision making and technology selection in the IT world.

In an ES column a while back, the SCEC team summarized their plans for a major IT research project focused on earthquake science. They're back this month with an excellent overview of their thought and evaluation process in selecting technologies for one aspect of their project, seismic hazard analysis. Key elements of their approach include (1) Know your own needs and goals; (2) Keep an open mind on possible solutions; and (3) Evaluate, Evaluate, Evaluate! Matching a technology to your needs is difficult. The strengths of most technologies are normally well known; the limitations are discovered only with careful evaluation. If you are doing something that is a little out of the ordinary, you need to be prepared to invest in prototype development that exercises multiple technologies. In seismology, moving around big chunks of data is often the "gotcha" that reveals limitations in technologies, but firewalls and legacy codes can also be challenging. As always, the ES encourages those of you developing specific applications to share your experiences in selecting a technology ... it could save your colleagues a lot of headaches!

Seismic Hazard Analysis Using Distributed Computing in the SCEC Community Modeling Environment

Philip Maechling1, Vipin Gupta1, Nitin Gupta1, Edward H. Field2, David Okaya1, and Thomas H. Jordan1

1. Southern California Earthquake Center
University of Southern California
Los Angeles, CA 90089-0742
Telephone: +1-213-740-5843
Fax: +1-213-740-0011
E-mail: maechlin@usc.edu, vgupta@usc.edu, niting@usc.edu, okaya@usc.edu, tjordan@usc.edu

2. U.S. Geological Survey
525 South Wilson Avenue
Pasadena, CA 91106-0001
Telephone: +1-626-583-7814
E-mail: field@usgs.gov
http://www.scec.org/cme/

Introduction

The Southern California Earthquake Center (SCEC) exemplifies the type of large, geographically distributed collaboration becoming more common in scientific research. The SCEC community comprises more than 400 scientists from more than 50 organizations throughout the country and abroad who work together on the problems of earthquake science. To support this distributed community and improve its ability to transform basic research into practical applications, SCEC has initiated a Community Modeling Environment (CME) under the auspices of the NSF Information Technology Research (ITR) program (Jordan et al., 2003). The goal of the SCEC/CME is to develop a collaboratory that can aid practitioners in selecting, configuring, and executing the complex computational pathways needed for physics-based seismic hazard analysis.

In conjunction with the CME project, SCEC and the U.S. Geological Survey (USGS) have jointly developed OpenSHA (Field et al., 2003), an open-source, object-oriented software framework for probabilistic seismic hazard analysis (PSHA). OpenSHA provides users with a number of well accepted and widely used PSHA software components, including attenuation relationships and earthquake rupture forecasts (ERF's). Earlier this year, we began integrating into the OpenSHA system one of the most sophisticated ERF's ever developed, the WGCEP-2002 model for earthquake probabilities in the San Francisco Bay area (WGCEP, 2003). This task presented us with some software development challenges that called for distributed computing. We investigated and evaluated a variety of technologies, and the lessons we learned in obtaining a useful and robust solution may be of interest to our colleagues.

Three factors motivated an implementation based on distributed computing: First, the WGCEP-2002 ERF is written in FORTRAN, and we wanted to access it from our existing Java-based OpenSHA applications. Second, this ERF requires substantial computation time and memory. Third, and most significantly, the OpenSHA framework was explicitly designed for scientific collaboration. The OpenSHA collaboration model envisions scientists developing their own attenuation relationships and earthquake rupture forecasts, which they will deploy and maintain on their own systems. If OpenSHA components can be implemented in a distributed manner, then collaborators can retain control over their own components while still making them accessible to others. The WGCEP-2002 ERF is an ideal pilot project with which we can validate this computing and collaboration model.
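The wrapping step implied above is not described in detail in this article. As a rough illustration only, one common way to expose compiled FORTRAN to Java is through the Java Native Interface (JNI), sketched below; whether JNI or another bridging mechanism was used here is not stated, and the class, library, and method names are hypothetical.

    // Hypothetical JNI wrapper sketch: the FORTRAN forecast code is assumed to be
    // compiled, together with a thin C glue layer, into a native library named "wg02".
    // Names and signatures are illustrative only, not the actual SCEC code.
    public class Wg02ForecastWrapper {

        static {
            // Loads libwg02.so (or wg02.dll) from java.library.path
            System.loadLibrary("wg02");
        }

        // Native methods implemented in the C glue code that calls the FORTRAN routines
        public native void initForecast(String parameterFile);

        public native double[] getRuptureRates();
    }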

We began by identifying three basic requirements for our WGCEP-2002 ERF implementation: (1) Our Java-based OpenSHA client must be able to call the FORTRAN-based WGCEP-2002 ERF without rewriting either the OpenSHA client or the ERF; (2) when an instance of the ERF is created, it must be created on a remote server, not on the local computer; and (3) the ERF, once created, must be network accessible and must support distribution of large (greater than 10 MB) objects between the client and server.

Starting with these requirements, we considered four different distributed computing approaches: Java servlets, Web Services, CORBA, and Java RMI. The selection of these technologies was based on our existing computing environment and expertise. We recognize that this list excluded some potentially viable techniques, such as Remote Procedure Calls (RPC) and the Distributed Component Object Model (DCOM).

We developed several versions of the WGCEP-2002 ERF as we worked with these different distributed-computing technologies. A summary of key technical characteristics of these technologies and some subjective evaluations of them are shown in Table 1. Based on this work, we selected a distributed-object technology called Java Remote Method Invocation (RMI) as the most appropriate technology. Using Java RMI, we successfully implemented a distributed version of the WGCEP-2002 ERF within the OpenSHA framework. For more information about the capabilities of the OpenSHA software, and for a description of the characteristics of the WGCEP-2002 ERF itself, please see the companion article, "Hazard Calculations for the WGCEP-2002 Earthquake Forecast Using OpenSHA and Distributed Object Technologies" (Field et al., 2005).

TABLE 1

Comparison of Key Characteristics of Distributed Computing Technologies

Characteristic | Java Servlets | Web Services | CORBA | Java RMI
Supported Programming Languages | Java and "wrapped" Java | Many computing languages supported | Many computing languages supported | Java and "wrapped" Java
Client and Server in Different Programming Languages | No | Yes | Yes | No
Required Server-Side Software | Servlet Container | Web Service Libraries and Container | Object Request Broker (ORB) | Java Virtual Machine, RMI Registry, and RMI Service
Distributed Object Support | No | No | Yes | Yes
Network Ports Used | Standard Web Server Port | Standard Web Server Port | Nonstandard Network Port | Nonstandard Network Port
Ease of Implementation | Easy | Medium | Hard | Medium
Expected Stability and Reliability | Medium | Low | High | High

Java Servlets

We started our work by evaluating a Java servlet-based approach to the ERF implementation. A servlet is, in general terms, a program that extends a Web server's functionality (http://java.sun.com/products/servlet/). Servlets are frequently used on dynamic Web sites as alternatives to Common Gateway Interface (CGI) scripts written in languages such as Perl and PHP. In order to host servlets, we installed a servlet container (e.g., Apache Tomcat, http://jakarta.apache.org/tomcat/) on our server. We then modified our wrapped WGCEP-2002 ERF to work as a Java servlet and deployed it in the servlet container. Once our ERF was deployed in a container, OpenSHA clients could execute the servlet using HTTP requests.
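As a minimal sketch of this pattern (not the actual SCEC servlet; the class, parameter, and helper names are hypothetical), a servlet-hosted forecast might look like the following: the client sends forecast parameters in an HTTP request, the servlet runs the forecast on the server, and the result is streamed back in the HTTP response.

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal servlet sketch; all names are hypothetical.
    public class ErfServlet extends HttpServlet {

        public void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Read a forecast parameter from the HTTP request (hypothetical parameter name)
            String timeSpan = request.getParameter("timeSpan");

            // Compute the forecast; a placeholder stands in for the wrapped WGCEP-2002 code
            double[] ruptureRates = computeRuptureRates(timeSpan);

            // Stream the result back to the client over the HTTP response
            response.setContentType("application/octet-stream");
            ObjectOutputStream out = new ObjectOutputStream(response.getOutputStream());
            out.writeObject(ruptureRates);
            out.flush();
        }

        private double[] computeRuptureRates(String timeSpan) {
            return new double[0]; // placeholder for the wrapped FORTRAN computation
        }
    }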

A few aspects of our servlet solution had problems, however. First, and most fundamentally, client programs must invoke a servlet through a Web-oriented request and response interface. A request- and response-based interface, which is typically stateless, is substantially different from an object-oriented interface, which is typically stateful. Our OpenSHA clients needed substantial modifications before they could interact with our servlet-based ERF.

We also encountered problems transferring multimegabyte data sets between our ERF and our OpenSHA client. Both servlet containers and Web servers establish timeouts for open HTTP connections. When our HTTP-based ERF transfers reached the connection timeout, typically 5 minutes, the container shut down the connection and our client did not receive the full ERF return data.

Although these problems were surmountable, they suggested we might find a more appropriate solution, so we continued our investigation.

Web Services

Next we worked with Web Services (http://www.w3c.org/2002/ws/). A Web Service is, in simple terms, a server-based program that can be called from client programs anywhere on the Web. The key feature of Web Services is interoperability. Web Services and client programs can be written in different programming languages, run on different types of computers, and still work together.

To understand Web Services, it helps to reflect on the more familiar world of Web browsers and Web sites, which can interoperate with each other regardless of the underlying hardware and software by using a standard communication protocol (HTTP) and a standard data exchange format (HTML). Web Services achieve programmatic interoperability using the same approach but with different standards.

The Web Service standard communication protocol is Simple Object Access Protocol (SOAP), which is analogous to HTTP. The Web Service standard data transmission format is XML, which is analogous to HTML. Since all Web Service clients and servers are required to communicate using XML and SOAP, the platforms and languages used by each side do not matter. Web Service interfaces are described using a programming-language-neutral XML format called Web Service Definition Language (WSDL). A WSDL description of a Web Service describes, in a machine-understandable way, the interface to the Service, the data types in use, and the location (URL) of the Service.

Because program-to-program interactions can be significantly more complex than Web browser to Web server interactions, Web Services have more layers than just XML, SOAP, and WSDL. Web Service specifications also include Web Service registration and discovery services (UDDI), transaction support (WS-Transaction), and many other related standards. Each of these standards supports the goal of distributed computing with platform and language interoperability.

We spent some time working with Web Services. We deployed several simple programs as Web Services and wrote client programs to call these Services. To offer a Web Service, we needed Web Service software libraries and a container, such as the Tomcat servlet container, installed on our server. Web Service software libraries are tools that provide several helpful capabilities: for example, they help create WSDL descriptions of code, help deploy Web Services on a server (e.g., Apache Axis, http://ws.apache.org/axis/), and provide XML processing routines.

We found, in many cases, that our existing programs with simple input and output data types could be deployed as Web Services without significant modifications. Although Web Service interfaces are XML-based, typically we did not need to rewrite our programs to input and output XML. Web Service software libraries provide routines that convert data types into, and out of, XML automatically.

One aspect of Web Services we really liked was that once a Service is deployed, it is very easy to write different types of clients to call it. We found that we could access a Web Service quickly from within an existing application program. We could also write stand-alone Web Service clients (for those who prefer to work on the command line) and browser-based clients, all without changing the Web Service.
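For example, a stand-alone client of the kind mentioned above can be written in a few lines with the Apache Axis 1.x dynamic invocation interface. This sketch is illustrative only; the endpoint URL, namespace, and operation name are hypothetical and stand in for a deployed service.

    import java.net.URL;
    import javax.xml.namespace.QName;
    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;

    // Sketch of a stand-alone Web Service client using the Apache Axis 1.x dynamic
    // invocation interface. Endpoint, namespace, and operation name are hypothetical.
    public class SimpleWsClient {
        public static void main(String[] args) throws Exception {
            String endpoint = "http://localhost:8080/axis/services/HazardService"; // assumed
            Service service = new Service();
            Call call = (Call) service.createCall();
            call.setTargetEndpointAddress(new URL(endpoint));
            call.setOperationName(new QName("urn:HazardService", "getVersion")); // hypothetical

            // Axis converts the arguments and return value to and from SOAP/XML automatically
            String version = (String) call.invoke(new Object[] {});
            System.out.println("Service version: " + version);
        }
    }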

When we tried to implement the WGCEP-2002 ERF as a Web Service, we encountered two significant problems: Complex input or output types are difficult to support in Web Services, and Web Services do not support large input and output parameters very well.

Regarding input and output parameter types, we found that Web Service software easily supports programs whose input and output parameters are primitive data types such as integers, floats, strings, and arrays of these types. If we wanted to exchange complex data types or custom objects through Web Services, however, we needed to customize our clients and services.

One way we modified our Web Services to support complex parameters was to write our own programs, called serializers and deserializers, which converted complex parameters into, and out of, XML. Alternatively, we could configure our clients and servers to send XML messages with binary attachments that contain complex parameters in binary format. The XML portion of the message indicates the type of binary object that is attached.
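With the Apache Axis 1.x libraries, for instance, one common shortcut is to register a JavaBean-style class with the built-in bean serializer and deserializer factories rather than writing serializers by hand. The sketch below assumes a hypothetical HazardSite bean and namespace.

    import javax.xml.namespace.QName;
    import org.apache.axis.client.Call;
    import org.apache.axis.encoding.ser.BeanDeserializerFactory;
    import org.apache.axis.encoding.ser.BeanSerializerFactory;

    // Sketch of registering a custom parameter type with an Axis 1.x client so the
    // library converts it to and from XML. The HazardSite bean and namespace are hypothetical.
    public class TypeMappingExample {

        public static void registerTypes(Call call) {
            QName xmlType = new QName("urn:HazardService", "HazardSite");
            call.registerTypeMapping(
                    HazardSite.class, xmlType,
                    new BeanSerializerFactory(HazardSite.class, xmlType),
                    new BeanDeserializerFactory(HazardSite.class, xmlType));
        }

        // Hypothetical bean; the bean serializer expects a no-argument constructor
        // and getter/setter pairs for each field
        public static class HazardSite {
            private double latitude;
            private double longitude;
            public double getLatitude() { return latitude; }
            public void setLatitude(double lat) { this.latitude = lat; }
            public double getLongitude() { return longitude; }
            public void setLongitude(double lon) { this.longitude = lon; }
        }
    }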

Regarding input and output parameter size, we found that Web Services had problems handling large input and output parameters. The Web Service design approach intentionally favors interoperability over efficiency. It is significantly less efficient to send parameters across the network in verbose, but standardized, XML formats than in efficient, but nonstandard, binary formats. Since our ERF parameters were very large, it was impractical to convert them into XML for transmission. It appears that the standard Web Service solution to large parameters is to exchange metadata about the object, such as a URL where the large parameter is stored. Then the program receiving the metadata retrieves the large parameter in a more efficient, non-XML-based, and possibly non-SOAP-based, manner.
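A sketch of that pattern: the Web Service returns only a URL string as metadata, and the client then fetches the large result directly over HTTP. The URL and payload format below are assumptions for illustration, not a standard.

    import java.io.InputStream;
    import java.io.ObjectInputStream;
    import java.net.URL;

    // Sketch of the "exchange metadata, then fetch out of band" pattern: the service
    // returns a URL string, and the client downloads the large result directly.
    // The payload is assumed here to be a serialized Java object; any compact binary
    // format agreed on by both sides would work.
    public class OutOfBandFetch {
        public static Object fetchLargeResult(String resultUrl) throws Exception {
            InputStream in = new URL(resultUrl).openStream();
            try {
                return new ObjectInputStream(in).readObject();
            } finally {
                in.close();
            }
        }
    }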

Based on this work, we decided that Web Services were not a good way to implement the WGCEP-2002 ERF within OpenSHA. On the SCEC/CME Project, however, we continue to develop and deploy other application programs as Web Services when language and platform interoperability are important.

Common Object Request Broker Architecture (CORBA)

The third technology that we considered was the Common Object Request Broker Architecture (CORBA; http://www.corba.com/). CORBA is a set of software specifications and software tools designed to support platform- and language-independent, object-oriented distributed computing. While these design goals are very close to the goals of Web Services, the CORBA standards predate the World Wide Web by several years, and CORBA uses significantly different technology than Web Services to achieve its ends.

CORBA achieves its platform independence and language interoperability through two main elements: Program interfaces are described in a language- and platform-independent format called Interface Definition Language (IDL) (note: CORBA IDL is not the same as Interactive Data Language IDL); and communication between programs in a CORBA-based system is managed by a middleware component, called an Object Request Broker (ORB) (note: a CORBA ORB is not the same thing as an Antelope ORB). Clients contact an ORB when they want to access a distributed object. The ORB is responsible for creating or locating an instance of the requested object. The ORB is also responsible for reliably transmitting data between the client program and the object.
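For a Java client, that interaction typically looks like the sketch below: the ORB is initialized, the CORBA Naming Service is located, and a remote object reference is resolved by name. The "ErfService" name and the commented-out IDL-generated stub are hypothetical.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    // Sketch of a Java CORBA client obtaining a remote object through an ORB and the
    // Naming Service. "ErfService" and its helper class are hypothetical; in a real
    // system they would be generated by compiling an IDL interface definition.
    public class CorbaClientSketch {
        public static void main(String[] args) throws Exception {
            // The naming service host and port are normally supplied via
            // -ORBInitialHost and -ORBInitialPort command-line arguments
            ORB orb = ORB.init(args, null);

            NamingContextExt naming =
                    NamingContextExtHelper.narrow(orb.resolve_initial_references("NameService"));
            org.omg.CORBA.Object objRef = naming.resolve_str("ErfService");

            // ErfService erf = ErfServiceHelper.narrow(objRef); // hypothetical IDL-generated stub
            // double[] rates = erf.getRuptureRates();           // remote call through the ORB
        }
    }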

CORBA technology is used in several seismological software projects, most notably in the seismological software and data transmission framework called FISSURES, developed by the University of South Carolina and IRIS (http://www.seis.sc.edu/software/Fissures/). FISSURES, and consequently CORBA, serves as a component of the IRIS Data Handling Interface (DHI) (http://www.iris.washington.edu/DHI/index.html). Two recent seismological development efforts that use CORBA are the California Integrated Seismic Network (CISN) Earthquake Display and the Synthetic and Observed Seismogram Access (SOSA) tool under development by IRIS and the SCEC/CME.

Several aspects of CORBA made it attractive for our OpenSHA ERF implementation. First, CORBA is fully object-oriented and is designed to support distributed objects. We found that while both servlets and Web Services support distributed computing, they do not directly support distributed objects. Also, the data communication capabilities of CORBA would handle our large objects. In addition, CORBA would support our Java interfaces without problems. Finally, CORBA technology is solid, well tested, and stable.

The disadvantages of a CORBA approach tended to be practical rather than technical. CORBA has a reputation as a "heavyweight" software technology because it has a large number of features, many of which are not required for most programs. Also, even for experienced object-oriented software developers, it can require a significant amount of time to learn. In addition, CORBA requires an ORB to act as an intermediary between programs in the system. While there are free, open-source ORB's, a CORBA-based ERF solution would need to incorporate an ORB on the ERF server and possibly on the OpenSHA client. We wanted to avoid this additional infrastructure requirement if possible.

While CORBA isn't receiving much academic and industry attention these days, it is a stable and well proven system. CORBA might be a good choice for applications that require highly reliable, distributed, cross-language, and cross-platform interoperability.

Java Remote Method Invocation (Java RMI)

The last distributed computing technology we examined was Java Remote Method Invocation (RMI; http://java.sun.com/products/jdk/rmi/). The Java RMI system was designed specifically to support distributed Java objects and consequently provides solutions to many distributed computing problems, including data communication, security, and tools for deploying distributed objects. If you are working in a pure Java application programming environment, Java RMI is a solid, comprehensive, and well tested technology.

As with the other technologies, a certain amount of software infrastructure is required to use a Java RMI approach. Both the client and the server must install a Java Virtual Machine (JVM). Also, the server must run two programs: an RMI Registry and an RMI Server. An RMI Registry is a program that runs on the server and helps a client locate a remote object by name. An RMI Server is a program that contains the distributed object code and creates new objects for clients on request. Both the RMI Registry and the RMI Server must be started before a client can use the distributed objects.
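The pieces fit together roughly as in the sketch below: a remote interface, a server-side implementation exported with UnicastRemoteObject, and a registry in which the object is bound under a name clients can look up. The interface and class names are hypothetical, not the actual OpenSHA classes.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.server.UnicastRemoteObject;

    // Minimal RMI sketch: remote interface, implementation, registry, and binding.
    // All names are hypothetical.
    public class RmiSketch {

        // Every remotely callable method must be declared in an interface that
        // extends Remote, and each method must throw RemoteException
        public interface ForecastService extends Remote {
            double[] getRuptureRates(String timeSpan) throws RemoteException;
        }

        // Extending UnicastRemoteObject exports the object so it is reachable over the network
        public static class ForecastServiceImpl extends UnicastRemoteObject implements ForecastService {
            public ForecastServiceImpl() throws RemoteException { super(); }
            public double[] getRuptureRates(String timeSpan) throws RemoteException {
                return new double[0]; // placeholder for the wrapped WGCEP-2002 computation
            }
        }

        // The "RMI Server" program: start a registry and bind the service under a name
        // that clients can find with Naming.lookup("rmi://host/ForecastService")
        public static void main(String[] args) throws Exception {
            LocateRegistry.createRegistry(1099); // or run the external rmiregistry tool
            Naming.rebind("ForecastService", new ForecastServiceImpl());
            System.out.println("ForecastService bound in the RMI Registry");
        }
    }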

We converted our wrapped Java WGCEP-2002 ERF into an RMI object without problems. Once we deployed and began to use our distributed object, however, we recognized an important architectural issue. Distributed objects are stateful, as are local objects. Once distributed objects are created, they maintain their state as long as they exist. Each OpenSHA client assumes the ERF object it is using will not change its state unexpectedly, so it is important that ERF's are not shared between clients. To ensure that each OpenSHA client interacted with its own ERF object, and not a common shared object, we needed to implement an ERF Factory on the server. Our ERF Factory is a small Java program that receives incoming ERF creation requests. The ERF Factory is responsible for creating a new ERF for the exclusive use of each client.
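In RMI terms, the factory itself is the only object bound in the RMI Registry; each call to its creation method exports and returns a fresh remote object for that client. A sketch of this pattern follows, reusing the hypothetical ForecastService interface from the previous example; it is not the actual OpenSHA factory API.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // Sketch of the per-client factory pattern described above, reusing the hypothetical
    // ForecastService and ForecastServiceImpl from the previous sketch. Only the factory
    // is registered by name; clients never share a forecast object.
    public class ForecastFactorySketch {

        public interface ForecastFactory extends Remote {
            RmiSketch.ForecastService getNewForecast() throws RemoteException;
        }

        public static class ForecastFactoryImpl extends UnicastRemoteObject implements ForecastFactory {
            public ForecastFactoryImpl() throws RemoteException { super(); }
            public RmiSketch.ForecastService getNewForecast() throws RemoteException {
                // A new exported object per request gives each client exclusive, stateful access
                return new RmiSketch.ForecastServiceImpl();
            }
        }
    }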

We noted that Java RMI implements an efficient data transfer protocol that allows us to move large amounts of data without problems. Java RMI also uses a technology called object serialization that converts objects into bit streams, sends the bits across a network, and reconstructs the objects on the receiving side. This allowed us to transfer our OpenSHA objects across the network without reformatting them.
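In practice this means that any object passed by value in a remote call simply implements java.io.Serializable. The class below is a hypothetical illustration, not an OpenSHA type.

    import java.io.Serializable;

    // Hypothetical data object passed by value across an RMI call. RMI serializes the
    // whole object graph reachable from these fields, sends the bytes over the network,
    // and reconstructs an equivalent object on the receiving side.
    public class RuptureData implements Serializable {
        private static final long serialVersionUID = 1L;

        private double magnitude;
        private double probability;
        private double[] surfaceLatitudes;
        private double[] surfaceLongitudes;
    }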

One aspect of the Java RMI system we really liked was that our Java clients worked with distributed RMI-based objects as if they were working with local objects. The Java RMI approach required only minimal changes to our OpenSHA client code.

Although the Java RMI implementation of the ERF meets all our requirements, it has certain disadvantages. For example, the RMI Registry and the RMI Server must be installed and running on our server for the system to work. Also, communication between the OpenSHA client and the RMI Registry occurs on nonstandard network ports, which can cause firewall issues.

Despite these potential drawbacks, our Java RMI-based ERF implementation meets each of our original requirements. The Java RMI-based implementation of the WGCEP-2002 ERF is successfully deployed on a SCEC/CME server and is now routinely used by scientists. Several OpenSHA applications use these distributed, RMI-based, WGCEP-2002 ERF objects, including a hazard curve plotting application and a hazard map application.

Discussion

The work described here resulted in a distributed, robust, and easy-to-use OpenSHA implementation of the WGCEP-2002 ERF. More importantly, it validated the collaborative model that the OpenSHA designers intended. When seismic hazard analysis codes, such as this ERF, are deployed as OpenSHA objects, they can be combined with other OpenSHA components in very powerful ways. For example, the WGCEP-2002 ERF can now be easily combined with any of the attenuation relationships currently supported by OpenSHA. The OpenSHA distributed-computing model encourages distributed research by allowing developers of new SHA components (e.g., SCEC's "RELM" working group, which is creating a variety of alternative ERF's; http://www.RELM.org/) to maintain control of their geophysical models while providing the outside world with access to them. In other words, we have an emerging collaboratory.

Additional Information

Additional information about the SCEC/CME Project is available on the SCEC/CME Project Web site (http://www.scec.org/cme/) and on the OpenSHA Web site (http://www.opensha.org/). We have posted the source code for several example programs that utilize the technologies discussed in this article at http://epicenter.usc.edu/ES/examples. Please contact us if you have questions or would like additional information.

REFERENCES

Field, Edward H., Nitin Gupta, Vipin Gupta, Michael Blanpied, Philip Maechling, and Thomas H. Jordan (2005). Hazard calculations for the WGCEP-2002 earthquake forecast using OpenSHA and distributed object technologies, Seismological Research Letters 76, 161-167.

Field, Edward H., Thomas H. Jordan, and C. Allin Cornell (2003). OpenSHA: A developing community-modeling environment for seismic hazard analysis, Seismological Research Letters 74, 406-419.

Jordan, Thomas H., Philip J. Maechling, and the SCEC/CME Collaboration (2003). The SCEC community modeling environment: An information infrastructure for system-level science, Seismological Research Letters 74, 324-328.

WGCEP (Working Group on California Earthquake Probabilities) (2003). Earthquake Probabilities in the San Francisco Bay Region: 2002-2031, USGS Open-File Report 03-214.


SRL encourages guest columnists to contribute to the "Electronic Seismologist." Please contact Tom Owens with your ideas. His e-mail address is owens@sc.edu.

 
