As president, I will reach out to the Senate to secure the ratification of the CTBT [Comprehensive Nuclear-Test-Ban Treaty] at the earliest practical date and will then launch a diplomatic effort to bring onboard other states whose ratifications are required for the treaty to enter into force.
—Barack Obama, September 10, 2008

As this article goes to press, Iran’s nuclear program is rapidly expanding its capacity to enrich uranium. The terrorist attacks in Mumbai, India, last November have once more raised the specter of a nuclear weapons exchange between India and Pakistan—a “regional war” that could kill tens of millions of both countries’ citizens and lead to severe changes in the global climate. North Korea, having joined the nuclear club with its first successful explosive test of a fission weapon on October 9, 2006, has reportedly separated enough weapons-grade plutonium to build at least half a dozen atomic bombs. Eight countries have openly tested nuclear weapons, and Israel is presumed to have them as well. The possibility that terrorists could get their hands on such weapons is the worst nightmare of the U.S. Department of Homeland Security and its counterparts around the world.

Yet there are hopeful signs for reducing nuclear tensions as well. By the end of 2008, 180 countries had signed the Comprehensive Nuclear-Test-Ban Treaty (CTBT), which forbids all nuclear explosions, including the explosive testing of nuclear weapons. That treaty, adopted by the United Nations General Assembly in September 1996 and promptly signed by President Bill Clinton and many other world leaders, aims to restrict the further development of nuclear weapons by countries that have them and to prevent countries that do not possess them from building them with any confidence that the devices will work on the battlefield.

Even though the CTBT has not yet come into force, every nation that signed it—including the U.S. and Russia—has maintained a moratorium on nuclear weapons testing at least since the U.N. voted to adopt it. (The three nations that have tested nuclear weapons since 1996—India, North Korea and Pakistan—have not signed the treaty.) In the U.S. this moratorium on testing has continued despite serious opposition to the treaty itself. In 1999 the U.S. Senate declined to give its constitutional “advice and consent” to the ratification of the agreement, and soon after the 2000 election President George W. Bush declared the CTBT not to be in the interests of national security.

The reason some senators voted against the treaty was concern about whether adequate tools exist for detecting attempts at clandestine nuclear testing—and thereby pinpointing treaty violations. Why renounce testing, the argument goes, if the U.S. cannot tell whether other countries are cheating? While we sleep, other countries could secretly conduct tests that would increase their ability to harm the interests of the U.S. and its allies.

In our view, those concerns about monitoring are groundless—and have been for several years. The scientific and technical community has developed a well-honed ability to monitor militarily significant nuclear test explosions anywhere in the world, above ground or below, and to distinguish them from mine collapses, earthquakes, and other natural or nonnuclear phenomena. For example, the yield of the North Korean test conducted underground in 2006 was less than a kiloton (the equivalent of 1,000 tons of TNT). Yet it was promptly detected and identified. Given such demonstrated capabilities, as well as continuing improvements in monitoring, the concerns about clandestine nuclear testing no longer provide defensible grounds for opposing the CTBT.

Learning What to Look For
The science of monitoring nuclear explosions is as old as nuclear testing itself. From the beginning, the major rationale for the U.S. to monitor was to collect basic information about the capabilities of potential adversaries. A second important reason has been to support international treaties on nuclear arms control. If each country that is party to a comprehensive test ban has reason to believe that any attempt to hide a nuclear test will very likely fail, the fear of international sanctions may deter the country from testing at all. More than 2,000 explosive nuclear tests have been conducted since the end of World War II—in the atmosphere, underwater and underground. From that record investigators have gained vast experience in acquiring and interpreting the signals of a nuclear blast.

Nuclear test explosions generate a variety of potentially detectable signals. An explosion in the atmosphere, for instance, emits an intense flash of light, which can be imaged by satellite. The roar of an explosion quickly dissipates at frequencies in the range of human hearing, but at “infrasound” frequencies—lower than 20 hertz—sound waves travel vast distances in air. Infrasonic “listening” posts equipped with microbarometers detect the very small changes in atmospheric pressure that make up the infrasound signal.
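
To give a flavor of the signal processing involved, the short sketch below isolates the sub-20-hertz band from a pressure record with a simple low-pass filter. The fourth-order Butterworth design is our illustrative choice for the sketch, not the filter any particular monitoring station uses:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def infrasound_band(pressure, rate_hz, cutoff_hz=20.0):
    """Isolate the sub-20 Hz infrasound band from a microbarometer record.

    `pressure` is a 1-D array of pressure samples; `rate_hz` is the
    sampling rate (assumed well above 40 samples per second). The
    fourth-order Butterworth low-pass is an illustrative choice.
    """
    b, a = butter(4, cutoff_hz / (rate_hz / 2.0), btype="low")
    # filtfilt runs the filter forward and backward, avoiding phase shift.
    return filtfilt(b, a, pressure)
```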

Radioactive isotopes of certain stable elements are created by all nuclear explosions, and in an atmospheric test they are blown high into the air as gases. As they cool, some of them, such as radioactive xenon, remain in the gas phase as a telltale sign of a nuclear explosion. Others condense and combine with dust to form particles that can drift around the world. As early as 1948, the U.S. Air Force monitored American atmospheric test explosions in the Pacific and confirmed that such radioactive particles are big enough to collect by pumping air through filter paper similar to that used for making coffee.

Radioisotope detection soon proved its worth. On September 3, 1949, a WB-29 bomber flying east of the Kamchatka Peninsula gathered data proving that, four days earlier, the U.S.S.R. had become the second country in the world to test a nuclear device. The mix of isotopes in the debris—notably plutonium and uranium 238—told a story of its own: the Soviets had tested a bomb that was almost an exact copy of the 21-kiloton explosive the U.S. had dropped on Nagasaki.

Quite early in the U.S. nuclear program, explosions were tested underwater as well as in the atmosphere. Sound travels very efficiently in water, particularly when the sound energy is trapped by slight changes in temperature and salinity that define the so-called sound fixing and ranging, or SOFAR, channel. It became obvious that underwater explosions with a yield as small as a few millionths of a kiloton could therefore be monitored with hydrophones, or underwater microphones, placed near the SOFAR channel at depths between 2,000 and 4,000 feet.

Seismic Monitoring
In 1963, following long and intense negotiations, the U.S., the Soviet Union and the U.K. (the first three members of the “nuclear club”) signed the Limited Test Ban Treaty. The LTBT banned nuclear testing in outer space, in the atmosphere and underwater. Parties to the treaty, however, could still test nuclear explosions underground. For that reason, the information conveyed by seismic waves—elastic wave energy that travels through Earth as a result of an impact, collapse, slippage, explosion or other force that impinges on the planet—quickly became a major focus of the monitoring community. Fortunately, the sensors needed for detecting earthquakes can do double duty in detecting bomb blasts. But learning how to distinguish earthquakes from bomb blasts took several years, and refinements of that work continue to this day.

The main difficulty arises from the great variety and number of earthquakes, chemical explosions and other nonnuclear phenomena generating seismic signals every day. No good monitoring network can avoid detecting those signals. Worldwide, for instance, more than 600 earthquakes a day eventually find their way into an international summary report, and mining operations in industrialized countries explode millions of tons of blasting agents a year. In all, about 25 seismic events above a magnitude of four take place every day, and that number goes up by a factor of about 10 for each drop of one unit in magnitude (say, from 25 to 250 events a day for a drop in magnitude from four to three).
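
That scaling is the familiar Gutenberg-Richter relation with a b-value near one. A minimal sketch, built only from the numbers quoted above:

```python
def daily_event_count(magnitude, rate_at_m4=25.0):
    """Estimated global daily count of seismic events at or above `magnitude`.

    Encodes the scaling described in the text: roughly 25 events a day
    above magnitude 4, rising by a factor of 10 for each one-unit drop
    in magnitude (a Gutenberg-Richter relation with b = 1).
    """
    return rate_at_m4 * 10.0 ** (4.0 - magnitude)

for m in (4.0, 3.5, 3.0):
    print(f"magnitude >= {m}: ~{daily_event_count(m):,.0f} events/day")
```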

At most locations on Earth, magnitude 4 corresponds to an explosive yield of less than a kiloton for an underground explosive device packed inside a small cavity in hard rock, from which seismic signals radiate efficiently. In other locations the rock is softer and more of the energy from the explosion is absorbed, reducing its measured seismic magnitude. Some policy makers have worried that a country might try to reduce the seismic signal by modifying the immediate environment of the test. For example, a large cavity hollowed out of rock could partly muffle the seismic waves from a blast, but for any militarily useful test explosion the cavity would have to be so big it would collapse or attract attention in other ways—for example, the excavated material would have to be concealed from satellites. The risk of discovery would be very high.
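
The correspondence between magnitude and yield is often summarized by empirical relations of the form mb = A + B log10 Y. The coefficients below are illustrative values for a well-coupled explosion in hard rock, an assumption for this sketch rather than a calibration from any monitoring agency:

```python
import math

# Illustrative coefficients for a well-coupled explosion in hard rock;
# real calibrations vary from region to region.
A, B = 4.45, 0.75

def mb_from_yield(yield_kt):
    """Body-wave magnitude implied by an explosive yield in kilotons."""
    return A + B * math.log10(yield_kt)

def yield_from_mb(mb):
    """Explosive yield in kilotons implied by a body-wave magnitude."""
    return 10.0 ** ((mb - A) / B)

print(f"1 kt in hard rock -> mb ~ {mb_from_yield(1.0):.2f}")  # ~4.45
print(f"mb 4.0            -> ~{yield_from_mb(4.0):.2f} kt")   # ~0.25 kt
```

With these numbers, a magnitude 4 event corresponds to roughly a quarter of a kiloton in hard rock, consistent with the statement above that magnitude 4 implies a yield of less than a kiloton.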

In practice, with seismic monitoring alone, all nuclear explosions down to yields of one kiloton can be detected with 90 percent reliability by examining between about 50 and 100 seismic events a day. To detect nuclear explosions with lower yields, the number of seismic events that must be examined goes up. Even one kiloton, however, is quite small for a nuclear explosion, and according to a 2002 report by the U.S. National Academy of Sciences, a test of that size would be of little use to a testing country attempting to make larger nuclear weapons—particularly if the country had little prior experience with nuclear testing.

Where to Focus, What to Ignore
Monitoring a nuclear explosion begins with the detection of signals, followed by an attempt to gather and associate all the signals recorded by various monitoring stations that originate from the same event. The final steps are to estimate the location of the event, primarily from the differences in the arrival times of signals at different stations, and to identify it. For example, did it have the characteristics of a meteor breaking up in the atmosphere, a mining blast or a test of a nuclear weapon? And if the latter, what was its yield? What country carried it out?
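
As a toy illustration of the location step, the sketch below recovers an epicenter by grid search over candidate locations, scoring each by how consistently the stations' arrival times imply a single origin time. The station layout, wave speed and "observed" times are all invented for the example:

```python
import numpy as np

# Toy 2-D epicenter search. A real system uses 3-D travel-time models
# and many seismic phases; everything here is illustrative.
WAVE_SPEED_KM_S = 8.0  # rough P-wave speed at upper-mantle depths

stations = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 400.0], [300.0, 300.0]])
true_source = np.array([120.0, 250.0])
arrival_times = np.linalg.norm(stations - true_source, axis=1) / WAVE_SPEED_KM_S

best, best_spread = None, np.inf
for x in np.linspace(0.0, 400.0, 201):
    for y in np.linspace(0.0, 400.0, 201):
        travel = np.linalg.norm(stations - [x, y], axis=1) / WAVE_SPEED_KM_S
        # A correct location makes every station imply the same origin time.
        spread = (arrival_times - travel).std()
        if spread < best_spread:
            best, best_spread = (x, y), spread

print(f"recovered epicenter: {best}  (true: {tuple(true_source)})")
```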

The vast majority of seismic events can be classified automatically by computer algorithms; only the hard cases are flagged by the software for human intervention. Specialists have been monitoring earthquakes and mine blasts for many years and have thereby become well acquainted with the way many of their features are reflected in the seismic record. That knowledge, in turn, has helped inform efforts to identify nuclear test explosions. In particular, several kinds of seismic events became touchstones as protocols were developed for identifying a particular event as a nuclear explosion.

One kind of event was a series of mine collapses—one in 1989 in Germany and two more in 1995, one in Russia and the other in the U.S. Seismic stations throughout the world detected all three, but the data raised concerns because, at great distances, the classic method of distinguishing explosions from other seismic events incorrectly suggested the events were underground explosions. In that method, seismologists compare the strength of long-wavelength seismic waves traveling over Earth’s surface with that of body waves, which pass deep through the planetary interior. For example, a shallow earthquake and an underground explosion might set off body waves of the same strength, but if so, the surface waves from the earthquake would be significantly stronger than those from the explosion.
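
In code, that classic screen reduces to comparing the surface-wave magnitude Ms with the body-wave magnitude mb. The half-unit decision offset below is an illustrative assumption; operational screens fit region-specific decision lines to calibration data:

```python
def explosion_like(mb, ms, offset=0.5):
    """Classic Ms:mb screen, as described in the text: for the same
    body-wave magnitude, an earthquake radiates noticeably stronger
    surface waves than an explosion does.

    The 0.5-magnitude-unit offset is an illustrative assumption.
    """
    return ms < mb - offset

print(explosion_like(mb=5.0, ms=5.2))  # False: strong surface waves, earthquake-like
print(explosion_like(mb=5.0, ms=3.8))  # True: weak surface waves, explosion-like
```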

A closer analysis of the seismic waves from the mine collapses showed that those waves could not have come from an explosion, because they began with a trough rather than a peak: the ground had initially moved inward toward the source rather than outward, just as one would expect from a mine collapse. The episode was important because it showed that such an event could be reliably distinguished from an underground explosion on the basis of seismic recordings alone.
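
A simple polarity check captures the idea. The noise-window length and trigger threshold here are arbitrary choices for the sketch:

```python
import numpy as np

def first_motion(trace, noise_samples=200, trigger_sigma=5.0):
    """Sign of the first significant vertical ground motion in `trace`:
    +1 for an initial outward push (explosion-like), -1 for an initial
    inward pull (a trough, as in the mine collapses described above),
    0 if nothing exceeds the trigger.

    `trace` is a 1-D array of ground displacement; the first
    `noise_samples` samples are assumed to be pre-event noise.
    """
    threshold = trigger_sigma * np.std(trace[:noise_samples])
    for sample in trace[noise_samples:]:
        if abs(sample) > threshold:
            return 1 if sample > 0 else -1
    return 0
```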

A second event illustrated the importance of the seismic distinction between two kinds of body waves for monitoring nuclear explosions. In 1997 a small seismic shock of magnitude 3.5, along with an even smaller aftershock, was detected below the Kara Sea, near Russia’s former nuclear test site on the Arctic island of Novaya Zemlya. Was Russia violating its obligations as a signatory of the CTBT?

The surface waves from the event were too small to measure reliably, and so once again the classic method of identifying an explosion—comparing the strength of the long-wavelength surface waves with that of the body waves—could not be applied. But the detection of “regional” seismic waves, which pass through the upper mantle and crust of Earth and which can be measured within about 1,000 miles of an event, resolved the issue. They enabled seismologists to distinguish compressional, or P, waves from shear, or S, waves generated by the event. (P-waves travel as oscillating regions of compression and rarefaction along the same direction in which the waves propagate; S-waves oscillate at right angles to the propagation direction.)

It was known that the P-waves from an explosion are typically stronger than the S-waves, but that distinction was just beginning to be applied at frequencies above five hertz. This time the measured ratio of the strengths of the P- and S-waves at high frequency—and the fact that the main shock had an aftershock—showed that the Kara Sea event was an earthquake.
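
The discriminant itself is easy to sketch: high-pass both phases above about five hertz and compare their amplitudes. The window times bracketing the P and S arrivals are assumed to come from an earlier picking step, and the crude FFT filter stands in for a proper filter design:

```python
import numpy as np

def p_to_s_ratio(trace, rate_hz, p_window, s_window, corner_hz=5.0):
    """Ratio of P-wave to S-wave RMS amplitude above `corner_hz`.

    `trace` is a 1-D ground-motion record sampled at `rate_hz`;
    `p_window` and `s_window` are (start, end) times in seconds around
    the picked arrivals. Ratios well above 1 are explosion-like;
    earthquakes typically show relatively stronger S-waves.
    """
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / rate_hz)
    spectrum[freqs < corner_hz] = 0.0           # crude high-pass above 5 Hz
    filtered = np.fft.irfft(spectrum, n=trace.size)

    def rms(window):
        i0, i1 = (int(t * rate_hz) for t in window)
        return np.sqrt(np.mean(filtered[i0:i1] ** 2))

    return rms(p_window) / rms(s_window)
```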

More Eyes to Catch Cheaters
A third touchstone event, the North Korean nuclear test explosion of October 9, 2006, illustrated the importance of recording seismic waves as close as possible to their source. The blast left traces on sensors worldwide even though its yield was estimated at less than a kiloton. But regional seismic data were required to determine that the signals came from an explosion and not from an earthquake. As it happened, the world was well prepared. Several seismic stations were close by, including one in the network of the International Monitoring System (IMS), the CTBT’s own system for monitoring nuclear explosions.

After the seismic detection of the Korean test and North Korea’s announcement of it, radioactive matter detected in the air and on the ground in Asia, as well as downwind across the Pacific Ocean at an IMS station in Canada, decisively confirmed the explosion as nuclear. Detecting the radioactivity was itself highly reassuring: the topography of the North Korean test site suggests that the explosion was buried deeper than most other subkiloton tests, yet the test still leaked radioactive material.

Experience with these and other special seismic events has shown that the best seismic data for resolving a specific monitoring issue can sometimes come from stations that are not part of any treaty-monitoring network. Those stations, built with other goals in mind, can provide the dense coverage that makes it possible to strengthen the evidence derived from dedicated monitoring networks. Monitoring stations in the Korean region, for instance, are so dense that underground explosions with a yield as low as a few percent of a kiloton can be detected there.

Well-tested networks of seismic stations for rapidly analyzing, assembling and distributing large amounts of seismic data already exist, quite independently of the IMS. Thousands of seismometers have been set up throughout the world to evaluate earthquake hazards and to determine our planet’s internal structure. In the U.S., the U.S. Geological Survey and the Incorporated Research Institutions for Seismology, a consortium of more than 100 American universities, are jointly building and operating seismic data systems. As of the end of 2008, IRIS was receiving current seismic data from 71 networks that operate 1,797 stations, including 474 outside the U.S. An international group, the Federation of Digital Seismic Networks, plays a huge and still growing role in the data collection. Such networks are well suited to picking up unanticipated nuclear test explosions, as well as high-quality regional signals from events that might seem suspicious if they were analyzed by a sparse global network alone. Those data can thereby supplement data from the IMS and the various national treaty-monitoring networks.

One network of particular note is the monitoring system the U.S. still maintains specifically for detecting nuclear explosions. The Atomic Energy Detection System (AEDS) is operated by the Air Force Technical Applications Center (AFTAC) out of Patrick Air Force Base in Florida and includes an extensive global network of seismometers. AFTAC reports on the data from the AEDS network within the U.S. government. If the CTBT finally goes into force and the AEDS or some other national facility detects a suspicious event, such data can be presented in an international forum, thereby augmenting information gathered by the IMS.

How Low Must You Go?
Even though existing technologies can ferret out rather small bomb tests, and technical advances in monitoring will undoubtedly continue, one practical caveat is in order. It is obviously not possible to detect explosions of every size, with 100 percent reliability, all the way down to zero explosive yield. In this sense, monitoring is imperfect. But does it really matter that a technologically sophisticated country could perhaps conceal a very small nuclear explosion from the rest of the world, even though the explosion served no practical purpose in a nuclear weapons program? The goal of monitoring systems is to ensure that the yield of a successfully concealed nuclear test explosion would have to be so low that the test would lack military utility.

In the 1950s President Dwight D. Eisenhower was willing to agree to a comprehensive test ban even if monitoring was not sensitive enough to detect explosions with yields less than a few kilotons. Today monitoring is much more effective. Is the CTBT worth scuttling if a nuclear device of less than a kiloton might in principle be exploded without being detected? The 2002 analysis by the National Academy of Sciences argues that, on the contrary, ratifying the CTBT would be a positive development for U.S. national security.

Nevertheless, some leaders in the military and in nuclear weapons laboratories have opposed the CTBT. They argue that it prevents the U.S. from verifying the continuing viability of its current nuclear arsenal or from developing more sophisticated nuclear weapons. But the reliability of proven designs in the U.S. stockpile of nuclear weapons does not, in practice, depend on a program of nuclear test explosions. Rather, reliability is ensured by nonexplosive testing that is not restricted by the CTBT. As for new nuclear weapons, the CTBT is an impediment—just as it was intended to be—and its restrictions on U.S. weapons development must be weighed politically against the merits of the restrictions it imposes on all signatories.

Our discussion has touched on several important technical issues related to weapons development and monitoring that arise as the U.S. judges whether ratifying the CTBT is in the national interest. Unfortunately, individuals and organizations with strong opinions on the CTBT have sometimes turned such issues—the assessment of monitoring capability in particular—into a surrogate battleground for an overall political evaluation of the treaty itself and the trade-offs it implies. We would urge, instead, that the main debate focus on the merits of the treaty directly and remain separate from technical, professional reviews of monitoring capability.

If the CTBT finally does go into force, the de facto moratorium on international testing would become formally established. The treaty could then become what it was always intended to be: a vital step in strengthening global efforts to prevent the proliferation of nuclear weapons and a new nuclear arms race.

Note: This article was originally printed with the title, "Monitoring for Nuclear Explosions".