Any citizen with even a casual awareness of the public
debate over nuclear power is familiar with the usual talking points, pro and
con, regarding this issue: safety, costs, environmental impacts, etc.
I will not burden the reader with a rehash of these familiar issues.
Instead, I propose to enrich the debate with some issues
with which the general public might be less familiar, all of which point
strongly to the conclusion that electric power generation from nuclear
reactors should be phased out with deliberate speed and the technology
abandoned -- permanently.
This essay consists of three sections. First, the
recent disaster at the Fukushima nuclear plant in Japan urgently brings the
science of plate tectonics into the debate, and raises the question of
whether the promoters of nuclear power are willing and able to take the
long-term implications of that technology into consideration as they select
sites for these facilities.
In the second section, we ask whether it is possible to
accurately and reliably assess the safety of nuclear reactors. A
failed attempt to do so thirty years ago suggests that such an assessment is
impossible, not simply because of a lack of scientific knowledge and
technological capacity, but more fundamentally, because of the
insurmountable inability to anticipate all possible circumstances that might
occur in the operation of the plant.
Finally, these and other considerations lead to the
conclusion that nuclear power is not economically viable and sustainable
without massive government subsidies that are unavailable to its
competitors.

Fukushima: A Disaster Waiting to Happen
What were Tokyo Electric Power Co. (TEPCO) and General
Electric thinking when they decided to site the world’s largest nuclear
power complex at Fukushima, on the eastern coast of Northern Japan?
Perhaps they weren’t thinking at all, or at least they
were thinking only for the short-term. Myopia is endemic to the corporate
mind, which is dedicated to an early return on investment. "In the long
run," John Maynard Keynes famously remarked, "we are all dead."
Nonetheless, a disastrous earthquake followed by a
tsunami was certain to happen along the eastern coast of Japan. Not a
question of if, but of when. That certainty was ordained by
the science of plate tectonics and validated in the geological record.
The sword of Damocles hanging over Fukushima is the Japan
Trench, located about 100 miles due east of and parallel to the coastline
where the plant is located.
The trench is a subduction zone, where the Pacific plate
dives down under the Okhotsk plate and into the mantle. The Japanese
islands, like the Marianas and the Aleutians, owe their very existence to
subduction which, as it grinds along, produces great earthquakes and
tsunamis.

Tsunamis can also be produced by volcanoes and landslides. But
they most reliably occur along subduction zones, as the ocean floor during
an earthquake is suddenly and violently jolted, causing a pulse of water to
move outward and perpendicular to the fault line. The Indonesian tsunami of
December 26, 2004, which killed almost a quarter of a million people, was
caused by a magnitude 9.1 earthquake along a subduction zone about 100 miles
west of Sumatra. Among other noteworthy subduction quakes/tsunamis are the
"Good Friday" Alaska earthquake in 1964 (magnitude 9.2), and the Chilean
earthquake of 2010 (magnitude 8.8).
And so, because the Japan Trench is parallel to the coast
of northern Japan, the tsunami was aimed directly at that coast.
Because of the dynamics of plate tectonics,
earthquake/tsunamis are endemic to Japan. For example, in 1923 a magnitude
eight earthquake struck central Japan, leveling the city of Yokohama and
destroying more than half of Tokyo, at the cost of about 100,000 lives.
The investors of the Fukushima plant knew all this, and
yet they went ahead and built a facility that was designed to withstand a
magnitude seven earthquake. (The Richter magnitude scale is logarithmic,
not linear. The energy released by a magnitude nine quake is not
two-ninths greater than that of a magnitude seven; it is about a
thousand times greater.) TEPCO continued to operate the facility,
despite warnings from the International Atomic Energy Agency.
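The arithmetic behind the thousand-fold figure can be sketched in a few lines of Python, using the standard Gutenberg-Richter relation that radiated seismic energy grows by a factor of about 31.6 (10^1.5) per whole magnitude step:

```python
# Radiated seismic energy scales roughly as 10^(1.5 * M)
# (the Gutenberg-Richter energy-magnitude relation).

def energy_ratio(m_larger: float, m_smaller: float) -> float:
    """How many times more energy the larger quake releases."""
    return 10 ** (1.5 * (m_larger - m_smaller))

# A magnitude 9 quake versus the magnitude 7 design basis at Fukushima:
print(round(energy_ratio(9.0, 7.0)))  # prints 1000
```

So a plant engineered against a magnitude seven event faced roughly a thousand times the energy it was designed to withstand.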
To put the matter bluntly, the investors and designers of
Fukushima gambled that during the operational lifetime of the plant, there
would be no earthquake greater than magnitude seven. They gambled, and the
people of northern Japan lost. Economists call this loss an
"externality."

In California, two commercial nuclear power facilities, at
San Onofre between San Diego and Los Angeles and at Diablo Canyon near San
Luis Obispo, are located along the Pacific coast near seismically active
faults. As a resident of southern California, I must wonder whether the
operator of the San Onofre plant, Southern California Edison, like TEPCO
in Japan, is likewise gambling with my life and the lives of my neighbors. Heads
they win, tails we lose.
And earthquakes and tsunamis are not the only, or even
the greatest, threat posed by nuclear power reactors. The Three Mile Island
accident was caused by a mechanical failure, and the Chernobyl disaster was
caused by human error.
Building a nuclear power complex along a shoreline
opposite a subduction zone is risky. That fact is a "known known." How
risky? That is an unknowable unknown. Any attempt to assess the risk, or for
that matter the risk associated with any and all nuclear power plants, is
almost certain to underestimate that risk. A reliable and accurate
assessment of the risk of a failure of a nuclear power reactor is
unobtainable, now and forever.
These are bold assertions that I will endeavor to
demonstrate below. To do so, we will examine an ambitious and massive
attempt, some thirty years ago, to assess the safety of nuclear power
plants, and its subsequent spectacular failure to achieve that objective.
Because the reasons for that failure remain valid today, this is a tale well
worth retelling in the light of the disaster at Fukushima and in the face of
the determination of the Obama Administration, despite that disaster, to
proceed with the construction of the first new nuclear power plants in
more than three decades.

Reactor Safety: The Rasmussen Report Revisited
Concerned about public criticism of their nuclear energy
ambitions, the promoters of commercial atomic energy at the Atomic Energy
Commission (AEC) initiated, in 1972, the "Reactor Safety Study," which was to
become known as "The Rasmussen Report," after its Director, Norman Rasmussen
of the Massachusetts Institute of Technology. In August, 1974, the draft
Report was released with much fanfare in a public-relations extravaganza
that prompted one newspaper to proclaim: "Campaigners Against Nuclear Power
Stations Put to Rout." Following this triumphant entrance, scrupulous
scientific assessment began behind the facade, after which it was all
downhill for the Report. The AEC's successor organization, the Nuclear
Regulatory Commission (NRC), quietly withdrew endorsement of the Rasmussen
Report in January, 1979.
Rushed into print to provide support for a renewal of the
Price-Anderson Act (a federally mandated limit on industry liability
following a nuclear reactor failure), an eighteen-page "Executive Summary"
of the final Report was distributed to Congress and the Press in October,
1975, in advance of the release of the full, 2,300-page Report.
Perhaps the most famous item of the Executive Summary was
the claim that the chances of being killed by a nuclear power plant
"transient" were about equal to those of being killed by a meteorite. This
mind-catching statistic has proven to have a longevity far exceeding that of
the Report which spawned it. In general, the Summary concluded that
... The likelihood of reactor accidents is much
smaller than that of many non-nuclear accidents having similar
consequences. All non-nuclear accidents examined in this study,
including fires, explosions, toxic chemical releases, dam failures,
airplane crashes, earthquakes, hurricanes and tornadoes, are much more
likely to occur and can have consequences comparable to, or larger than,
those of nuclear accidents.
Closer examination revealed a startling discrepancy
between the cheerful reassurances of the Executive Summary and the nine
volumes of technical information. In his splendid book, The Cult of the
Atom (Simon and Schuster, 1982), based upon tens of thousands of pages
of AEC documents pried loose by the Freedom of Information Act, Daniel Ford
writes:
As one moves from the very technical material ... to
the Executive Summary ... a change of tone as well as of technical
content is evident. In the "back" of the study, there are cautionary
notes, discussion of uncertainties in the data, and some sense that
there may be important limitations to the results. The qualifications
successively drop away as one moves toward the parts of the study that
the public was intended to see. In the months following the study's
completion, the honesty of the official summary ... became the most ...
The reassuring conclusions of the Rasmussen Report were
based upon numerous highly questionable assumptions and methodologies.
Among them:
By definition, the report estimated damage and
casualties due to anticipated events. There is no clear acknowledgment
that all possible significant events were not, and could not be, covered
by the study. As it turned out, the near-disaster at Three Mile Island
was just one of several "unanticipated" events. And as noted above, a
magnitude nine earthquake was not anticipated by the designers of the
Fukushima plant.
In fact, whole categories of failures were excluded
from the risk estimates. For example, it was assumed that back-up safety
systems would always operate in case of the failure of a primary system.
Given this assumption, the risk of a catastrophic accident would be the
product of the probability of the independent failure of both systems,
and thus highly unlikely. However, this discounted the possibility of a
"common-mode failure," such as that at Browns Ferry, Alabama, in 1975
(soon after the release of the Report), where, due to faulty design, an
accidental fire disabled both systems at once -- yet another event
excluded by the Rasmussen rules. Similarly, the Japanese earthquake and
tsunami of March 11, 2011 disabled both the primary and backup safety
systems at the Fukushima facility.
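A toy calculation makes the point. The failure probabilities below are invented for illustration (they are not taken from the Report); they show how the independence assumption can understate the true risk by orders of magnitude once a single shared cause can disable both systems:

```python
# Hypothetical numbers: suppose the primary and backup safety systems
# each fail, on their own, with probability 1e-3 per demand.
p_primary = 1e-3
p_backup = 1e-3

# Rasmussen-style assumption: the two failures are independent,
# so both fail together only with probability p_primary * p_backup.
p_independent = p_primary * p_backup  # one in a million

# Common-mode failure: a single shared cause (a fire, a quake, a flood)
# disables both systems at once with some probability of its own.
p_common = 1e-4

# The total risk is dominated by the common cause, not the product term.
p_actual = p_independent + p_common

print(f"{p_independent:.0e} vs {p_actual:.1e}")
# With these numbers, independence understates the risk about 100-fold.
```

The Browns Ferry fire and the Fukushima tsunami were precisely such shared causes: one event, both systems gone.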
The Report focused on mechanical and equipment
failures, and discounted design flaws and "human error," as if these
were in some sense insignificant. Also overlooked was the possibility of
sabotage and terrorism.
The report adopted the so-called "fault-tree" method
of analysis, described by the Report as "developed by the Department of
Defense and NASA ... [and] coming into increasing use in recent years."
Not so. As Daniel Ford reports, "long before [Rasmussen] adopt[ed] the
fault-tree methods ... the Apollo program engineers had discarded them."
As a retired professor of engineering recently explained to me:
"the simulation or probability tree ... analyses ... are used to locate
the weak links in your design, given the possible sources of failure
that you know of or can specify... [However, the analyses] are not
meant to yield a credible probability of failure, but instead yield at
best a lower bound for that probability." (EP emphasis)
The "probabilities" assigned to the component
"events" in the "fault tree," leading to a hypothetical failure, were
based upon almost pure speculation since, because the technology was
new, the evaluators lacked any precedents upon which to base probability
assessments. Both Rasmussen himself and his Report admitted as much
(Ford, 138, 141). Thus, because the Report was fundamentally an advocacy
document, this gave its pro-nuclear investigators the license to concoct
unrealistically low risk assessments.
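The limitation the professor describes can be seen in a miniature fault tree. Everything below is hypothetical -- an invented gate structure with invented probabilities, not the Report's -- but it shows why the computed top-event probability is at best a lower bound: the tree only covers the failure modes its builders thought to include.

```python
# A toy fault tree. Top event "core damage" requires BOTH cooling pumps
# to fail (AND gate); either an electrical fault or a valve fault fails
# a pump (OR gate). All probabilities are made up for illustration.

def or_gate(*ps: float) -> float:
    """P(at least one input event occurs), assuming independence."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def and_gate(*ps: float) -> float:
    """P(all input events occur), assuming independence."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

pump_fails = or_gate(1e-3, 5e-4)                 # electrical OR valve fault
core_damage = and_gate(pump_fails, pump_fails)   # primary AND backup pump
print(f"estimated: {core_damage:.1e}")

# The tree only "knows" the branches written into it. Add one omitted
# failure mode -- say, an operator error that defeats both pumps --
# and the estimate jumps; the original figure was only a lower bound.
with_omitted = or_gate(core_damage, 1e-4)
print(f"with omitted mode: {with_omitted:.1e}")
```

Every unanticipated branch -- a common-mode fire, a magnitude nine quake, sabotage -- sits outside the tree, so the published number can only err on the low side.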
These "low risk estimates" in the Executive Summary
were startling, to say the least: "non-nuclear events," it claimed, "are
about 10,000 times more likely to produce large numbers of fatalities
than nuclear plants." But the footnote to this statement gave it away,
when it added that such "fatalities ... are those that would be
predicted to occur within a short period of time" after the accident.
However, few fatalities due to radiation exposure are "short-term." In
fact, as physicist Frank von Hippel pointed out, a careful reading of
the voluminous technical material would disclose that for every ten
"early deaths" conceded in the Summary, the same accident would cause an
additional seven thousand cancer deaths. (Ford, 170) This was only one
of the several scandalous discrepancies between the "public" Executive
Summary and the Technical material in the Report, which led Morris
Udall, then Chair of the Subcommittee on Energy and the Environment, to
demand a new Executive Summary. The NRC refused.
The "peer review" of the Report was perfunctory at
best. The reviewers were given eleven days to assess an incomplete
3,000-page draft report -- a schedule virtually designed to yield invalid
assessments. Even so, many of the referees returned withering
criticisms, especially of the statistical methods employed by the
studies. The findings of this review group were not released by the AEC
or the NRC, and the published Report was unaltered by these criticisms.
These and numerous other flaws in the study led one
critic to wryly comment that "the chance of the Rasmussen Report being
substantially wrong is somewhat more than the chance of your being hit by a
meteorite."

Though the general public was much impressed by the
public relations show orchestrated by the AEC, informed professional
investigators immediately began to erode its credibility. Among these were
the Bulletin of the Atomic Scientists, the Union of Concerned Scientists,
and, most significantly, an independent panel set up by the American
Physical Society and chaired by Harold Lewis of the University of
California, Santa Barbara. Each of these returned severe criticisms of the
Report.

All this bad news eventually led the Reactor Safety Study
into the halls of Congress. Daniel Ford describes what followed:
In some cases [congressional] members and staff
probed the issues [of reactor safety] carefully, prepared detailed
follow-up reports, and tried to bring about needed reforms. Congressman
Morris Udall's Subcommittee on Energy and the Environment, for example,
held extensive hearings on the validity of the Reactor Safety Study. His
protests about the misleading manner in which the report's findings were
presented to the public forced the NRC, in January 1979, to repudiate
the results of the study. (p. 226)
And so, at length, the relentless discipline of science
and scholarship, combined with a rare display of uncompromising
congressional oversight, brought about the downfall of the AEC/NRC
"Reactor Safety Study."
The NRC's "withdrawal of endorsement" stood in stark
contrast to its release, scarcely four years earlier. This time there were
no publicity releases, media interviews or press conferences. It was hoped
that the announcement would go unnoticed amid the usual torrent of news
out of Washington. Given the widespread public opposition to nuclear
power, this expectation was bound to be frustrated.
In the end, the Rasmussen Report was yet another attempt
at justification of "the peaceful atom" which backfired on the proponents.
Historians looking back on this technological extravaganza may note, with
some bewilderment, that however severe the attacks by the critics,
commercial nuclear power was, in this case at least, inadvertently done in
by its defenders.
Nuclear Power Fails the Free Market Test
Still more substantial objections to nuclear power have
been raised by scientists and engineers much more qualified than I am. So I
will not repeat them here. (To read these objections, google "Physicians for
Social Responsibility," "Union of Concerned Scientists," "Natural Resources
Defense Council," and "The Rocky Mountain Institute.") However, in closing, a
few additional concerns are worthy of mention.
(1) First of all, every source of electric power, with
the exception of nuclear power, "fails safe." A failure at a coal-fired
plant would, at worst, destroy the plant. But the damage would be localized
and short-term. Failures at a wind-farm or solar facility are trivial.
However, the damage caused by a nuclear meltdown and radiation release
endures for millennia and can render huge areas permanently uninhabitable, as
it has in Ukraine and Belarus after the Chernobyl disaster, and as it
likely will in Japan following the Fukushima catastrophe.
(2) Nuclear industry assurances as to the safety of their
facilities are flatly refuted by their unwillingness to fully indemnify the
casualty and property losses that would result from a catastrophic release
of radiation from a nuclear accident. Since 1957, the Price-Anderson Act has
set a limit on the liability that private industry must pay in the event of
an accident. The amount of that limit, originally $560 million for each
plant, has been routinely revised, so that as of 2005 the limit is now $10.8
billion for each incident. Clearly, the Fukushima disaster will exact a cost
far exceeding that amount. Were such an event to occur in the United States,
the cost of such a disaster in excess of ten billion dollars would be borne by the
victims and by the taxpayers. The contradiction is stark: the nuclear
industry and its enablers in the NRC tell the public that nuclear energy is
safe. And yet, at the same time, they are unwilling to back up these
assurances with a full indemnification of their facilities.
(3) The public has not been adequately informed of the
ongoing hazards of nuclear power. For example, the Union of Concerned
Scientists reports that in the past year there were fourteen
"near misses" among the 104 nuclear plants operating in the United
States. And according to the Washington Post (March 24),
the Nuclear Regulatory Commission has disclosed that "A quarter of U.S.
nuclear plants [are] not reporting equipment defects."
(4) The widely-heard claim that "nobody in the United
States has ever died due to commercial nuclear power" utilizes
"the fallacy of the
statistical casualty." Specific cancer deaths due to artificial
nuclear radiation are, of course, indistinguishable from cancer deaths due
to other causes. Yet epidemiological studies show, beyond reasonable doubt,
that some deaths are attributable to artificial radiation. The inference
from "no identifiable specific deaths" to "no deaths whatever" is a fallacy
made infamous by the tobacco industry's successful defense against suits
filed by injured smokers or their surviving families.
(5) The claim that nuclear power is the "safest" source
of energy commits the "fallacy of suppressed evidence." Such a claim
pretends that the risk of nuclear power is confined to the radiation risks
adjacent to a normally operating plant and immediately following each
"event." Usually excluded from such assessments are deaths and injuries
involved in the mining, milling, processing, shipment, reprocessing, storage
and disposal of fuel -- in short, the entire "fuel cycle."
(6) Similarly, the claim that nuclear power is the
"cheapest" power available is likewise based upon "the fallacy of suppressed
evidence." Specifically, nuclear proponents arrive at this conclusion by
"externalizing" (i.e., failing to include) such costs as government
subsidies for research and development, the costs of disposing of wastes,
the cost of decommissioning of facilities, and, again, the cost of risks to
human life, health and property. As noted above, the risk factor is excluded
due to the Price-Anderson Act and the failure to acknowledge "statistical
casualties." Once all these "externalized costs" are included,
nuclear power adds up to the most expensive energy source, hands down.
Over fifty years of industry research, development and operation have not
altered this fact. Meanwhile, as R & D of alternative energy sources
progress and economies of scale kick in, the costs of solar, wind, tide,
geo-thermal and biomass energy continue to fall. (See UCS,
"Nuclear Power Subsidies: The Gift that Keeps on Taking," and Amory
Lovins, "With Nuclear Power, 'No Acts of God Can Be Permitted.'")
Because of considerations such as these, no nuclear
plants have been commissioned since the completion in 1985 of the Diablo
Canyon facility along the central coast of California. The Obama
Administration is prepared to change all this, as the President has
announced $8 billion in federal loan guarantees to allow the building of the
first nuclear power plant since Diablo Canyon.
Without this "federal intervention," along with the
Price-Anderson liability cap, no new nuclear plants would be built. The
"free market" would not allow it. And yet there are no conspicuous
complaints from the market fundamentalists on the right.
Why am I not surprised?
PostScript: My involvement in the Diablo Canyon
controversy goes way back. In 1981, a group of local citizens blockaded the
Diablo Canyon construction site in an act of civil disobedience, for which,
predictably, they were arrested. At the time, I was a Visiting Associate
Professor of Environmental Studies at the University of California, Santa
Barbara. The defense team asked me to testify as to the "reasonableness" of
the protesters’ belief that the Diablo Canyon nuclear reactors posed a
significant danger to their community and to themselves. The prosecution
objected on the grounds that the defense was asking me to "do the jury’s
work." The judge concurred, and so I was not permitted to testify. My
account of this experience and critique of the ruling may be found in my
"A Philosopher's Day in Court" at my website, The Online Gadfly. The
discussion above of the Rasmussen Report is a revision of my unpublished
class discussion paper from 1980, "The Strange Saga of the Rasmussen
Report."
Copyright 2011 by Ernest Partridge