Public, Expert, and Activist Perceptions of the Plutonium on Board the
Cassini-Huygens Mission
Presented to the American Association for the Advancement of Science
Anaheim, CA, January 1999
Center for Hazards Research
California State University
Chico, CA 95929-0425 *
Abstract
In late 1997, NASA launched the Cassini orbiter and Huygens probe combination
to Saturn and its moon, Titan. Because of the distance of Saturn from the sun
and the mass of the spacecraft (near the launch limit of the Titan
IV/Centaur), NASA opted for compact plutonium dioxide radioisotope
thermoelectric generators (RTGs) for instrument power and heating needs. The
safety of the plutonium erupted into public controversy by 1995, leading to
concerted efforts to cancel the launch and, now, to abort the Earth flyby of
August 1999. This study focuses on the social construction of this
controversy. Anticipated contributions include analysis of an important case
study in technological risk perception, assessment, and management.
Additionally, the study offers the theoretical contribution of linking media
criticism literature to the shaping of public and official hazard perceptions
by activist media campaigns. It delineates the media interplay between NASA's
risk assessment experts and Cassini activists, pro and con. It goes on to
examine the effects of this interplay on public hazard perceptions. The
ongoing study then assesses resulting political pressures reported by elected
officials with risk management decision-making and oversight responsibilities
toward NASA. The nature and scale of future American space initiatives may
well be transformed by the outcome of this particular risk assessment/risk
management debate.
Prior Work in Hazard Perception
Lay perceptions differ markedly from expert perceptions: laypeople exaggerate
certain hazards and trivialize others, far from the expectations of risk
assessors. This is the case with Cassini, where NASA asserts that the risk of
plutonium exposure from launch or flyby accidents is negligible and its
opponents claim that NASA is covering up the extent of the risk.
People often make up their minds about an issue before seeking facts about it,
often taking the position of a reference group they trust, and then become
very confident in their opinions. Once the pattern gels one way or the other,
new facts and arguments are fit into the framework in a way that further
solidifies it, to avoid the cognitive dissonance of holding two conflicting
interpretations. This concept will be evaluated in the responses of opponent
activists to NASA's risk assessment and proponents' reactions to opponent
deconstruction of that assessment.
The public is often characterized, perhaps unfairly, as irrational and
ignorant. It may be, instead, that laypeople judge hazards along multiple
axes, not just the quantifiable probability of mortality/morbidity. One
implication is that experts may hold narrow, faulty perceptions of their own.
- People seem to judge hazards differently on the basis of the perceived
control they have over the exposure: if they can choose the exposure, they
will often accept substantial risks; but, if it is a risk they feel they have
no choice in, they may become very upset over the imposition of even the
smallest risk. Cassini may be perceived as a risk imposed on the public.
- Another axial theme in technological hazards literature is lay sensitivity
to fairness issues, to learning who gains and who loses from the deployment of
a risky technology. With Cassini, the public may see the benefits flowing to
"nerdy" scientists and the risks falling on average citizens.
- A prominent dimension consistently emerging in technological hazards
literature is the dread factor: Is a given hazard seen as having the
potential, however small, of causing a catastrophic loss of life, particularly
feared diseases, or effects that might be passed down through the generations?
Is it linked with past incidents of sheer horror? Anything nuclear evokes
dread, and the opposition to the plutonium dioxide on board Cassini may be
driven more by this single factor than by any other.
- Another issue occasionally raised in hazards literature is public trust in
institutions responsible for risk assessment, risk management, and emergency
response. The proposed study will attempt to evaluate the extent to which
suspicion of NASA itself has inspired mistrust of its process of risk
assessment in the case of Cassini.
Much of this hazards perception literature is dominated by a focus on
individual perception and behavior, largely bypassing structures within the
whole and their evolution. Attention to the structuring of the public and to
the social reproduction of an activist vanguard will be one contribution of
this study.
Prior Work in Risk Assessment and Risk Management
Definitions:
- Risk assessment specifies hazards to humans, generally in terms of the
expected probabilities of given types and magnitudes of damage.
- Risk management is the development and implementation of policy to
minimize hazard.
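To make the first definition concrete, the following minimal sketch (all
scenarios and figures are invented for illustration and are not NASA's
Cassini estimates) shows how such an assessment reduces each accident
scenario to a probability and a magnitude of damage and sums the expected
losses:

```python
# Illustrative sketch only: hypothetical scenarios and numbers, not NASA's estimates.
# Risk assessment in the sense defined above: expected damage summed over accident
# scenarios, each characterized by a probability and a magnitude of consequence.

scenarios = [
    # (scenario, probability of occurrence, hypothetical health effects if it occurs)
    ("launch-phase accident with plutonium release", 1.0e-3,  120),
    ("Earth-flyby reentry with plutonium release",   1.0e-6, 5000),
]

total_expected_damage = 0.0
for name, probability, consequence in scenarios:
    expected = probability * consequence
    total_expected_damage += expected
    print(f"{name}: p = {probability:g}, damage = {consequence}, expected = {expected:g}")

print(f"Total expected damage across scenarios: {total_expected_damage:g}")
```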
The ideological orientation and political milieu of risk managers can slant
risk assessment:
- Anti-regulatory sentiment among risk managers can demand that they hear only
the epistemologically most defensible science. This enables an attitude of
denial or psychological minimization of a potentially risky situation, which
in turn raises the probability of Type I failures (e.g., NASA approval of a
launch that fails).
- Conservative risk sentiment among risk managers can demand hearings for even
the least defensible extrapolations, in order to err on the side of safety,
which then raises the probability of Type II failures (e.g., the costly
scrubbing of a launch that would safely have resulted in a significant
enhancement of scientific and technological knowledge).
- Only risk managers can decide whether Type I or Type II errors have the graver
consequences morally and politically, and it is this sort of policy choice
that determines whether the epistemological problems inherent in risk
assessment delegitimate its findings or not.
Given the natural interest of the NASA Cassini team in its own mission,
then, could risk assessment performed for it err on the side of minimizing the
regulatory burden on NASA, thus lessening emphasis on the consequences of a
Type I failure?
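The trade-off behind this question can be illustrated with a toy decision
rule (a hypothetical sketch, not a representation of NASA's actual review
process): assume a manager approves a launch whenever the assessed risk falls
below a threshold; moving that threshold in either direction trades one error
type against the other.

```python
# Hypothetical sketch of the Type I / Type II trade-off discussed above.
# All probabilities and risk levels are invented for illustration.
import random

random.seed(0)

def error_rates(approval_threshold, trials=100_000):
    """Approve a launch when the (noisy) assessed risk falls below the threshold."""
    type_i = type_ii = 0
    for _ in range(trials):
        truly_dangerous = random.random() < 0.05            # assume 5% of missions are unsafe
        true_risk = 0.8 if truly_dangerous else 0.1         # underlying risk level
        assessed_risk = true_risk + random.gauss(0.0, 0.2)  # assessment with uncertainty
        approved = assessed_risk < approval_threshold
        if approved and truly_dangerous:
            type_i += 1      # Type I: approving a launch that is in fact unsafe
        elif not approved and not truly_dangerous:
            type_ii += 1     # Type II: scrubbing a launch that was in fact safe
    return type_i / trials, type_ii / trials

for threshold in (0.2, 0.4, 0.6):
    t1, t2 = error_rates(threshold)
    print(f"threshold {threshold:.1f}: Type I rate {t1:.3f}, Type II rate {t2:.3f}")
```

In this toy setting, raising the threshold (a lighter regulatory burden)
lowers the Type II rate and raises the Type I rate, while lowering it does
the reverse; choosing where to set it is precisely the policy choice
described above.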
Prior Work in Media Criticism
Media are often criticized for sensationalizing hazard stories, which can
inflate public concern about minimal risks. This is a criticism echoed within
the NASA Cassini team.
Some media observers hold that media should report on breakdowns in
institutional protections for society and provide a forum for debate on public
issues that might not be well described by official statistics. Opponents
argue that NASA's process of risk assessment resulted in statistics designed
to protect the mission more than the public.
A large body of generic media criticism identifies filters that bias media
selection of newsworthy items from the chaos of daily events. Most often cited
are capital concentration in media and media dependence on advertising.
- Capital concentration is said to limit critical public debate on issues
involving parent corporations. This filter has already been invoked in the
anti-Cassini literature in claims that NBC and CBS may not have covered
stories on Cassini's plutonium load because their parent corporations have
ties with the nuclear industry.
- Dependence on advertising is held to limit critical discussion of anything
that might upset the income flow of the advertisers. Conceivably, coverage of
anti-Cassini activities could divert nuclear and aerospace companies'
advertising away from the offending media, thus encouraging an attitude of
editorial restraint.
A unique contribution of the proposed study is its integration of such
media criticism with hazards research. The focus will be on the relationship between
the media and public, expert, and activist perceptions of the plutonium aboard
Cassini-Huygens and the resulting pressure on policy-makers. Past work on
media and hazards has not theorized this relationship and the pressure it
generates as necessary components in analysis of the relationship between risk
assessment science and risk management policy.
Toward a Richer Model of the Risk Assessment/Risk Management Relationship
The risk assessment and management literature has conceptually differentiated
these two functions and paid much attention to political ideologies of
regulation or laissez-faire as influences on their relationship. This
project will specify critical channels along which ideology and politics
impact that relationship by bringing in media analysis and the recruitment of
individuals (through their mediated perceptions) into structured pressure
campaigns.
These channels of political influence run through and among three complex
categories of interacting players:
- The Federal government has both risk assessment and risk management
responsibilities toward its public, distributed among its various branches,
departments, and agencies.
- The public itself, for the sake of which Federal risk assessment and risk
management are conducted, holds at least latent responsibility for the
ideological milieu within the Federal government. This responsibility is
actualized through individual voting and willingness to participate in risk
management decisions through varying levels of activism.
- The private sector comprises a multiplicity of elite influences on Federal
government risk management responsibilities and, through media and
employment, on the public.
In a controversy of the sort examined here, each of these three categories of
players can be further subdivided along lines of internal tension as
structures in contradiction.
- The Federal government includes both the organs of governance (e.g.,
Congress), which set and review policy, and agencies constituted to carry out
policy (e.g., NASA).
Agencies' desire to pursue their missions might create a bias towards
downplaying any technological risk implicit in their missions during risk
assessment. This might show as an emphasis on epistemological rigor, tacitly
favoring the minimization of a Type II error. Minimizing Type II errors
raises the probability of Type I failures (e.g., a catastrophic launch).
NASA subcontracts risk assessment to outside institutions to offset this
tendency. Does "farming out" of risk assessment, in fact, adequately
safeguard against this?
- The public is often characterized as ignorant of the technical issues
involved in any
given debate and unengaged in it. A very few people will learn enough about a
given hazard to become committed activists on one side or the other. Some
portion of these activists may be persons who easily adopt causes:
activism-oriented personalities. Others may become involved in a single issue
because of important personal impacts. The influence exerted by the activated
public reflects elected officials' perception of how widespread activist
sentiment is and how that sentiment may affect voter turnout. This project
will trace the impacts of activist recruitment on elected risk management
policy-makers, as they face their own political risk assessment dilemmas.
- The private sector includes private concerns that enter the arena of
technological risk as deployers of hazardous chemicals and technologies, and
some of these contract for Federal agencies, including NASA. Certain of these
own media subsidiaries, which enter the debate with their own interests in
sensation, drama, and access to elite decision-makers, interests that can
affect public perception of hazards issues.
Needed now are:
- an understanding of the process by which public perceptions trigger
activist recruitment
- an exploration of how activists deal with the cognitive dissonance of risk
assessment results at variance with their own beliefs
- an analysis of the frameworks by which politicians assess the resulting
political hazards (risk assessment for political hazard?).
- to what extent do politicians' assessments of the political risks of
acting or not acting in response to constituent communications affect their
ideologies of the reasonable balance between minimizing Type I and Type II
errors in managing inherently uncertain technological hazards?
- how is that sense of proper balance communicated to and understood by
agency personnel as they compete to propose mission goals and designs and then
establish risk assessment guidelines?
- does agency direct subcontracting of risk assessment effectively protect
risk assessment from agency and Congressional biases?
- a model of how media are used by parent corporations, agencies, and
activists to affect public opinion, activist recruitment, risk management
policy-making, and ultimately risk assessment.
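As a purely hypothetical starting point for this last item (the nodes and
edges below are one reading of the framework sketched in this proposal, not
findings of the study), the influence channels could be recorded as a small
directed graph and the paths from any player to risk assessment enumerated:

```python
# Hypothetical schematic of the influence channels discussed above; the nodes
# and edges are an illustrative reading of this framework, not empirical results.
influence_channels = {
    "parent corporations":    ["media"],
    "agencies":               ["media", "risk assessment"],
    "activists":              ["media", "public opinion"],
    "media":                  ["public opinion"],
    "public opinion":         ["activist recruitment", "elected officials"],
    "activist recruitment":   ["elected officials"],
    "elected officials":      ["risk management policy"],
    "risk management policy": ["risk assessment"],
}

def paths(graph, start, goal, path=None):
    """Enumerate simple paths from start to goal through the channel graph."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        if nxt not in path:
            found.extend(paths(graph, nxt, goal, path))
    return found

for p in paths(influence_channels, "activists", "risk assessment"):
    print(" -> ".join(p))
```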
For full documentation and to read the proposal on which this presentation is
based, please visit:
http://www.csuchico.edu/~lapaloma/nsfcass2.html
* Christine M. Rodrigue
crodrigue@oavax.csuchico.edu *
* Note: the author has moved. The new
contact information is:
Christine M. Rodrigue
Department of Geography
California State University
Long Beach, CA 90840
(562) 985-4895
rodrigue@csulb.edu
https://home.csulb.edu/~rodrigue/
http://www.jps.net/rodrigue/nsfcass2.html
© Christine M. Rodrigue, Ph.D., 1999
Maintained by author
Last revised: 02/25/00