SECTION C:  PROPOSAL DESCRIPTION

SOCIAL CONSTRUCTION OF TECHNOLOGICAL HAZARD: 
PLUTONIUM ON BOARD THE CASSINI-HUYGENS SPACECRAFT
(narrative for a proposal resubmitted to the 
Decision, Risk, and Management Science Program
National Science Foundation
14 January 1999)


Christine M. Rodrigue, Ph.D.            
Center for Hazards Research                  (530) 898-4953 -- FAX -6781   
California State University                  crodrigue@oavax.csuchico.edu  
Chico, CA 95929-0425                         rodrigue@jps.net (attachments)



The proposed study focuses on the social construction of plutonium hazards 
aboard the Cassini-Huygens mission to Saturn and its moon, Titan.  Cassini-
Huygens culminates the vision of the Joint Working Group (of the Space 
Science Committee of the European Science Foundation and the Space Science 
Board of the American National Academy of Sciences) to "...study possible 
modes of cooperation between the United States and Europe in the field of 
planetary science" (Spilker 1997).  The Cassini-Huygens mission to the 
Saturn planetary system is physically the largest, scientifically the most 
ambitious, and organizationally the most international project ever 
undertaken by the National Aeronautics and Space Administration (NASA) or 
by its partners, the European Space Agency (ESA) and the Agenzia Spaziale 
Italiana (ASI).  NASA and Jet Propulsion Laboratory (JPL) were responsible 
for the Cassini orbiter and for launch, the ESA for the Huygens Titan 
probe, and the ASI for the high gain antenna and various instruments.

Due to the sheer mass of the Cassini Saturn orbiter and Huygens Titan probe 
combination, the duration of the cruise to Saturn (seven years) and tour at 
Saturn (four years), the orbiter's repeated movement through the ring 
system, the need to minimize moving parts that can fail, and the extreme 
distance of Saturn from the sun (1.4 billion kilometers), NASA dismissed 
solar power for mission instrumentation and temperature maintenance needs.  
Instead, it decided on the compact radioisotope thermoelectric generator 
(RTG) and radioisotope heater unit (RHU) design.  The RTGs and RHUs 
generate heat and, in the case of the RTGs, electrical power through the 
alpha radiation emitted by ceramicized plutonium-238 dioxide.   Because the 
5,655 kg mass of Cassini-Huygens and its navigational fuel is close to the 
5,760 kg launch limit of the largest American expendable launch vehicle 
(Titan IV/Centaur combination), NASA further opted for a Venus-Venus-Earth-
Jupiter Gravity Assist (VVEJGA) trajectory to give Cassini-Huygens the 
velocity it needed to reach the Saturn system during the careers of its 
science teams (NASA 1995, 1992; Spilker 1997).  The plutonium and the 
VVEJGA exploded into controversy by 1996, however, resulting in concerted 
efforts to stop or postpone the October 1997 launch and, now, to abort the 
Earth flyby in August of 1999.  The purpose of the proposed study is to 
analyze (1) expert and activist perceptions of the plutonium hazard aboard 
Cassini and of the risk assessment done for the mission; (2) the 
representation of this controversy in the media; (3) the recruitment of 
activists from the larger public; and (4) the linkages between this 
controversy and public policy towards technological risk management, 
especially in the space program.  


PRIOR WORK

This controversy speaks to the concerns of at least three distinct 
literatures.  These include the relationship between risk assessment and 
risk management, the perception of hazard, and media criticism.


Risk Assessment and Risk Management  

Risk assessment specifies hazards to humans, generally in terms of the 
expected probabilities of given types and magnitudes of damage.  Risk 
management is the development and implementation of policy to minimize 
hazard.  The distinction, while clear in concept, is complex and 
contentious in application (Brown and Goble 1990; Cranor 1990; Harman, 
Harrington, and Cervey 1998).

Risk assessment entails the development and application of methods grounded 
in science to delimit probabilities of specified consequences to society in 
the event of hazard exposure.  To come up with defensible and replicable 
results to inform risk management, risk assessment experts commonly focus 
on easily quantified measures of hazard exposure, such as expected  
mortality and morbidity rates (Hohenemser and Kasperson 1982; Shrader-
Frechette 1995).  These measures, while explicitly prioritizing human life 
and health, do not thereby automatically confer social legitimacy on risk 
assessment (Berglund 1998; Douglas and Wildavsky 1982; Jasanoff 
1991; Shrader-Frechette 1995).  Attempts are often made to specify economic 
or ecological losses as well, in the again quantifiable cost-benefit or 
cost-effectiveness framework, with equally contested results (Douglas and 
Wildavsky 1982; Harman, Harrington, and Cervey 1998; Morgall 1993; Palm 
1990).

Risk assessment entails a number of unavoidable epistemological and 
communications problems.  These are particularly acute in the case of 
technological, as opposed to natural, hazards.  The epistemological 
problems include sampling issues, cross-species and dose extrapolation 
issues, and issues of control over confounding variables in epidemiological 
and prospective studies on human groups (Cranor 1997, 1990; Giere 1991; 
Harman, Harrington, and Cervey 1998; Henderson-Sellers 1998).

The unavoidable uncertainty in risk assessment and the dissension it breeds 
among experts can result in problems communicating findings to risk 
managers and policy makers (not to mention the public), particularly if the 
latter do not have a strong grounding in science and statistics (Breyer 
1998; Clarke 1998; Friedman 1994; Henderson-Sellers 1998).  The choice of 
risk assessment presentation depends on the prior communication of policy 
preferences on the part of risk managers:  Do they prefer to err on the 
side of conserving human life and health at all costs, or do they prefer to 
err on the side of minimizing regulatory burdens on companies and agencies 
until risk assessment provides more accurate and precise answers  (Harman, 
Harrington, and Cervey 1998; Jasanoff 1991)?   

The ideological orientation and political milieu of risk managers can slant 
risk assessment (Heiman 1997; Mayo 1991; Silbergeld 1991).  Anti-regulatory 
sentiment among risk managers can demand they hear only the 
epistemologically most defensible science.  This enables an attitude of 
denial or psychological minimization of a potentially risky situation, 
which, therefore, raises the probability of Type I failures (e.g., the NASA 
approval of a launch that fails catastrophically).  Conservative risk 
sentiment among risk managers can demand hearings for even the least 
defensible extrapolations, in order to err on the side of safety, which 
then raises the probability of Type II failures (e.g., the costly scrubbing 
of a launch that would safely have resulted in a significant enhancement of 
scientific and technological knowledge).  Measures to conserve human life 
(minimize Type I errors) raise the probability of Type II errors, of 
foregoing the benefits of an action;  minimizing regulatory burdens 
(minimizing Type II errors) raises the probability of Type I errors, of 
tragic failure of a wrongly approved action (Heiman 1997; Mayo 1991; 
Shrader-Frechette 1998).  
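The Type I/Type II trade-off described above can be sketched numerically.  
The decision rule, distributions, and rates below are entirely hypothetical, 
chosen only to exhibit the structural trade-off, and follow this document's 
convention (Type I = approving a launch that then fails; Type II = scrubbing 
a launch that would have succeeded):

```python
import random

random.seed(42)

def simulate(threshold, n=100_000):
    """Hypothetical launch-approval rule: each candidate launch gets a
    noisy assessed risk score; managers approve when the score falls
    below the chosen threshold.  Returns (Type I rate, Type II rate)
    in this document's sense of the terms."""
    type1 = type2 = 0
    for _ in range(n):
        truly_unsafe = random.random() < 0.05     # 5% of candidates unsafe (made-up rate)
        mean = 0.7 if truly_unsafe else 0.3       # unsafe launches score higher on average
        score = random.gauss(mean, 0.15)          # but the assessment is noisy
        approve = score < threshold
        if approve and truly_unsafe:
            type1 += 1                            # approved a launch that would fail
        if not approve and not truly_unsafe:
            type2 += 1                            # scrubbed a launch that was safe
    return type1 / n, type2 / n

for t in (0.35, 0.50, 0.65):
    t1, t2 = simulate(t)
    print(f"threshold={t:.2f}  Type I rate={t1:.4f}  Type II rate={t2:.4f}")
```

Raising the approval threshold (the anti-regulatory posture) drives the Type 
I rate up and the Type II rate down; lowering it (the conservative posture) 
does the reverse.  No threshold makes both rates zero, which is the risk 
managers' dilemma in miniature.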

Only risk managers can decide whether Type I or Type II errors have the 
graver consequences morally and politically (Heiman 1997; Hollander 1991; 
Mayo 1991).  It is this sort of policy choice that determines whether the 
epistemological problems inherent in risk assessment delegitimate its 
findings or not (Hollander 1991).  Creating a firewall between risk 
assessment and risk management to protect scientists from political 
pressures, however, poses the danger of risk managers being overwhelmed by 
risk assessment controversies they do not understand, concluding that all 
science is arbitrary, and cherry-picking those findings that rationalize 
policy choices formed independently of science (Clarke 1998; Mayo 1991; 
Silbergeld 1991).

Given the natural interest of the NASA Cassini team in its own mission, 
then, risk assessment performed for it could conceivably err on the side of 
minimizing the regulatory burden on NASA, thus lessening emphasis on the 
consequences of a Type I failure.  This possibility was anticipated and 
mitigated by NASA through a system of multiple external and independent 
reviews designed to uncover and rectify it (Dawson 1998).  Risk assessors 
at NASA were aware that there could be public controversy over the 
RTGs/RHUs, and the independence of the review system was meant to create 
defensible responses against that possibility.  Congress in its risk 
management capacity, however, seems to have been jarred by the advent of a 
well-organized and highly active movement among the public disputing the 
results of the risk assessment, claiming that the risk of a Type I failure 
was either inaccurately minimized or still unacceptably high (Weldon 1997).  
Now, long after the risk assessment was done, NASA finds itself struggling 
to defend its findings to a hostile risk management audience, both public 
and elected.


Hazard Perception

Hazards perception literature has developed a number of themes relevant to 
technological risk and, by extension, to the Cassini controversy.  Much of 
this literature focuses on individuals in a mostly undifferentiated lay 
public, often with an eye to improving public education on hazards.  
Perhaps the major finding is that lay perceptions differ markedly from 
expert perceptions, for example, exaggerating certain hazards and 
trivializing others in ways far removed from the expectations of risk 
assessors (Douglas 
and Wildavsky 1982; Friedman 1994; Fritzsche 1995, 1996; Kasperson and 
Kasperson 1991; Palm 1990; Shain 1989; Slovic 1991).  With the assertion by 
NASA that the risk of plutonium exposure from launch or flyby accidents is 
negligible and its opponents' claim that NASA is covering up the extent of 
the risk, the Cassini controversy becomes a classic exemplar of the expert 
and lay division in perception.

Much effort has gone toward explaining these discrepancies between expert 
and lay perceptions.  A common interpretation is that the public is 
patently ignorant (Augustine 1998; Fischhoff 1994; Friedman 1994; Johnson 
1993; McGarity 1990; Shrader-Frechette 1990a).  Its involvement in 
decisions concerning technological risks is, therefore, essentially 
valueless, if not counterproductive.   This interpretation has been tested 
by comparing laypeople's estimates of the probabilities and mechanisms of a 
risk with experts' estimates, the latter serving as a sort of baseline, 
which is presumed to be "objective."  Results have been contradictory, 
perhaps partly due to insufficient differentiation of the general public 
from the activist public.  In some studies, the more knowledge members of 
the public have about certain technologies (notably nuclear), the less 
concerned they are about them, yet activists against the technologies often 
are well-informed.  In other studies, for example of groundwater 
pollution, greater levels of public understanding create heightened 
opposition to a hazardous technology (Johnson 1993).  The ignorance 
hypothesis sometimes cited by exasperated experts, therefore, seems too 
contradictory to promote understanding of the opposition to Cassini.

Another line of inquiry has focused on the ways in which people process 
information in general, as they apply to hazards.  Fairly 
consistently, people tend to overestimate the frequency of low probability 
but dramatic hazards (e.g., nuclear power plant accidents or airplane 
crashes) as compared with expert estimates.  Similarly, they tend to 
underestimate high probability hazards that are less dramatically 
memorable, such as certain lifestyle-mediated diseases or automobile 
accidents (Slovic 1991).  Again, the general public and the activist public 
are not clearly distinguished.  Even so, this dynamic may be directly 
relevant to the Cassini controversy, since the chances of failure are 
minuscule according to risk assessment experts (less than one in a million, 
NASA 1995), but a failure, should one happen, would be extremely dramatic, 
politically, if not epidemiologically.
 
Another finding relates to conceptions of exposure.  There is some evidence 
that mediations between toxic releases and health effects are poorly 
understood in lay perception (Johnson 1993).  That is, release equals 
exposure equals injury.  This issue may be relevant to the Cassini 
controversy, in which opponents imply that Cassini has a large probability 
of hitting Earth or its atmosphere and that the plutonium on board would 
then vaporize into inhalable particles to which millions if not billions of 
people would be exposed.  Furthermore, plutonium is characterized as the 
single most dangerous substance known (Anderson 1997; Grossman 1995; 
Kalupson 1997), implying that those exposed to it are under sentence of 
cancer.  Estimates of fatalities in the opponent literature range from over 
200,000 (Kaku 1997) to 20,000,000 (Sternglass, cited in Anderson 1997 and 
Kalupson 1997).  The multiplicative reduction in risk probabilities at each 
point in the chain of events producing plutonium exposure seems lost in 
opponent literature.
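The multiplicative structure noted above can be made concrete.  Every number 
below is a hypothetical placeholder, not a figure from NASA's safety 
analyses or the opponent literature; the point is only that the end-to-end 
probability of exposure is the product of its conditional links, and so is 
far smaller than any single link viewed in isolation:

```python
# Hypothetical chain of conditional probabilities leading from flyby
# to an individual's plutonium exposure.  All values are illustrative.
chain = {
    "inadvertent Earth reentry during flyby":          1e-6,
    "RTG containment breach, given reentry":           0.1,
    "fuel vaporized to inhalable size, given breach":  0.2,
    "a given person inhales a particle, given release": 1e-4,
}

p_exposure = 1.0
for event, p in chain.items():
    p_exposure *= p
    print(f"{event:50s} p = {p:.1e}   cumulative = {p_exposure:.1e}")
```

With these placeholder values the cumulative probability is on the order of 
10^-12, six orders of magnitude below the first link alone.  Treating 
release as equivalent to exposure amounts to setting every conditional link 
after the first to 1.0.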

Related to this is a finding that people often make up their minds about an 
issue before being exposed to an adequate array of facts and arguments 
about it, often taking the position of a reference group they trust 
(Johnson 1993), and then become very confident in their opinions (Margolis 
1996; Slovic 1991; Slovic, Fischhoff, and Lichtenstein 1982).  Once the 
pattern gels one way or the other, new facts and arguments are fit into the 
framework in a way that further solidifies it, in order to avoid the 
cognitive dissonance of entertaining two mutually exclusive 
interpretations.  That is, someone who decides that there is a significant 
risk in a situation or technology will dismiss data that contradict it 
as faulty or as coming from a corrupt and self-interested source (Covello, Sandman, 
and Slovic 1991).  Someone who has decided that there is no significant 
risk will equally dismiss data suggesting the risk is greater, through 
similar denial mechanisms.  

By shaping individual choice of those few among all possible hazards that 
deserve personal attention and concern, ideology and culture may account 
for some of the disparity between expert and lay (including activist) 
opinion.  Douglas and Wildavsky (1982) have proposed a division of any 
culture into a center subculture and a border subculture.  The center is 
the hierarchical core of social power and, according to Douglas and 
Wildavsky, tends to be rational and confident in the conventional protocols 
of risk assessment.  The border consists of those people peripheral to 
social power, suspicious of the center, fearful of corruption and pollution 
of the natural environment by the self-serving activities of the power 
elite, unimpressed by a rationality seen as serving central interests, and 
generally egalitarian in social values and oppositional in politics.  
Basically, expert opinion is centrist and lay opposition to technology is 
borderer in character, and the division between the two is yet another 
facet of a culture war of fundamental values.  This argument has drawn many 
criticisms.  First, there are many counterinstances in which attributed 
center-border status fails to predict individuals' and organizations' 
actual behavior (Shrader-Frechette 1991).  Second, different typologies have been 
proposed, together with different psychological mechanisms linking typology 
with behavior (Margolis 1996; Fritzsche 1995). Third, Douglas' and 
Wildavsky's argument implies that real and significant risks that can and 
do kill people can be dismissed as mere social constructs rather like 
aesthetic judgments.  The center-border typology may be relevant to the 
Cassini controversy, and basic attributes of proponents and opponents will 
be collected for comparison with the expectations of the typology.

In defense of the public, there has been some intriguing research 
suggesting that the public may not be in fact so irrational and ignorant; 
rather, laypeople may simply be judging hazards along multiple axes, not 
just the quantifiable probability of mortality or morbidity (Covello 1992; 
Shrader-Frechette 1990b; Slovic 1991).  Perceived control is one of these 
other axes of judgment.  People seem to judge hazards differently on the 
basis of the degree of perceived control they have over the exposure:  If 
they can choose the exposure (driving fast, snowboarding, smoking), they 
often will accept substantial risks; but, if it is a risk they feel they 
have no choice in, they may become very upset over the imposition of even the 
smallest risk.  Of importance here is learning whether loss of perceived 
control is a factor in instigating opposition to Cassini.

Familiarity is another closely related axis repeatedly emerging as 
important to lay response to a given hazard (Slovic 1991).  Familiar 
hazards are tolerable; if a hazard arises from an unfamiliar technological 
source, it seems to evoke more concern than mortality and morbidity 
statistics would seem to warrant.  Cars and cigarettes are tolerable;  
nuclear power, food irradiation, and recombinant DNA provoke tremendous 
concern.  Cassini entails, by definition, a plutonium technology, which by 
its very nature is not a familiar part of everyday environments.  

Another axial theme in technological hazards literature is lay sensitivity 
to fairness issues (Margolis 1996; Shrader-Frechette 1995).  The public 
seems concerned to learn who gains and who loses from the deployment of a 
risky technology and, especially, whether the gainers and the losers are 
the same people.  If a corporation gets the profits from a hazardous 
installation and people in a given neighborhood get the health risk, 
concern in the neighborhood well out of proportion to the statistical 
probabilities of  mortality and morbidity seems guaranteed.  The Not In My 
Back Yard (NIMBY) effect is one variation on this theme.  The relevance of 
this theme to the current controversy may turn on the perception that the 
benefits in human knowledge to be gained from Cassini may devolve mainly on 
"nerdy," elite scientists with their obscure and poorly understood research 
agendas, while the hazard of plutonium contamination may fall on average 
citizens and taxpayers in the event of a launch or, now, flyby accident 
(Meshkati 1997).  

A prominent dimension consistently emerging in technological hazards 
literature is sometimes called the dread factor:  Does a given hazard have 
the potential, no matter how small, of creating a really huge loss of life 
or particularly fearful diseases (e.g., cancer or AIDS), or can it have 
effects that might be passed down through the generations?  Is it linked 
with past incidents of overpowering horror?  If so, such a hazard evokes 
sheer dread and, even if the probability of an accident is negligible, it 
will create concern and agitation far beyond the probabilities of mortality 
and morbidity (Covello 1991; Slovic 1991).  Anything nuclear evokes dread, 
because the atomic age was ushered in by the appalling effects of the 
Hiroshima and Nagasaki bombings, because nuclear testing has been reported 
to result in cancer clusters near testing sites, and because plutonium has 
been popularly characterized as the single most dangerous substance known 
to humanity (Grossman 1995).   The opposition to Cassini may be driven more 
by this single factor than any other, an assertion that will be evaluated 
in the proposed study.

Another issue occasionally raised in hazards literature is public trust of 
institutions responsible for risk assessment, risk management, and 
emergency response (Douglas and Wildavsky 1982; Fritzsche 1995; Jasanoff 
1991; Kunreuther, Slovic, and MacGregor 1996; Margolis 1996).  Public trust 
of governmental institutions in general seems to have hit a decades-long 
slide.  This decline in trust has extended even to such agencies as the 
Federal Emergency Management Agency (FEMA) and NASA itself in the wake of 
conspiracy theorizing from both left and right and from a pervasive "X 
Files" mentality.  As entertaining and as easy to discount as the more 
extreme products of the popular culture are, they are relevant to the 
Cassini controversy in creating an atmosphere of mistrust if not actual 
fear of government agencies, very explicitly including NASA.  The proposed 
study will attempt to evaluate the extent to which suspicion of NASA itself 
has inspired mistrust of its process of risk assessment in the case of 
Cassini.

Much of this hazards perception literature is dominated by a focus on 
individual perception and behavior.  Such atomistic research poses the 
analytical hazard of an aggregative fallacy, an assumption that the whole 
(the public) is merely the sum of its parts (individuals).  Structures 
within the whole and their evolution are bypassed.  In contentious issues 
of technological hazard, the public differentiates itself into a more 
passive and uninvolved fraction and an activist component, highly engaged 
and organized.  The degree of overlap among activists on one issue and on 
others is unclear:  Is there an activism-prone personality that looks for 
such controversies and jumps in, or is there predominantly a pattern of 
ad hoc, issue-specific activism?  The structuring of the public and the social 
reproduction of an activist vanguard will be one contribution of the 
proposed study.
 
If the claim by some researchers that the public, activist or uninvolved, 
may not be irrational is sound, one implication is that expert opinion is 
narrow and, in its own way, as distorted as public, including activist, 
opinion (Shrader-Frechette 1990a).  This is often expressed in the 
deconstructionist and postmodernist schools in many disciplines today, 
which basically delegitimate any claim for scientific objectivity 
whatsoever:  The whole enterprise is claimed to be riddled with emotional, 
cultural, and economic biases from one end to the other (Haraway 1990; 
Merchant 1990; Soja 1991).  While the present study can take on neither the 
whole postmodernist/deconstructionist agenda nor the philosophical 
justifications for the scientific method, it can critically evaluate NASA's 
risk assessment documents and the recollections of risk assessors for signs 
of tunnel-vision and one-dimensional privileging of expert opinion.  Such 
shortsightedness may have led to NASA's failure fully to address broader 
public concerns and thereby triggered recruitment of its own opponents to 
the current controversy over Cassini.


Media Criticism

Related to the theme of hazard perception, there have been a number of 
studies of media roles in constructing hazard awareness and tolerance.  A 
common criticism is of the sensationalism many media bring to hazard 
stories, which can raise public concern about minimal risks or can hamper 
efforts to respond to a disaster (Elliott 1989; Mazur 1998, 1994; Scanlon 
1989; Smith 1992).  Some efforts have been made to compare media coverage 
with objective measures of damage or danger (Rodrigue and Rovai 1995; 
Rodrigue, Rovai, and Place 1997; Sandman 1994; Singer and Endreny 1994; 
Smith 1992) or to exhort the media to bring their coverage closer to such 
measures of accuracy (Elliott 1989).  Others argue that such comparisons are not 
fair:  The media are not there faithfully to reproduce in print, radio, or 
images the exact probabilities or estimates approved by experts.  Rather, 
they are to provide helpful information for people to evaluate and reduce 
their risk (Mazur 1998).  In emergency situations, Quarantelli (1989) 
characterizes this risk education function as mass communication, as 
opposed to the simple reportage or critical investigation functions of 
interest here, which he designates as mass media.  Others concerned with 
the mass media functions argue that the media are to report on possible 
breakdowns in institutional protections for people and, most importantly, 
to provide a public forum or arena for debate on issues that might 
not be well encompassed by official statistics (Ledingham and Walters 1989; 
Peters 1994; Wilkins 1989).  This is relevant to the current controversy, 
because opponents often argue that NASA's process of risk assessment 
resulted in statistics designed to protect the mission more than the 
public.

Until now only sporadically linked to hazards literature is a large body of 
generic media criticism mostly targeted to an educated lay audience with 
progressive political sympathies (e.g., Bagdikian 1997; Cohen and Solomon 
1995, 1993; Faludi 1991; Gans 1989; Herman and Chomsky 1988; Lee and 
Solomon 1991; McChesney 1997; Philips 1997; Schechter, Browne, and 
McChesney 1997; Steinem 1990).  This body of literature identifies a 
variety of filters operating to bias media selection of newsworthy items 
from the chaos of daily events, of which the most often cited are capital 
concentration in media and media dependence on advertising revenue.

One of the most prominently cited of these filters is the intense capital 
concentration in the media.  It is argued that such capital concentration 
can limit critical public debate on issues involving parent corporations or 
others having close ties with the parent firms (Bagdikian 1997; Cohen and 
Solomon 1995, 1993; Dunwoody 1994; Herman and Chomsky 1988; Lee and Solomon 
1991; McChesney 1997; Stevens 1998).  This filter has already been brought 
up in the anti-Cassini literature with statements that NBC may not have 
covered Karl Grossman's stories, "Don't Send Plutonium into Space" (1996a) 
and "Risking the World:  Nuclear Proliferation in Space" (1996b), because 
its parent corporation is General Electric, which manufactures turbines for 
many nuclear reactors, and CBS may have failed to provide coverage, because 
it is owned by Westinghouse, which provides engineering services for over 
forty percent of the world's nuclear reactors (Florida Coalition for Peace 
and Justice n.d.).  This argument generates testable predictions of 
effects, which can be evaluated using the Cassini controversy.  Another 
posited effect of the capital concentration filter is that it may encourage 
sensational coverage likely to increase circulation and the profits of the 
parent corporations holding media subsidiaries (Herman and Chomsky 1988; 
Schechter, Browne, and McChesney 1997).  This criticism has been echoed in 
the hazards literature itself by a number of authors concerned about 
sensationalism, blame-seeking, and human drama in hazards coverage, 
whatever its origins (Johnson 1993; Mazur 1998; Peters 1994; Singer and 
Endreny 1994; Smith 1992; Stallings 1994). 

Some authors cite dependence on advertising revenue as a second filter on 
media coverage (Bagdikian 1997; Herman and Chomsky 1988; Steinem 1990).  
This dependence is held to constrain serious and critical discussion of 
anything that could upset the income flow of the advertisers.  Conceivably, 
coverage of anti-Cassini activities and arguments could discourage nuclear 
and aerospace companies and their subsidiaries from advertising in the 
offending media.  This is a testable expectation, in the 
event significant differences turn up among different media companies in 
the amount of coverage they afford Cassini.
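A minimal sketch of the kind of test this expectation invites, using 
entirely made-up story counts (not actual content-analysis data), is a 
Pearson chi-square on a two-by-two table of coverage by ownership ties:

```python
# Hypothetical counts: (Cassini stories, total science stories) per outlet
# group.  Real values would come from the proposed content analysis.
observed = {
    "outlets with nuclear/aerospace parent ties": (4, 200),
    "outlets without such ties":                  (16, 210),
}

a, n1 = observed["outlets with nuclear/aerospace parent ties"]
b, n2 = observed["outlets without such ties"]
table = [[a, n1 - a],   # tied outlets:   Cassini vs. other stories
         [b, n2 - b]]   # untied outlets: Cassini vs. other stories

# Pearson chi-square computed by hand from the marginal totals
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]
grand = sum(row_tot)
chi2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / grand) ** 2
    / (row_tot[i] * col_tot[j] / grand)
    for i in range(2) for j in range(2)
)
print(f"chi-square = {chi2:.2f}  (df = 1; values above 3.84 suggest p < .05)")
```

With these placeholder counts the statistic exceeds the .05 critical value, 
which is the pattern the advertising-filter hypothesis predicts; the actual 
direction and size of any difference is precisely what the proposed content 
analysis would establish.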

A unique contribution of the proposed study is its integration of this 
progressivist body of general media criticism with hazards research.  This 
integration focuses on the relationship between the media and public, 
expert, and activist perceptions of the plutonium aboard Cassini-Huygens.  
This complex interaction can result in pressure exerted on policy-makers by 
activists and experts, through changes that media coverage may produce on 
public perceptions and recruitment of activists.  Past work on media and 
hazards has not theorized this relationship and the pressure it generates 
as necessary components in the analysis of the relationship between risk 
assessment science and risk management policy.  


ELEMENTS OF A MORE COMPREHENSIVE MODEL OF RISK ASSESSMENT AND MANAGEMENT

The proposed project will bring elements from these three literatures into 
a more comprehensive model of the risk assessment and risk management 
relationship, particularly in the arena of technological hazards.  Prior 
work in hazard perception has detailed individual-level cognition of and 
behavior towards risky situations.  It has, however, generally avoided a 
more structural examination of organizations, classes, and interest groups, 
other than some work on the social construction of natural hazards in the 
developing world  (e.g., Blaikie, Cannon, Davis, and Wisner 1994).  This 
project will examine the process of structuration of activism around a 
technological hazard, specifically the process by which uninvolved 
individuals adopt positions and then the individual and societal influences 
that lead some of them to affiliate into organized groups or campaigns, 
there to constitute a perception dilemma for elected Congressional risk 
managers.

Progressive media criticism has certainly raised issues of corporate 
influence and control of media and made predictive statements of probable 
bias in media coverage of a variety of issues, including environmental 
issues.  Little work has explored the implications of this literature as 
they may manifest in hazards, however, which is one contribution of the 
proposed project.  

The risk assessment and management literature has conceptually 
differentiated these two functions and the tensions in their relationship 
inherent in their different foci and concerns.  Much attention has been paid 
to political ideologies of regulation or laissez-faire as influences on 
that relationship.  This project will focus on the specific channels through 
which ideology and politics impact that relationship by bringing in media 
analysis and the recruitment of individuals through their mediated perceptions 
into more or less structured pressure campaigns.

These channels of political influence run through and among three complex 
categories of interacting players.  The proposed project will model these 
players and their internal and external interactions in the technological 
hazard arena, using the controversy around Cassini.  Past work has 
developed certain sets of these relationships (e.g., risk assessors and 
risk managers within an agency, media coverage and the influences running 
between corporations and politicians, and media and individual perceptions) 
but they have not all been brought together in the area of hazard 
assessment and policy.  This framework will be a useful contribution of the 
proposed project and should prove fruitful in framing other hazards, both 
technological and natural.

The first of these internally fragmented players is the Federal government 
broadly conceived, which has both risk assessment and risk management 
responsibilities towards its public, distributed among its various branches, 
departments, and agencies.  The second is the public itself, for the sake 
of which Federal risk assessment and risk management are conducted and 
which in a democratic society holds at least latent responsibility for the 
ideological milieu within the Federal government.  This responsibility is 
actualized through individual voting and willingness to participate in risk 
management decisions through varying levels of activism.  The third player 
is the private sector with a multiplicity of elite influences on Federal 
government risk management responsibilities and, through media and 
employment, on the public.  In a controversy of the sort examined here, 
each of these three categories of players can be further subdivided along 
lines of internal tension as structures in contradiction.  


The Federal Government:  Risk Assessment and Risk Management

First, the Federal government includes both the organs of governance, which 
set and review policy, and agencies constituted to carry out policy.  The 
latter are answerable to the branches of government for the
performance of their designated missions and depend on the executive and 
legislative branches for funding.  In the area of technological hazard, the 
three branches of government are responsible for risk management, while 
such agencies and departments as NASA, EPA, DOE, FDA, DOD, and the NRC 
carry out risk assessment incident to the conduct of their missions.  These 
missions can include the deployment of potentially risky technology (e.g., 
NASA) or the regulation of risks (e.g., EPA). Going forward with projects 
related to their missions can create a conflict of interest within such 
agencies, if their activities pose a technological or other risk.  The 
understandable desire within the agencies to pursue their missions might 
create a bias toward downplaying such risk during the risk assessment 
science phase.  In other words, the risk assessors themselves might 
internalize the agencies' preference for epistemological rigor, that is, a 
preference for results unlikely to forestall the deployment of a 
technological project except under the most severe standards of 
demonstrated risk.  The results
of risk assessment conducted under such subjective constraints might then 
limit the full range of often contradictory information presented to risk 
managers in the organs of governance.  In the case at hand, NASA 
acknowledged this internal tension and contracted out a variety of risk 
assessment projects to independent institutions.  Whether risk assessors 
hired under contract to such an agency thereby actually feel relieved of 
the pressure to downplay the extent of their client's risk remains unknown.  
The proposed project on the Cassini controversy will seek recollections of 
such pressure on the part of those who performed independent risk 
assessments for NASA, with an eye to evaluating the adequacy of directly 
"farming out" the risk assessment for mitigating internal bias toward 
minimizing Type II risks.


Public:  Uninvolved Masses and Activist Vanguard

Second, the public can be a powerful influence, particularly on elected 
risk management policy-makers.  With respect to any single technological 
hazard, the public can be divided into a largely inert and uninterested 
mass, a somewhat concerned but uninvolved sector, and a small politically 
active constituency taking individual or concerted actions.  Perhaps 
because past hazard perception literature has focused on the state of 
individual and public hazard understanding, often as a baseline for public 
education campaigns, little attention has been paid to structuring the 
public along degrees of activism.  As argued in the hazard
perception literature, most of the public is likely ignorant of the 
technical issues involved and unengaged in the debate.  A very small number 
of people, however, will learn enough about a given hazard to become 
aroused on one side or the other and then take time out of their lives to 
undertake citizen action over the issue.  

Some portion of these activists may be persons who easily adopt causes:  
the activism-oriented personality.  Others may become involved in a single 
issue because of important personal impacts.  The differentiation of 
activists in this way has not been essayed in the hazards literature 
before, and evidence on this characterization should be a significant 
addition to the hazards literature.  

Politically active individuals may act on their own or they may act in 
concert with organized campaigns.  Organized campaigns can make it easier 
for people to become active in a debate at far lower thresholds of concern, 
by providing simple actions, such as signing petitions, forwarding e-mail 
messages, or sending form letters to their representatives or the media.  
At this point, this element of the public becomes an influence on risk 
management policy-makers (and, sometimes, through published letters to the 
editor, on the rest of the public).  

The degree of influence they exert may reflect elected officials' 
perception of how widespread activist sentiment is and how that sentiment 
may affect voter turnout, above and beyond their own core political values 
and personal interests in an issue.  Gauging the appropriate level of this 
influence entails estimates of both the number of communications and the 
threshold of concern that can trigger the various forms of communication 
enumerated above.  The politician thus faces a political hazard, entailing 
judging the probabilities of Type I and Type II errors in responding to 
constituent opinions or not responding.  As with any risk assessors, 
politicians must estimate the worst political consequences of responding or 
not responding on a given issue and try to minimize the type of error that 
produces the most dreadful consequences (i.e., failure to be re-elected or 
elected to higher office). 
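
This error-balancing calculus can be phrased as a simple expected-cost 
comparison.  A minimal illustrative sketch in Python, in which every 
probability and cost figure is invented for the example rather than drawn 
from the study:

```python
# Illustrative sketch (invented numbers): a politician weighing the expected
# political cost of responding to activist mail versus ignoring it, given a
# guessed probability that the sentiment is widespread among likely voters.

def expected_cost(p_widespread, cost_if_widespread, cost_if_narrow):
    """Expected political cost of one action under uncertainty about sentiment."""
    return p_widespread * cost_if_widespread + (1 - p_widespread) * cost_if_narrow

p = 0.3  # guessed probability that the activist sentiment reflects likely voters

# Responding risks alienating the uninvolved majority if sentiment is narrow;
# ignoring risks mobilized opposition at the polls if sentiment is widespread.
cost_respond = expected_cost(p, cost_if_widespread=1.0, cost_if_narrow=3.0)
cost_ignore = expected_cost(p, cost_if_widespread=9.0, cost_if_narrow=0.5)

print(round(cost_respond, 2), round(cost_ignore, 2),
      "respond" if cost_respond < cost_ignore else "ignore")
```

Whichever action carries the lower expected cost is, on this stylized 
account, the one the office-holder chooses; the point of the sketch is only 
that small changes in the guessed probability can flip the decision, which 
is the essence of the political hazard described above.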

Hazard perception literature has identified some of the factors that govern 
individual hazard perceptions, such as various perceptual biases, trust in 
technology or in institutions creating or regulating risk, trust in 
individuals or organizations taking stances on a given issue, control and 
fairness issues, and general political orientation.  The proposed project 
will attempt to test these factors as they affect the movement of people 
from the uninvolved public to the politically active public.  It will also 
trace out the impacts of such recruitment on elected risk management 
policy-makers, as they face their own political risk assessment dilemma.


The Private Sector:  Contractors, Lobbyists, and Media

Third, the private sector includes a complex mix of often conflicting 
interests.  Some private concerns enter the arena of technological risk as 
creators and users of hazardous chemicals and technologies, and certain of 
these are contractors for Federal agencies, such as NASA.  They interact 
both with the organs of governance through political contributions and 
lobbying and with Federal agencies, lobbying for contracts on projects that 
may result in risk.  They thus try to exert suasion on both risk 
assessment and risk management.  

A key set of private concerns in this arena is the media.  As developed in 
the media criticism literature, media enter the technological risk debate 
with their own interests in sensation, human drama, and access to corporate 
and political elite decision-makers, which can affect public perception of 
hazards issues.  A complicating factor is the increasing ownership of media 
by corporations themselves involved in generating technological or chemical 
risks in the course of manufacturing or distribution activities.  Some of 
these parent corporations are themselves among the companies doing or 
bidding for contract work with Federal agencies, including NASA.  Also, 
private concerns involved with government contracting can have a great deal 
of influence on the public (and on local and state level government) 
through their roles as employers, quite possibly shaping public perception 
of and acceptance of risk.


Risk Production, Assessment, and Management and the Three Players

In short, in this and other technological risk debates, risk assessors in 
(or hired by) Federal agencies and risk managers in the three branches of 
Federal government (particularly the legislative and executive) communicate 
with one another across the conflicted terrains of the lay public and the 
private sector.  Past theorization of the risk assessment and risk 
management relationship has examined political ideology as an influence on 
that relationship but has not integrated the whole set of players and their 
interactions in this communication.  Past work on hazard perception has 
focused on the perceptions of individuals in the undifferentiated lay 
public.  Needed now is an understanding of the process by which public 
perceptions trigger activist recruitment and how activists deal with the 
cognitive dissonance of risk assessment results at variance with their own 
beliefs.  Also useful would be analysis of the frameworks by which 
politicians assess the resulting political hazards (risk assessment for 
political hazard?).  To what extent do politicians' assessments of the 
political risks of acting or not acting in response to constituent 
communications affect their ideologies of the reasonable balance between 
minimizing Type I and Type II errors in managing inherently uncertain 
technological hazards? How is that sense of proper balance communicated to 
and understood by agency personnel as they compete to propose mission goals 
and designs and then establish risk assessment guidelines?  Does agency 
direct subcontracting of risk assessment effectively protect risk 
assessment from agency and Congressional biases?  A critical and still 
underdeveloped part of the picture is the role of the media as they are 
used by parent corporations, agencies, and activists to affect public 
opinion, activist recruitment, risk management policy-making, and 
ultimately risk assessment. 
 

RESEARCH QUESTIONS

In the context of the controversy over the plutonium dioxide RTGs/RHUs on 
board Cassini-Huygens, the proposed study will focus on the relationship 
between risk management policy-makers in the Federal government 
(particularly Congress) and risk assessment science in or for NASA.  It 
will also delineate the perceptions of activists on both sides of the 
Cassini controversy and their recruitment into the debate.  The proposed 
study will also examine media coverage of the controversy.  Specific 
questions and hypotheses include the following.

First, risk assessors will be asked about their sense of where their risk 
assessment should fall on the Type I/Type II error minimization continuum 
(epistemological rigor versus risk conservatism).  The purpose of having 
outside scientists perform risk assessment for NASA was to mitigate the 
possible conflict of interest entailed in risk assessment by a mission-
committed agency.  One measure of success in this strategy would be the 
finding that outside scientists report enough concern with minimizing 
potential risk to humans that they would be willing to sacrifice 
epistemological rigor to ensure the largest estimates of such risk.  
Conversely, a self-reported emphasis on the logical and evidentiary 
soundness of analysis could indicate a desire to safeguard against the risk 
of forgoing a sound launch.  Respondents will be asked directly which 
decision error type they felt was of greatest concern to NASA and whether 
they experienced this perception as pressure.  The purpose of this line of 
inquiry is elucidation of the intra-governmental communication of tacit 
risk management preferences, from Congressional endorsement of a major 
international mission and its geopolitical ramifications, through NASA as 
the American coordinating body, to risk assessors in and contracted by 
NASA.  Can risk assessment contracted out directly from a mission-committed 
agency avoid contamination by that agency's natural desire to carry out its 
mission?  Is there a more effective way of engaging risk assessors in 
technological hazards?

Second, surveys sent to active opponents and proponents will ask about their 
own motivations for becoming active and their perceptions of the influences 
moving them from an uninformed and passive state to a more informed and active 
one.  Their reports on motivation will be classified by type (e.g., romance of 
space exploration, employment concerns, fear of genetic damage in an accident) 
and ranked by intensity of statements of each (i.e., little intensity to 
highly vehement statements).  They will also be asked about whether they have 
engaged in various levels of activism, from signing petitions to personal 
participation in activist organizations, on issues other than Cassini. This 
inquiry will shed light on the degree to which individual activism on an issue 
is triggered by concerns particular to it or whether this issue is just one of 
many that moves an already activist personality to take action.  It is 
expected that, in reporting influences on their transformation into activists 
on this issue, they will cite particular media pieces, as well as 
communications from individuals and reference groups they trust.  The 
reference groups, if offered, will be classified by general political ideology 
on a right to left continuum.  One purpose of this survey is to establish the 
specific role of the media in recruitment of activists in competition with the 
role of personal and reference group communications.  

Third, the sensation- and blame-seeking elements of newsworthiness 
identified in past literature on media are likelier to produce a greater 
number of stories, more column space, and higher-priority placement for 
the controversy over Cassini than for the design of the mission and its 
target system.  There should be more coverage of the former than the 
latter, because the controversy more effectively generates the elements 
that can initiate coverage, including sensation, blame-finding, clarity 
and simplicity of message, and human drama, than does the science 
involved.  The act of coverage itself, then, may help to motivate
disproportionate recruitment of the audience into opponent activism (as 
self-reported by activists).  On the other hand, it is possible that few 
activists will cite media pieces.  Media failure to cover an environmental 
controversy could dampen recruitment to just that level obtainable by 
direct personal and organizational communication with individuals.  

Fourth, if the arguments of media criticism are justified, corporations 
which both contract with NASA and hold media subsidiaries should experience 
conflict of interest between the media subsidiary's need for focus on the 
controversy and the parent corporation's need to avoid the oppositional 
recruitment effects of such coverage.  Response may entail direct or 
indirect pressure on editors to kill such stories, a pressure not without 
alleged precedent in other areas (Stevens 1998). Opponent literature on 
this issue indeed claims that there has been just such a media blackout on 
Cassini (Phillips 1997), which would prevent public outrage over the 
RTGs/RHUs on board.  The proposed study is in no position to estimate the 
"right" amount of coverage for such a controversy nor can it demonstrate 
that pressure to kill stories was applied by such conflicted corporations.  
It can, however, propose and evaluate a prediction logically derivable from 
the claim.  That is, media with such corporate parents will generate 
significantly fewer articles, less column space, and less
prominent placement on front pages or section front pages of articles on 
controversial technological hazard issues.  Because of the conceivable 
existence of other factors that could produce such results, a finding in 
accordance with expectation would be less definitive than heuristic, but 
highly troubling for the democratic process in risk management.
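
The prediction above yields a testable contrast between two groups of 
outlets.  A sketch, with all outlet story counts invented for illustration, 
using a hand-rolled Mann-Whitney U test (normal approximation, no tie 
correction), a non-parametric comparison suited to small ordinal samples:

```python
# Hypothetical sketch of the ownership comparison: Cassini story counts per
# outlet (invented numbers), grouped by whether the parent corporation holds
# NASA contracts, compared with a Mann-Whitney U rank test.

def mann_whitney_u(xs, ys):
    """U statistic for sample xs versus ys, counting pairwise wins (ties = 0.5)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def z_approx(u, n1, n2):
    """Large-sample normal approximation to the U distribution (z score)."""
    mean = n1 * n2 / 2.0
    sd = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    return (u - mean) / sd

# Invented story counts per outlet:
nasa_linked = [2, 1, 3, 0, 2, 1]   # outlets whose parents contract with NASA
independent = [5, 7, 4, 6, 3, 8]   # outlets with no such parent

u = mann_whitney_u(nasa_linked, independent)
z = z_approx(u, len(nasa_linked), len(independent))
# |z| > 1.96 would suggest a significant difference at the 0.05 level.
print(u, round(z, 2), abs(z) > 1.96)
```

A significantly lower rank sum for the NASA-linked group would accord with 
the blackout claim, though, as noted, such a finding would be heuristic 
rather than definitive.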

Fifth, elected risk management policy-makers can be expected to report an 
increase in concern about the political risks of supporting future non-
military missions involving RTGs/RHUs.  As in any political controversy, 
politicians must try to gauge whether a given issue will change the mix of 
people likely to turn up at the polls in typical low-turnout American 
elections, make political donations, or volunteer to get out the vote.  
Reported opposition to a given technological hazard is, for reasons 
elaborated above, likely to exceed support for deployment.  It remains a 
hazardous guess for a politician to estimate the relationship between the 
mix of activist opinion and the mix of opinion on and the salience of such 
an issue among those in the wider public likely to vote.  The necessity of 
decision-making under conditions of such uncertainty should elevate self-
reported concern about the consequences of such decisions.  One 
contribution of the proposed project is extension of risk assessment/risk 
management theory into the processes by which key decision-makers deal with 
their own political risk assessment conundrum.

Last, attempts by activists to affect the milieu of risk management policy 
can be expected to focus on the legitimacy of the risk assessment reported 
by NASA.  Even as there might be a conflict of interest in risk assessment 
done on behalf of an agency with a mission involving the deployment of a 
potentially risky technology, so, too, activists might respond to a 
conflict between their opinions and a risk assessment report by attacking 
the report.  Past literature on hazard perception has argued that people 
often make up their minds about an issue before being exposed to enough 
facts and arguments.  If so, the cognitive dissonance produced by anti-
Cassini activists' opinions of RTG/RHU use and their confrontation with the 
risk assessment, which concluded that the risk was vanishingly small, could 
be resolved in one of three ways:  (1) a softening of their opposition, (2) 
a detailed criticism of the risk assessment for those with the technical 
background to follow it, or, (3) for most opponents who deal with the risk 
assessment, a search for another means of dismissing its conclusions.  The 
third option leads to a hunch that opponents of a technological application 
will resort to conspiracy-theorizing to dismiss inconvenient conclusions, 
tapping into Douglas and Wildavsky's border-culture tendency toward 
suspicion of rational analysis done by a centrist organization.  Usually, opposition
to a given technology has come out of the political Left (e.g., nuclear 
power, levees) or the Right (e.g., fluoridation, RU486) but rarely both.  
The Cassini controversy has shown elements of both, and, if the expectation 
that conspiracy-theorizing will be a common response to the risk 
assessment is borne out, different conspiracies may be evoked for the 
purpose of undermining the risk assessment science.  An important part of 
the public
impact on risk management is necessarily a questioning of risk assessment.  
Conspiracy-theorizing is a way by which an opinionated activist public can 
step outside the normal rules governing the relationship between risk 
management and risk assessment and force players in the Federal sector onto 
a turf disempowering to them.   Absurd conspiracy theories need exploration 
as a politically powerful recruitment device due to their function in 
blunting cognitive dissonance.


DATA AND METHODS 

In order to pursue these research questions, the following types of data 
will be collected:

     1)   Questionnaires will be sent by mail (with telephone and e-mail 
          follow-up to raise response rates) to the 22 individuals named as 
          contributing to the risk assessments at the end of the Final 
          Environmental Impact Statement for the Cassini Mission (NASA 
          1995) and the four additional people named in the Final 
          Supplemental Environmental Impact Statement for the Cassini 
          Mission (NASA 1997).  These structured qualitative questionnaires 
          will be in an open-ended format to elicit respondents' feelings 
          about the standards of evidence needed to establish risk 
          probability ranges in a situation involving nuclear technology 
          (their sense of the proper trade-off between Type I and Type II 
          failures).  They will also solicit respondents' sense of the risk 
          management guidance from NASA and the ways this might have 
          affected their research designs.  These questionnaires will be 
          analyzed with standard literature content analytic methods.

     2)   Activist messages will be collected from the following sources:  
          American newspaper and newsmagazine letters to the editor found 
          through Academic Universe (1995-1999); Usenet postings found 
          through Deja News (1997-1999); listserver postings sent to an 
          array of listservers subscribed to for the purpose with archive 
          searches on those lists providing that service (back to 1995, if 
          possible); and web pages posted by activists (1997-1999).  These 
          texts will first be processed by simple counts of messages and 
          words by stance (oppositional, advocacy, and neutral) to 
          establish the balance of opinion among people taking the time to 
          communicate their thoughts on the issue.  Sentences will be 
          further processed with literature content analytic methods 
          entailing categorization of statements and ranking of intensity 
          of statements by motivation (e.g., dread, mistrust of NASA, 
          enthusiasm for space exploration, concern about economic effects 
          of aborting the launch or fly-by).  Opponent writings will be 
          further examined for their dealings with the risk assessment 
          performed for NASA.  Proportions attacking it on technical 
          grounds and delegitimating it through conspiracy-theorizing will 
          be calculated and the kind of conspiracy evoked will be related 
          to political stance.  The resulting nominal and ordinal data can 
          be further processed with non-parametric statistical techniques 
          to construct the images of the Cassini mission, and of the risk 
          assessment process used by NASA, that are held by proponents and 
          opponents.  

     3)   The most active proponents and opponents identified from activist 
          messages by frequency and/or volume of communications will be 
          sent an open-ended questionnaire asking them how they learned 
          about the plutonium RTGs/RHUs on Cassini-Huygens and what their 
          motivations were for becoming so involved in the issue.  At least 
          30 individuals on each side will be approached by e-mail or mail 
          (after directory searches on WhoWhere, Bigfoot, and Netscape 
          people finders and telephone directories).  They will be 
          specifically asked about the relative importance of various 
          media, friends and family, employers, and reference groups in 
          shaping their images of the program and their decisions to act.  
          They will also be queried about general political orientation.  
          Their interpretations of the risk assessment done for NASA 
          will also be elicited.  A brief follow-up questionnaire will be 
          submitted after the flyby to see if its outcome changes their 
          attitudes towards the use of RTGs/RHUs.  Returned questionnaires 
          will be processed by categorization and frequency counts and 
          ranking of intensity.  

     4)   Articles on Cassini will be located in newspapers and 
          newsmagazines through Academic Universe.  Attempts will also be 
          made to find television stories by searching national networks' 
          web page archives and ordering copies of any stories so 
          identified.  Their prominence will be assessed by page (or show) 
          placement and length.  Their foci will be placed on a continuum 
          between reporting on the purpose of the mission and the 
          environmental controversy over the plutonium RTGs/RHUs.  The 
          specific images of Cassini in particular and technological hazard 
          in general will be constructed with standard content analytic 
          methods for comparison with activist images.

     5)   The ownership structure of all national and nationally-important 
          regional print media will be described.  The goal is to learn if 
          any prominent media are part of corporations that do contract 
          work for NASA's solar system exploration division.  Number, size, 
          and prominence of stories on Cassini can be compared between 
          entities with NASA involvement and those without to test the 
          claim by Cassini opponents that there has been a media blackout 
          on Cassini and its plutonium load.

     6)   The impact of the controversy on elected risk management policy-
          makers is the target of the next type of data collection.  
          Interviews will be conducted with the staffs of the nine senators 
          on the Senate Committee on Commerce, Science, and Transportation 
          Subcommittee on Science, Technology, and Space and of the 25 
          members of the House Science Committee Subcommittee on Space and 
          Aeronautics.  The purpose of these structured qualitative 
          interviews is to obtain estimates of the volume of constituent 
          communications on the Cassini controversy, the balance of opinion 
          among them, and staff and elected officials' beliefs as to the 
          representativeness of these communications.  Additionally, these 
          interviews will ask about how politicians gauge the consequences 
          of acting in accordance with the majority opinion in the activist 
          communications or not.  They will be asked how constituent 
          communications may have affected their opinions on the use of 
          RTGs/RHUs in future space missions, willingness to fund further 
          exploration of the solar system in competition with other social 
          goals, and beliefs concerning NASA's trustworthiness in the 
          assessment and deployment of potentially hazardous technologies.  
          If at all possible, interviews with the senators and 
          congressional representatives will be arranged to hear their 
          views on these issues from them directly.  As with the activist 
          questionnaires, officials and/or staff will be approached again 
          after flyby to assess changes produced by the outcome of the 
          gravity-assist.  These interviews should add to knowledge of the 
          specific ways the activist public impacts the risk management 
          responsibility of the legislative branch in a democratic society 
          and the effects of successful or disastrous outcomes on a 
          specific case of such risk management.
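
The first-pass processing described in item 2, tallying messages by stance 
and asking whether the balance of opinion departs from an even split, can 
be sketched as follows.  All counts here are invented for illustration; 
the real tallies would come from the coded Usenet, listserver, and 
letters-to-the-editor corpora:

```python
# Hypothetical sketch: tallying coded activist messages by stance and testing
# whether the balance of opinion departs from an even three-way split.

from collections import Counter

def stance_counts(coded_messages):
    """Tally messages coded as 'oppositional', 'advocacy', or 'neutral'."""
    return Counter(coded_messages)

def chi_square_uniform(counts):
    """Chi-square goodness-of-fit statistic against equal expected counts."""
    total = sum(counts.values())
    expected = total / len(counts)
    return sum((obs - expected) ** 2 / expected for obs in counts.values())

# Invented example corpus of coded stances:
coded = (["oppositional"] * 140) + (["advocacy"] * 55) + (["neutral"] * 30)
counts = stance_counts(coded)
chi2 = chi_square_uniform(counts)

# With 3 categories (df = 2), the 0.05 critical value is 5.991; a statistic
# above it suggests opinion is not evenly balanced among those communicating.
print(counts, round(chi2, 2), chi2 > 5.991)
```

The same tally-and-test pattern extends to the ordinal intensity rankings, 
for which the non-parametric techniques named in item 2 would replace the 
simple goodness-of-fit test.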


TIMELINE OF PROPOSED ACTIVITIES

Ongoing self-funded preparatory activities include continued reading in the 
scholarly literatures most germane to the project (most recently, game 
theory), reading the environmental impact statements prepared by NASA, 
building my contacts within the Cassini team and among the most prominent 
critics of the project, examining web sites created by NASA and its 
critics, downloading Usenet postings for future analysis, attending 
conferences to disseminate progress reports on the project and solicit 
criticism, and organizing a special session on Cassini for the 24th Annual 
Hazard Research and Applications Workshop.  Other activities can be mapped 
out for more discrete blocks of time.

By June of 1999, three students will have been selected, hired, and trained 
for specific tasks.  These will probably be master's students associated 
with the Center for Hazards Research.  Two will each be hired for fifteen 
hours a week for a full year's duration, beginning in the summer of 1999, 
and the third will be hired for ten hours a week for the summer and fall of 
1999.

By July, the research assistants will have taken over the tasks of 
following, downloading, compiling, and printing Usenet postings and they 
will begin Academic Universe searches for articles and letters to the 
editor on Cassini and the RTG controversy.  Both the Usenet and article 
searches will go on throughout the summer of 1999, when controversy over 
the issue will be building to a peak expected around the flyby (scheduled 
for the 18th of August).  The controversy will ebb in the months after the 
flyby, much as it did following launch, suddenly if all goes well and more 
gradually if there is some kind of mishap.  Usenet and article searches 
will continue until December of 1999.

During early summer of 1999 before the flyby, open-ended questionnaires 
will be developed and sent to at least 30 activists on each side of the 
Cassini controversy.  The questionnaires will be e-mailed to the identified 
activists on Usenet and the WWW and mailed to others after searches in 
Internet people finders and telephone directories.  E-mail and telephone 
follow-up will be performed to increase response rates.  A brief follow-up 
will be designed and submitted to the same individuals after the flyby to 
assess the impact of the outcome of the flyby on their perceptions.  

Analysis of the activist questionnaires will ensue toward the end of 
October, 1999.  All results will be presented in aggregated form and/or in 
anonymous anecdotal form to preserve respondents' privacy.

Also during the summer of 1999, questionnaires will be developed for 
administration to the 26 individuals listed in the Cassini environmental 
impact statements as having responsibility for the assessment of the risk 
from launch or flyby accidents involving the RTGs/RHUs.  Administration of 
the questionnaires will begin in the month before the flyby, with any 
necessary follow-up continuing into October.

Summer of 1999 will also see correspondence with Federal elected office-
holders or their staffs.  This correspondence will be to set up structured 
qualitative telephone interviews with staff in the summer before flyby and 
with the office-holders themselves if at all possible.  They will be 
queried again after the flyby, as well.

Content analysis of all print media articles collected from searches in 
Academic Universe and on any television or radio coverage uncovered will go 
on mostly during fall of 1999.  Data entry and processing should occupy the 
fall of 1999 and beginning of 2000.

The latter part of spring 2000 will be devoted to writing up findings.  
Final results will be presented to the American Association for the 
Advancement of Science, the Association of American Geographers, and a 
panel in the 25th Annual Hazards Research and Applications Workshop in 
Boulder, Colorado.  The graduate students will be encouraged to present 
individual facets of the project to regional professional associations.  
Publication in refereed national journals is also planned, and the project 
is of a scale that may additionally result in a book.  


THE BROADER RAMIFICATIONS OF THE STUDY

The whole Cassini controversy, particularly the risk assessment done under 
the aegis of NASA, is likely to be understood, both by supporters and 
opponents of the mission, as part of the growing delegitimation of science 
in American cultural and political discourses.  Science is a particular 
approach to the production of reliable knowledge, which is undeniably 
difficult to master, the more so when the phenomenon under study is 
inherently uncertain. Education in science has been deteriorating for a 
variety of reasons, undermining public understanding of technological risk 
assessment.  The scientific outlook is the product of years of 
apprenticeship to a necessarily hierarchical guild, and yet science on a 
daily basis transforms ordinary human life (Sagan 1996).  The significance 
of scientific impacts on human life implies the necessity of democratic 
oversight, but that oversight presumes mastery of science, which is not 
widespread.  This gap engenders some public fear, frustration, and even 
resentment of the intellectual elitism inherent in science, as much as 
admiration for many of its products.  Science as a process, and its 
practitioners, are sometimes blamed for the abuses of some of its results, 
notably the military application of nuclear science, an association that in 
this case bears very directly on Cassini.  Questions of fairness in payment 
for, control over, and benefits from science have also led to academic 
attacks on the 
very epistemological validity of science, in the form of the 
deconstructionist and postmodernist movements presently sweeping the 
humanities, arts, social sciences, and even some of the natural sciences.  
This background of delegitimation of science, both popular and academic, 
may weaken the hand of technology proponents and strengthen that of 
opponents in the technology management issues to come.  This broad cultural 
trend is already producing further decline of the American space program, 
and the Cassini controversy both illustrates and reinforces the general 
trend.  The proposed project will help establish how this controversy may 
affect future space missions involving RTGs/RHUs undertaken by the United 
States and other space-faring nations.  

Above and beyond the case of Cassini, this project contributes to risk 
management theory in the following ways.  At a general level, it will 
holistically model the key players in a national technological hazard 
controversy, their internal tensions, and the specific channels by which 
they interact with one another in the risk assessment and risk management 
policy relationship.  More specifically, it will represent the interaction 
of a mission-committed agency, its allies in the private sector, and 
proponent activists, on the one hand, with opponent activists, on the other 
hand, through the process of activist recruitment from the uninvolved 
public, in order to bring pressure to bear on risk management policy-
makers.  Part of that process is the shaping of public perceptions through 
media coverage (or non-coverage), which is likelier to transform people 
into opponent activists (or leave them in the uninvolved public) than to 
transform them into proponent activists.  Another key part is activist 
resolution of cognitive dissonance in facing risk assessment analyses which 
essentially dismiss the hazard -- technical deconstruction of the risk 
assessment or conspiracy-theorizing -- both of which can be used for 
recruitment.  The result of this process of recruitment is pressure on 
elected policy-makers, who must react to this pressure under their own 
conditions of political uncertainty and risk.  That is, to what extent are 
the criticisms of risk assessment presented by opponents valid?  To what 
extent is an opponent campaign really representative of public opinion or, 
more importantly, of probable voters?  To what extent can failure to 
respond to such pressure be offset by private sector political campaign 
donations?


RELATIONSHIP TO PI'S OTHER WORK

The proposed project on social construction of the plutonium aboard the 
Cassini-Huygens spacecraft carries forward my work in hazards over the past 
five years or so.  My earliest published work on hazards (Rodrigue 1993) 
differentiated hazard risk from hazard vulnerability in the case of 
chaparral fire hazard in montane Southern California.  It specifically 
addressed social differences in risk and vulnerability to this hazard.  
Perceived differences in social vulnerability, and the questions of 
fairness they raise, are among the key concerns in the proposed project and 
in technological hazards research in general.

Having been caught in the epicentral community of the "Northridge" 
earthquake of 1994 (Reseda), I focused my growing interest in hazards on 
that particular event.  It is in this earthquake project that I first 
developed an interest in media construction of a hazard and expanded my 
competence in literature content analysis.  With my collaborators, Eugenie 
Rovai and Susan E. Place, I compared actual damage patterns in the Los 
Angeles City Department of Building and Safety database with patterns of 
media attention.  Many communities were grossly undercovered or 
overcovered, and the demographic attributes of the two groups contrasted 
sharply.  These disparities were echoed in telephone surveys of local 
residents' mental maps of the disaster and, disturbingly, in the patterns 
of recovery from the disaster through time (Rodrigue and Rovai 1995; 
Rodrigue, Rovai, and Place 1997).  My work on this event left me with an 
enduring concern for the interplay between media representation of hazard 
in general and public environmental perception, and these are among the key 
questions in the proposed project.

Another recent project was the examination of this general question in the 
case of the 1998 El Niño-attributed floods and mudslides in Southern 
California (Rodrigue and Rovai 1998b).  This project entailed a short field 
visit to Southern California by Eugenie Rovai and myself, together with a 
team of graduate students (Adam Henderson, James Hotchkiss, and Stacy Potter).  
The costs of the project (roughly $1,600) were reimbursed by the University of 
Colorado, Boulder, as part of their Quick Response program.  The QR program is 
funded ultimately by NSF, under Grant No. CMS-9632458.  The study found 
that the print media (the Los Angeles Times) actually provided spatially 
well-balanced coverage relative to local residents' mental maps, an 
unexpected result for which no explanation emerged.  The media "hype" of 
El Niño did have a number of salutary effects on local residents, such as 
raising expectations of a very bad winter and encouraging maintenance of 
emergency kits.  The results of this field study cautioned me against 
assuming that media representations of hazard are automatically biased; in 
this case, one of them did a creditable job.  Such impartiality can only 
enhance the proposed project.

I recently completed a California State University System Technology 
Learning Productivity Project with Dr. Eugenie Rovai on coordinating two 
different classes (my Natural Hazards course and her Advanced Cartography 
course) to create student web pages on nine California disasters, featuring 
clickable web map interfaces.  The project amounted to about $25,000, 
supported two graduate students at ten hours a week for an academic year 
and involved a third through a credit internship.  It eventuated in a 
national professional presentation (Rodrigue and Rovai 1998a), a campus 
workshop presentation, and a month-long web exhibition of the students' 
work.  A publication is under development as well.  As the proposed project 
will demand, I have experience supervising teams working on complex 
problems, with successful and well-disseminated results.

In short, the proposed project carries forward my dominant concerns over 
the interplay between media and public hazard perception.  It marks a new 
development for me, however, in that the proposed project is concerned with 
technological hazard, rather than the natural hazards that have occupied my 
attention for the past five or six years.  The proposed project also 
demands of me a deepened understanding of hazard perception, 
differentiating the perceptions of the public, of activists, of risk 
assessment experts, and of elected risk managers/policy decision-makers.  
In the future, I expect that, because of this proposed project, I will move 
further into the area of technological hazards and risk communication, as 
the latter affects public perception of such hazards and political 
pressure on elected officials. 


SECTION D:  REFERENCES CITED

Augustine, Norman. 1998.  What We Don't Know Does Hurt Us:  How Scientific 
     Illiteracy Hobbles Society.  Science 279 (5357):  1640-1641.

Bagdikian, Ben Haig.  1997.  The Media Monopoly, 5th ed.  Boston:  
     Beacon.

Berglund, Eeva K. 1998.  Knowing Nature, Knowing Science: An Ethnography of 
     Local Environmental Activism.  Cambridge:  White Horse Press.

Blaikie, Piers; Cannon, Terry; Davis, Ian; and Wisner, Ben. 1994. At Risk: 
     Natural Hazards, People's Vulnerability, and Disasters. London and New 
     York: Routledge.

Breyer, Stephen. 1998.  The Interdependence of Science and Law.  Science 
     280, 5363 (24 April):  537-538.

Brown, Halina Szejnwald, and Goble, Robert L. 1990.  The Role of Scientists in 
     Risk Assessment.  Risk 1, 4 (fall): 283 ff.

Clarke, Arthur (Sir). 1998.  Presidents, Experts, and Asteroids.  Science 
     280, 5369 (5 June):  1532-1533.

Cohen, Jeff, and Solomon, Norman. 1995.  Through the Media Looking Glass : 
     Decoding Bias and Blather in the News.  Monroe, ME:  Common Courage 
     Press.  

Cohen, Jeff, and Solomon, Norman.  1993.  Adventures in Medialand : Behind 
     the News, Beyond the Pundits.  Monroe, ME:  Common Courage Press.

Covello, Vincent. 1991.  Risk Comparisons and Risk Communication.  In 
     Communicating Risk to the Public, ed. Roger E. Kasperson and Pieter 
     Jan M. Stallen.  Dordrecht, NL:  Kluwer.

Covello, Vincent T.; Sandman, Peter M.; and Slovic, Paul.  1991.  Guidelines for 
     Communicating Information about Chemical Risks Effectively and Responsibly.  
     In Acceptable Evidence:  Science and Values in Risk Management, ed. 
     Deborah G. Mayo and Rachelle D. Hollander, pp. 66-90.  New York and Oxford:  
     Oxford University Press.

Cranor, Carl F. 1997.  The Normative Nature of Risk Assessment: Features and 
     Possibilities.  Risk 8, 2 (spring): 123 ff.  

Cranor, Carl F. 1990. Scientific Conventions, Ethics and Legal Institutions.  
     Risk 1, 2 (spring): 155 ff.

Dawson, Sandra M. 1998. Cassini Launch Approval Planning Engineer.  Personal 
     communication critiquing this proposal (5 August).

Douglas, Mary, and Wildavsky, Aaron. 1982.  Risk and Culture:  An Essay on 
     the Selection of Technical and Environmental Dangers.  Berkeley, Los 
     Angeles, and London:  University of California Press.

Dunwoody, Sharon.  1994.  Community Structure and Media Risk Coverage.  
     Risk 5, 3 (summer): 193 ff.

Elliott, Deni.  1989.  Tales from the Darkside: Ethical Implications of Disaster 
     Coverage.  In Bad Tidings:  Communication and Catastrophe, ed. Lynne 
     Masel Walters, Lee Wilkins, and Tim Walters, pp. 161-170. Hillsdale, NJ: 
     Lawrence Erlbaum Associates.

Faludi, Susan. 1991.  Backlash:  The Undeclared War against American 
     Women.  New York:  Crown Publishers.

Fischhoff, Baruch. 1994.  Acceptable Risk:  A Conceptual Proposal.  Risk 
     5, 1 (winter): 1 ff.

Florida Coalition for Peace and Justice. No date.  "Project Censored" Names 
     Cassini #1.  In FCPJ:  Florida Coalition for Peace and Justice web page, 
     http://www.afn.org/~fcpj/index.htm.

Friedman, Sharon M. 1994.  The Media, Risk Assessment and Numbers: They Don't 
     Add Up.  Risk  5, 3 (summer): 203 ff.

Fritzsche, Andrew F. 1996.  The Moral Dilemma in the Social Management of 
     Risks.  Risk 7, 3: 291 ff.

Gans, Herbert J.  1989.  Deciding What's News : A Study of CBS Evening News, 
     NBC Nightly News, Newsweek, and Time.  New York: Random House.

Giere, Ronald N. 1991.  Knowledge, Values, and Technological Decisions:  A 
     Decision-Theoretic Approach.  In Acceptable Evidence:  Science and 
     Values in Risk Management, ed. Deborah G. Mayo and Rachelle D. 
     Hollander, pp.  183-203.  New York and Oxford:  Oxford University Press.

Grossman, Karl. 1996a. Don't Send Plutonium into Space.  Progressive Media 
     Project (May).

Grossman, Karl. 1996b. Risking the World:  Nuclear Proliferation in Space. 
     Covert Action Quarterly (Summer).

Haraway, Donna.  1990.  Primate Visions:  Gender, Race and Nature in the 
     World of Modern Science.  London and New York:  Routledge.

Harman, Jay R.; Harrington, John A., Jr.; and Cerveny, Randall S.  1998.  
     Balancing Scientific and Ethical Values in Environmental Science.  
     Annals of the Association of American Geographers 88(2): 277-286.

Heiman, C.F. Larry. 1997.  Acceptable Risks:  Politics, Policy, and Risky 
     Technologies.  Ann Arbor:  The University of Michigan Press.

Henderson-Sellers, Ann.  1998.  Communicating Science Ethically:  Is the 
     "Balance" Achievable?  Annals of the Association of American 
     Geographers 88(2): 301-307.

Herman, Edward S. and Chomsky, Noam.  1988.  Manufacturing Consent: The 
     Political Economy of the Mass Media.  New York:  Pantheon Books.

Hohenemser, Christoph, and Kasperson, Jeanne X. 1982.  Introduction.  In Risk 
     in the Technological Society,  ed. Christoph Hohenemser and Jeanne X. 
     Kasperson, pp. 1-11.  Boulder, CO:  Westview Press.

Hollander, Rachelle. 1991.  Expert Claims and Social Decisions:  Science, 
     Politics, and Responsibility.  In Acceptable Evidence:  Science and 
     Values in Risk Management, ed. Deborah G. Mayo and Rachelle D. 
     Hollander, pp. 160-173.  New York and Oxford:  Oxford University Press.

Jasanoff, Sheila. 1991.  Acceptable Evidence in a Pluralistic Society. In 
     Acceptable Evidence:  Science and Values in Risk Management, ed. 
     Deborah G. Mayo and Rachelle D. Hollander, pp. 29-47.  New York and Oxford:  
     Oxford University Press.

Johnson, Branden. 1993. Advancing Understanding of Knowledge's Role in Lay Risk 
     Perception.  Risk 4, 3 (summer): 189 ff.

Kasperson, Roger E. and Kasperson, Jeanne X. 1991.  Hidden Hazards.  In 
     Acceptable Evidence:  Science and Values in Risk Management, ed. 
     Deborah G. Mayo and Rachelle D. Hollander, pp. 9-28.  New York and Oxford:  
     Oxford University Press.

Kasperson, Roger E. and Stallen, Pieter Jan M., eds.  1992.  Communicating 
     Risk to the Public. Dordrecht, NL:  Kluwer.

Kunreuther, Howard; Slovic, Paul; and MacGregor, Donald. 1996. Risk Perception 
     and Trust: Challenges for Facility Siting. Risk 7, 2 (spring): 109 
     ff.

Ledingham, John A. and Walters, Lynne Masel.  1989.  In Bad Tidings:  
     Communication and Catastrophe, ed. Lynne Masel Walters, Lee Wilkins, 
     and Tim Walters, pp. 35-45. Hillsdale, NJ: Lawrence Erlbaum Associates.

Lee, Martin A. and Solomon, Norman.  1991.  Unreliable Sources : A Guide to 
     Detecting Bias in News Media.  New York: Carol Publishing Group.

McGarity, Thomas O. 1990.  Public Participation in Risk Regulation.  Risk 
     1, 2 (spring):  103 ff.

McChesney, Robert W. 1997.  Corporate Media and the Threat to Democracy 
     (Open Media Pamphlet Series).  New York:  Seven Stories Press.

Margolis, Howard. 1996.  Dealing with Risk:  Why the Public and the Experts 
     Disagree on Environmental Issues.  Chicago and London:  The University 
     of Chicago Press.

Mayo, Deborah G. 1991.  Sociological versus Metascientific Views of Risk 
     Assessment.  In Acceptable Evidence:  Science and Values in Risk 
     Management, ed. Deborah G. Mayo and Rachelle D. Hollander, pp. 249-279.  
     New York and Oxford:  Oxford University Press.

Mazur, Allan.  1998.  A Hazardous Inquiry:  The Rashomon Effect at Love 
     Canal.  Cambridge, MA, and London:  Harvard University Press.

Mazur, Allan. 1994.  Technical Risk in the Mass Media.  Risk 5, 3 
     (summer):  189 ff.

Merchant, Carolyn.  1990.  The Death of Nature : Women, Ecology, and the 
     Scientific Revolution.  San Francisco:  Harper.

NASA. 1997.  Final Supplemental Environmental Impact Statement for the 
     Cassini Mission.  Washington, DC:  Office of Space Science, National 
     Aeronautics and Space Administration.

NASA. 1995. Final Environmental Impact Statement for the Cassini Mission.  
     Washington, DC:  Solar System Exploration Division, Office of Space 
     Science, National Aeronautics and Space Administration.

NASA.  1992.  National Environmental Policy Act; Outer Solar System Exploration 
     Program: Information Update, Notice 92-58.  Federal Register 57 FR 
     46198.

Peters, Hans Peter. 1994.  Mass Media as an Information Channel and Public 
     Arena.  Risk 5, 3 (summer): 241 ff.

Phillips, Peter. 1997.  Censored 1997: The News That Didn't Make the News 
     --The Year's Top 25 Censored News Stories.  New York:  Seven Stories 
     Press (the volume listing the plutonium on board Cassini as the top most-
     censored story of 1996). 

Quarantelli, E.L. 1989.  A Study of Disasters and Mass Communication. In Bad 
     Tidings: Communication and Catastrophe, ed. Lynne Masel Walters, Lee 
     Wilkins, and Tim Walters, pp. 1-19. Hillsdale, NJ: Lawrence Erlbaum 
     Associates.

Rodrigue, Christine M. and Rovai, Eugenie.  1998a.  Construction of an 
     Interactive Map for the Web by Students in Paired Classes.  Presentation to 
     the National Social Science Association, San Diego (April).

Rodrigue, Christine M. and Rovai, Eugenie.  1998b.  El Niño and 
     Perceptions of the Southern California Floods and Mudslides of 1998.  With 
     the assistance of Adam Henderson, James Hotchkiss, and Stacy Potter. 
     Quick Response Report 107, Natural Hazards Center, University of 
     Colorado, Boulder.  http://www.Colorado.EDU/hazards/qr/qr107.html.

Rodrigue, Christine M. and Rovai, Eugenie. 1995. The "Northridge" Earthquake:  
     Differential Geographies of Damage, Media Attention, and Recovery.  
     National Social Science Perspectives Journal 7, 3:  97-111.  

Rodrigue, Christine M.; Rovai, Eugenie; and Place, Susan E. 1997. Construction 
     of the "Northridge" Earthquake in Los Angeles' English and Spanish Print 
     Media: Damage, Attention, and Skewed Recovery. Presentation to the Southern 
     California Environment and History Conference, Northridge, CA. 
     http://www.csuchico.edu/geop/chr/scehc97.html.

Sagan, Carl. 1996.  The Demon-Haunted World:  Science as a Candle in the 
     Dark.  New York:  Ballantine Books.

Sandman, Peter M. 1994.  Mass Media and Environmental Risk: Seven Principles.  
     Risk 5, 3 (summer):  251 ff.

Scanlon, Joseph. 1989.  The Hostage Taker, the Terrorist, the Media:  Partners 
     in Public Crime.  In Bad Tidings:  Communication and Catastrophe, 
     ed. Lynne Masel Walters, Lee Wilkins, and Tim Walters, pp. 115-130. 
     Hillsdale, NJ: Lawrence Erlbaum Associates.

Schechter, Danny; Browne, Jackson; and McChesney, Robert W.  1997.  The More 
     You Watch, the Less You Know:  News Wars/(Sub)Merged Hopes/Media  
     Adventures.  New York:  Seven Stories Press.

Shain, Russell E. 1989.  It's the Nuclear, Not the Power and It's in the 
     Culture, Not Just the News.  In Bad Tidings:  Communication and 
     Catastrophe, ed. Lynne Masel Walters, Lee Wilkins, and Tim Walters, pp.  
     149-160. Hillsdale, NJ: Lawrence Erlbaum Associates.

Shrader-Frechette, Kristin S. 1998.  First Things First:  Balancing Scientific 
     and Ethical Values in Environmental Science.  Annals of the Association 
     of American Geographers 88(2): 287-289.

Shrader-Frechette, Kristin S. 1995.  Evaluating the Expertise of Experts.  
     Risk 6, 2 (spring): 115 ff.

Shrader-Frechette, Kristin S. 1990a.  Perceived Risks versus Actual Risks:  
     Managing Hazards through Negotiation.  Risk 1, 4 (fall): 341.

Shrader-Frechette, Kristin S. 1990b.  Scientific Method, Anti-Foundationalism 
     and Public Decisionmaking.  Risk 1, 1 (winter):  23 ff.

Silbergeld, Ellen K. 1991.  Risk Assessment and Risk Management:  An Uneasy 
     Divorce. In Acceptable Evidence:  Science and Values in Risk 
     Management, ed. Deborah G. Mayo and Rachelle D. Hollander, pp. 99-114.  
     New York and Oxford:  Oxford University Press.

Singer, Eleanor, and Endreny, Phyllis M.  1994.  Reporting on Risk: How the Mass 
     Media Portray Accidents, Diseases, Disasters and Other Hazards.  
     Risk 5, 3 (summer):  261 ff.

Slovic, Paul. 1991.  Beyond Numbers:  A Broader Perspective on Risk Perception 
     and Risk Communication.  In Acceptable Evidence:  Science and Values in 
     Risk Management, ed. Deborah G. Mayo and Rachelle D. Hollander, pp. 48-
     65.  New York and Oxford:  Oxford University Press.

Slovic, Paul; Fischhoff, Baruch; and Lichtenstein, Sarah.  1982.  Rating the 
     Risks:  The Structure of Expert and Lay Opinions.  In Risk in the 
     Technological Society, ed. Christoph Hohenemser and Jeanne X. 
     Kasperson.  AAAS Selected Symposium 65:  141-166.  Boulder, CO:  Westview 
     Press.

Smith, Conrad. 1992.  Media and Apocalypse: News Coverage of the Yellowstone 
     Forest Fires, Exxon Valdez Oil Spill, and Loma Prieta Earthquake. 
     Westport, CT, and London:  Greenwood Press.

Soja, Edward W. 1991.  Postmodern Geographies: The Reassertion of Space in 
     Critical Social Theory. New York: Verso.

Solomon, Norman and Cohen, Jeff.  1997.  Wizards of Media Oz:  Behind the 
     Curtain of Mainstream News.  Monroe, ME:  Common Courage Press.

Spilker, Linda J., ed.  1997.  Passage to a Ringed World:  The Cassini-
     Huygens Mission to Saturn and Titan.  Washington, DC:  NASA.

Stallings, Robert A. 1994.  Hindsight, Organizational Routines and Media Risk 
     Coverage.  Risk 5, 3 (summer): 271 ff.

Steinem, Gloria. 1990.  Sex, Lies and Advertising.  Ms. Premiere issue of 
     ad-free format: 18-28.  

Stevens, Elizabeth Lesley. 1998.  Mouse-ke-fear.  Brill's Content 
     (November): 94-103.

Weldon, Dave. 1997.  NASA's Cassini Mission Is Safe.  Space News 
     (September 22-28) http://www.house.gov/weldon/.


© Christine M. Rodrigue, Ph.D., all rights reserved
first placed on web: 01/14/99
last revised: 01/22/99