Risk Assessment Science and Risk Management Policy
Risk assessment science is that diverse family of sciences that describes hazardous situations and analyzes the physical and biological mechanisms underlying them and the social processes putting people in their way. Risk assessment science tries to quantify the probabilities that such situations will produce disastrous events, as well as the specific risks such events pose for different human groups and social assets. Among the fields that include risk assessment functions within their broader concerns and mandates:
- Physical sciences: geology, physical geography, meteorology, climatology, hydrology, oceanography
- Life sciences: epidemiology, toxicology, medical geography, biogeography, ecology, zoology, botany
- Social sciences: psychology, sociology, anthropology, human geography, health geography, political science, economics
- Applied sciences:
  - Physical: structural engineering, civil engineering, electrical engineering, mechanical engineering, aeronautical engineering
  - Life: medicine, nursing, veterinary medicine, forensics
  - Social: ergonomics; public health; urban, environmental, and regional planning; social work
Risk management is a broad family of decision-making functions that draw on an understanding of the risk assessment sciences to plan for disasters and emergencies and to develop policies to mitigate risk, prepare for the worst, manage events, and coördinate recovery from them.
- Risk management includes those people who directly deal with the consequences of disaster: "boots on the ground"
  - First responders, including firefighters, police, paramedics
  - "Pre" first responders, including family members, neighbors, spontaneously self-organizing communities of helpful strangers
  - Emergency medical services: emergency room physicians and nurses and those with allied skills who might be able to work under their direction in a truly massive emergency (psychiatrists, dentists, acupuncturists, chiropractors, veterinarians)
- Risk management also includes people with policy responsibilities:
  - Planners (urban, small town, regional, environmental), who are mandated to write and update municipal and county general plans (including their safety elements) or coördinate these among regional entities (e.g., Southern California Association of Governments, Association of Bay Area Governments) or the state
  - People with managerial responsibilities at higher administrative levels within first-responder agencies (e.g., district chiefs, fire marshals, fire chiefs, and fire commissioners in fire departments; deputy chiefs and police chiefs in police departments): emergency managers
  - People appointed to high managerial positions within private and public institutions (e.g., hospitals, port authorities, airport authorities, transit districts, utilities, OES, FEMA, DHS, chemical and petroleum companies ...) with significant risk management responsibilities: also commonly referred to as emergency managers
  - Elected policy makers (e.g., mayors, council representatives, boards of supervisors, governors, state legislators, Congressional representatives and senators, the president)
Risk assessment science necessarily deals with inherently uncertain situations. Even in the best-understood systems, there are uncertainties about how specific processes can produce specific consequences.
- Will we ever be able to predict earthquakes well enough to issue forecasts or warnings?
- Will we ever be able to predict the weather more than a few days ahead, given the chaotic atmospheric processes underlying it?
- Will we ever be able to predict whether a given bridge of a given construction, subject to given traffic conditions, will spontaneously fail?
- Will we ever be able to predict which individuals under which circumstances will become susceptible to terrorist or paranoid ideation and when they will carry out a major terrorist incident (let alone which type of incident)?
It is difficult for risk assessment scientists to communicate the nature of the uncertainty surrounding their analyses to risk managers in terms that an outsider to their field can process and understand. Humans, in general, have real problems understanding probability (even statisticians!), let alone probabilities in some subject in which they have little background.
Hypotheses in science are tested under conditions of unavoidable uncertainty, but it is possible under these circumstances to make rational choices. The notion of a Type I and a Type II error, as used in statistics, is useful here.
- A Type I error is seeing a big effect and concluding from it that something important is going on, when, in fact, random chance is producing the effect by coïncidence.
- A Type II error is seeing a small effect and concluding from this that the effect is so trivial that it is some random hiccup, when, in fact, the effect, though small, is real and important.
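To make these two error types concrete, here is a minimal simulation sketch (not part of the original notes; the sample size of 30, the half-standard-deviation "real" difference, and the 0.05 cutoff are all illustrative assumptions). It simply counts how often a fixed decision rule produces each kind of mistake:

```python
# Illustrative sketch (assumed values, not from the notes) of Type I and Type II
# error rates under a fixed decision rule (reject "nothing but randomness" when p < 0.05).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05            # decision rule: reject the "nothing but randomness" hypothesis when p < alpha
n, trials = 30, 10_000  # assumed sample size per group and number of simulated studies

# Type I error: both groups really come from the same distribution,
# yet we sometimes "discover" a difference anyway.
false_alarms = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_alarms += 1

# Type II error: a real (but modest, half a standard deviation) difference exists,
# yet we sometimes dismiss it as a random hiccup.
misses = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.5, 1.0, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        misses += 1

print(f"Type I rate (false alarm):    {false_alarms / trials:.3f}  (hovers near alpha = {alpha})")
print(f"Type II rate (missed effect): {misses / trials:.3f}")
```

Tightening the cutoff (say, to 0.01) pushes the Type I rate down but the Type II rate up, which is exactly the trade-off described in the next paragraph.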
A scientist or a manager needs to accept that, no matter what you do, you may make the wrong decision. You may get all excited over nothing (maybe wasting resources to deal with it), or you may dismiss something critically important (and cause great losses by failing to catch it). Since you can't know ahead of time whether you are making a mistake and, if so, which kind of mistake, what you do is imagine the worst possible consequences of either mistake. Then, set up decision rules that reduce the chance of committing the more serious mistake (while accepting that this generally means raising the chance of making the less serious mistake).
- For most scientists, the worst error is usually the Type I error, deluding themselves into thinking they've made some big discovery when they haven't, so they tend to prefer setting a very high standard for rejecting the "nothin' but randomness" hypothesis. This is usually the 95% confidence level or even the 99% confidence level (so that you have only a 5% (0.05) or even a 1% (0.01) chance of making a Type I mistake).
- For many businesses, the worst error in, say, marketing, may be the Type II error, failing to see some effect that might represent an exploitable business opportunity. They commonly use slacker standards than most scientists do, perhaps the 90% confidence level or even the 80% confidence level: They can live with a 10% (0.10) or even 20% (0.20) chance of making a Type I error, because the consequences of a Type II error are more serious for them.
- There is a further "plot complication" in using probability value cutoffs (e.g., 0.05) as decision rules: It is possible to have a highly significant (very low prob-value) difference or association even if the effect size is trivial. You might have a correlation coëfficient of 0.20 (meaning an effect size of 0.04) that turns out "highly significant" (prob-value of 0.001) if you have a very large sample! So, you have to be very careful in assessing what happened during a statistical test. Check not only the prob-value (or confidence level) but also the effect size and the sample size. A trivial effect can seem "significant" with a large enough sample, and a very large effect size can be ruled not significant (large prob-value) in an underpowered small sample. It's best to think this issue out before you develop your sample for a study (or thesis): You can run scenarios about how big a sample would have to be to detect a given effect size at your chosen prob-value cutoff.
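The prob-value/effect-size/sample-size point can be checked directly. The sketch below is an illustration using assumed, simulated data (a true correlation of roughly 0.20), not an analysis from these notes: it shows the weak effect coming out "highly significant" in a huge sample, typically failing the usual cutoff in a small one, and then runs the kind of sample-size scenario just mentioned, using the standard Fisher's z approximation.

```python
# Illustrative sketch: a trivial effect size can be "highly significant" in a big sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                          # assumed very large sample
x = rng.normal(size=n)
y = 0.20 * x + rng.normal(size=n)   # weak true relationship, correlation of roughly 0.20

r, p = stats.pearsonr(x, y)
print(f"n = {n}: r = {r:.2f}, effect size r^2 = {r**2:.3f}, prob-value = {p:.1e}")
# r^2 is only about 0.04 -- a trivial effect -- yet the prob-value is astronomically small.

# The same weak effect in a small, underpowered sample usually fails the 0.05 cutoff:
idx = rng.choice(n, size=25, replace=False)
r_small, p_small = stats.pearsonr(x[idx], y[idx])
print(f"n = 25: r = {r_small:.2f}, prob-value = {p_small:.3f}")

# Sample-size scenario: roughly how many cases are needed to detect r = 0.20
# at the 0.05 cutoff with 80% power (Fisher's z approximation)?
alpha, power, r_target = 0.05, 0.80, 0.20
z_alpha = stats.norm.ppf(1 - alpha / 2)
z_beta = stats.norm.ppf(power)
n_needed = ((z_alpha + z_beta) / np.arctanh(r_target)) ** 2 + 3
print(f"Approximate n needed: {n_needed:.0f}")   # on the order of 190-200 cases
```

Running a scenario like the last few lines before you collect any data is the kind of planning recommended above: it tells you whether your proposed sample even has a realistic chance of detecting the effect size you care about.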
As an aside, most American citizens run into this Type I/Type II dilemma during jury duty. As a juror, you will never really know if the defendant committed the crime or not. The only people who truly know are those who were there (and survived the incident), but perpetrators, victims, witnesses, and investigators may lie or even have honestly mistaken perceptions and memories.
- As a juror, you will nonetheless have to make a decision about guilt or innocence:
  - You may make the right decision, deeming a guilty party guilty or an innocent party innocent.
  - You may, however, make the Type I error of being convinced "beyond reasonable doubt" that the defendant is guilty, when, in fact, s/he is innocent.
  - You may make the Type II error of being convinced the defendant is innocent and vote to acquit a guilty party, who may then go forth to commit another crime or even kill someone.
- The American system of jurisprudence demands the highest standard to avoid a Type I error, particularly in a criminal trial and most especially in a capital trial. In this sense, the spirit of American law is similar to that of science.
- The American system is explicitly designed to allow the acquittal of ten guilty parties rather than allow the execution of a single innocent party.
- This preference for avoiding a Type I error even at the cost of more Type II errors is often pretty exasperating to many citizens (and, indubitably, many people in law enforcement), who are frustrated with crime and want to "throw the book" at someone, but it is fundamental to the American system and lies right at the root of the Founding Fathers' concern to limit the power of the State over life and death.
Okay, so, with this background on how risk assessment science is done and how scientists go about decision-making, let's bring risk management into the picture.
- Risk assessors have to communicate their understanding of a hazardous situation to someone who can make decisions or policy to deal with it.
- In other words, risk assessment science is junior to risk management in any complex bureaucracy.
- Risk management decision-making should, ideally, be informed by and incorporate the most current scientific understanding. But that relationship between assessment and management is loaded with "plot complications" above and beyond the difficulty involved in conveying the uncertainties around scientific analyses.
Risk decision-making has its own duality to deal with. In a risk situation surrounded by uncertainties, managers will lean toward one of two characteristic positions:
- the precautionary principle: This is a guiding principle that, if there is uncertainty but a significant chance that human life or health could be lost, you should err on the side of caution. This could exact opportunity costs, as when a promising new technology can't be deployed out of concern about its safety, with economic losses as a result.
- the de minimis principle: This is a belief that life is a risky affair, and some risks have to be tolerated in order to experience some social good or avoid an absurd level of expenditures. The focus, then, is on the idea of acceptable risk, of finding the threshold below which a risk becomes tolerable, an acceptable trade-off for economic growth, jobs, or the enjoyment of a new technology.
- A consistent moral argument can be made for either position, and the moral argument often reflects a political leaning.
- Politically conservative managers tend to favor the de minimis principle and see the precautionary principle as government meddling in the economy. They tend to see a Type I error as even worse than the scientific community does because of the costs involved in regulation. So, they will demand virtually perfect certainty about something being harmful before they will accept the necessity of regulation. Of course, while you're waiting for all the i's to be dotted and all the t's to be crossed, a process may be ramping up so badly that, past a certain point, you may no longer be able to do anything about it (particularly if there is a lag between a mitigation action and an actual reduction in hazard).
- Politically liberal managers tend to favor the precautionary principle and feel that their duty is to protect the citizenry even if that might impose economic opportunity costs. They are far more concerned about the consequences of failing to detect a significant effect and taking effective action against it than about the economic costs of such response. Given that human life is at stake, they are willing to act at lower standards of certainty.
- With this framework, the whole conflict between climate change scientists and the climate change deniers clicks into place (as well as similar debates about whether to limit exposure to one or another toxin or whether to require more stringent building codes to cope with earthquake, hurricane, or fire hazards).
- If humans are contributing to climate warming through the release of greenhouse gasses, it could be extremely expensive to reduce the production of those gasses and those costs could delay economic development for a lot of the human population. If we delay applying the brakes, however, planetary changes may be so drastic that these economic opportunity costs will look like pocket change.
- There are, also, built-in lags in the planet's climate systems (such as the time it would take for carbon dioxide surpluses to be drawn back out of the atmosphere) and there may be critical tipping points that, once crossed, can't be uncrossed, thrusting us into a very different dynamic equilibrium. The longer we wait, the longer it will take for the system to return to the equilibrium to which our species is adapted and the higher the risk that the system will tip into an equilibrium we may find intolerable.
So, there may be a real difference in tolerance of uncertainty and acceptance of risk between risk assessment scientists and the risk managers they are attempting to communicate with. But that's not the only plot complication. Risk managers are embedded in bureaucratic power structures that impose constraints on their freedom of action.
- Risk managers may have a lot on their plates, juggling all kinds of incompatible issues and demands, some of which are more urgent than important in the bigger scheme of things.
- They might feel pressure from the known biases of THEIR own bosses.
- There may be managerial pressures coming down on a manager from sources outside the organization (perhaps Congress or the White House or a state legislature and governor or the mayor's or city manager's office -- or the voting public affecting them -- or lobbyists or shareholders).
- In a manner of speaking, managers, especially in government agencies, are trying to deal with their own Type I and Type II dilemmas, affecting their careers.
- Do they manage a perceived risk that doesn't really pose much of a hazard, wasting budget on it or alienating constituent groups of voting and tax-paying citizens?
- Do they ignore an issue, thinking there isn't much substance to it, and then have it blow up horribly and get people killed or create terrible environmental damage (e.g., BP and the Deepwater Horizon disaster)?
- So, a lot might be going on when a risk assessor tries to bring an issue to the attention of a manager, and this may dilute the perceived urgency of the message.