
THE ROLE OF SCIENCE IN PUBLIC POLICY: HIGHER REASON, OR REASON FOR HIRE?

ABSTRACT. The traditional vision of the role science should play in policy making is of a two-stage process: scientists first find out the facts, and policy makers then decide what to do about them. We argue that this two-stage process is a fiction and that a distinction must be drawn between pure science and science in the service of public policy. When science is transferred into the policy realm, its claims to truth get undermined because we must abandon the open-ended nature of scientific inquiry. When we move from the sphere of science to the sphere of policy, we pick a point in the open-ended scientific process and ask our experts to give us the answer. The choice of this end-point, however, must always be either arbitrary or determined by non-scientific factors. Thus, the two stages in the model of first finding the facts, and then making a decision about what to do, cannot be clearly separated. The second stage clearly affects the first. This conclusion has implications for existing scientific policy institutions. For example, we advocate that the environmental assessment process be radically overhauled, or perhaps even abandoned. It will be our position that ultimately a better model for the involvement of scientists in public policy debates is that of participants in particular interest groups (''hired guns''), rather than that of supposedly unbiased consultants to decision-makers.
KEY WORDS: environment, environmental assessment, philosophy, public participation, public policy, science
1. INTRODUCTION
Recent events have demonstrated that the question of the proper role of scientists in the process of political decision-making continues to be controversial. For example, endangered species legislation in Canada was considered unacceptable by many environmental groups because the final decision of whether a species was at risk was left to politicians rather than scientists. We have heard spokespersons from the beef industry call press conferences to complain that the mad cow scare is merely a political problem, and that ‘‘if we followed the science’’ then the trade in cattle across borders would not be hindered. The SARS crisis invoked similar concerns. Health professionals, politicians, and tourism spokespersons in affected countries argued that ‘‘the science’’ clearly informed us that it was safe to visit their countries.
We will argue that the sentiments illustrated in these examples are an expression of a naïve hope that science can deliver a conclusive solution to our knotty policy problems. The hope is that after much input and discussion from various sources, the scientists will enter the fray and cut the Gordian knot of policy difficulties. Scientists would deliver the final word on a policy about SARS, or mad cow, or whatever. These claims share the view that science is capable of resolving difficult issues because it appeals to reason and truth, and is supposedly free from political bias. This is the technocratic vision where decisions are best left up to the experts. This assumption underlies many political institutions such as Environmental Impact Assessments (EA) and Species at Risk Acts. As is well-known, however, this vision of science is too idealized. As many recent philosophers and political scientists have pointed out, controversies cannot be solved so easily because scientists, too, disagree. For example, Daniel Sarewitz describes the ''myth of authoritativeness,'' which is the illusion that science can somehow provide a ''rationally best solution'' to political controversies (Sarewitz, 1996, pp. 71–96). Therefore, these modern critics argue that science cannot and should not make what are essentially political decisions, but that scientists should still try to be unbiased ''third party'' contributors to political debates. Post-modern critics advance another line of objection by arguing that there can be no such thing as empirical truth at all, and therefore that scientists can provide only one more perspective of no greater value than any other.

We will be siding with those who, like Sarewitz, still value scientific input, yet reject the notion that it can decisively resolve value conflicts. Our case is not based in arguments for the relativity of truth, or the marginality of science. Instead, we will argue that when scientific claims enter the public policy realm, they must always do so in concert with value claims. Using EA as an illustrative case, we will show how such institutions are the product of the technocratic vision and the hope for a ''technological fix.'' However, our analysis is meant to apply more widely to any institution set up to provide supposedly unbiased information meant to help resolve political controversies related to scientific issues. Such institutions include scientific advisory panels on bioethics, species at risk, and technological assessment. Instead, scientists must involve themselves directly in more traditional democratic political processes, which is where policy decisions should be made anyway.
2. SCIENCE-BASED POLICY: THE TECHNOCRATIC VISION
The hope for an objective solution to public policy debates, such as the ones mentioned in the introduction, seems a reasonable concern. Naomi Oreskes also notes this commonly held belief when she writes that ‘‘we all want our views to be based on truth, and many of us look to science to provide truth’’ (Oreskes, 2004, p. 369). If we do not base our public policy decisions, at least in part, on objective science, then it is feared that our decisions will be open to the whims of political fashion instead of being grounded in reality. Consequently, many argue as Chris Mooney does for science-based decision-making, where science is held up as ‘‘an impartial scientific source,’’ or ‘‘an expert bi-partisan arbiter’’ (Mooney, 2005, p. 48). What the three debates mentioned in the opening paragraph of this essay have in common is an assumption that ‘‘the practical decision context is predominantly one in which science is an important and possibly decisive...base for policy and regulation’’ (Doern and Reed, 2000, p. 5). That is to say, they hold out hope for a ‘‘science-based’’ resolution to these kinds of problems. Richard Lewontin writes that ‘‘Scientists truly believe that except for the unwanted intrusion of ignorant politicians, science is above the social fray’’ (Lewontin, 1991, p. 8). At the core of such an idealized picture of science lies the belief in a universal truth that can be reached through a common scientific method and expressed in a common scientific language. The entrenched skepticism of scientists will lead them to be epistemologically conservative and never make statements beyond what the evidence warrants, while the international nature of scientific methods of verification and systems of peer review act to prevent and remove any biases. On this view, the origin of an idea, or what motives one might have for wanting it to be true, cannot have any final influence on whether that idea actually is true. Experiments and observation determine that. An idea is accepted as likely to be true if it actually works to improve explanatory and/or predictive power.
This conception of scientists being above the social fray and existing somehow outside of the political world is a longstanding one, and is portrayed in Francis Bacon's scientific utopia New Atlantis (1942). In this 17th-century utopia, scientists enjoy a privileged status. They live apart from the rest of society and interact with the public very little except for brief episodes when they break from their isolated lives as researchers in ''Solomon's House'' to deliver truths to the masses with much ceremonial fanfare (p. 258). These episodic forays into the public sphere serve to emphasize the special nature of the sciences as distinct from everyday political and civil discourse. In this influential portrait of science, Bacon describes science as an apolitical tool that provides objective information to allow politicians to resolve their political disagreements. Science itself is free of all these nonobjective matters. Bacon writes that:
In contrast with civil business it [science] never harmed any man ... Its blessing and reward is without ruin, wrong or wretchedness. For light is in itself pure and innocent; it may be wrongly used, but cannot in its nature be defiled. (Bacon, 1966, p. 92)
3. HIGHER REASON: THE TWO-STAGE MODEL OF POLICYMAKING
The technocratic view is conceptually flawed. Although science can inform policy-makers, it cannot make the decisions, because the decisions are essentially ethical and political, not factual. For example, science can help me choose between two rival hypotheses about the facts, such as whether high cancer rates are caused by pollution or by lifestyle. However, it cannot provide a way for me to decide whether the risks of pollution are worth the trade for what we get in return; namely, high-paying jobs and cheap transportation. Science can help me decide whether global warming is really occurring and what its possible causes are, but it cannot help me decide whether the distribution of risks and benefits derived from its causes is fair or unfair. Scientific claims, if they are to have any meaning in policy debates, must always be coupled with appropriate evaluative claims in order to construct arguments for actions for policy makers to consider. Hence, this is a two-stage and sequential process. First, the scientists present the facts to politicians. Next, the politicians determine the significance of these facts for relevant political controversies and make a decision. Roger Pielke, Jr. calls this the ''linear model – get the facts right, then act'' (Pielke, Jr., 2004, p. 406). Once science provides statements of fact, policy makers must combine these statements with relevant evaluative statements in order to make reasonable decisions. This two-stage view is also described by Philip Kitcher and Sheila Jasanoff, who remark that science and democracy are ''objectively dissimilar'' (Kitcher, 2001, p. 44), in that the latter is about a ''form of rule'' and the former a quest for ''truths about the natural world'' (Jasanoff, 2004, p. 150). But if these two stages, or epistemological realms, are in fact separate, then one must consider how they can interact. As Jasanoff poses the challenge, ''Though science may be all about finding the truth, it takes democracy to make sure those truths are also significant'' (p. 152). The question is how scientists should interact with evaluators. Should they still conceive of themselves as a distinct group who can act as third party interveners between competing evaluative positions?
4. MANDATED SCIENCE
However, the entire vision of a two-stage process is highly questionable. There is a distinction that needs to be made between pure science and science recruited for political debate. These two realms are distinct, and have been labeled ''curiosity-driven science'' (Jasanoff, 2004) and ''mandated science'' (Salter, 1988), respectively. Referencing Jasanoff, Mooney writes that ''we shouldn't confuse peer review of curiosity-driven science to be published in a journal with peer review of science used to make a government regulatory decision'' (Mooney, 2005, p. 148).
Because of this distinction, scientific input into policy decision-making cannot simply be a sequential process of first finding out the value-neutral facts, and then making political decisions involving values. The actual nature of policy decision-making is such that the two-stage process is a fiction. This procedure of first finding out the facts and then making a decision about what to do about them only works if we assume that we can find all the relevant facts, and this assumes that we have all the time in the world. Curiosity-driven science is an open-ended process, and, as has been well demonstrated by historians and sociologists of science, science can retreat from positions once firmly entrenched. In the long run, though, the traditional view of science has it that it will eventually weed out all the mistakes and arrive at empirical truths. This takes time, however. As Jack Stern notes, ''Sometimes ... it will take decades of scientific squabbling'' (qtd. in Chen, 2006, p. 67).
Policy decision-making, however, does not operate in the same realm as pure, curiosity-driven science. When science gets transported into the policy realm, its claims to truth can often be undermined because scientists no longer have all the time in the world, but must provide definitive statements on issues determined to be relevant by decision-makers. Decisions must be made and not postponed until absolute scientific consensus has been reached, and thus scientific input to contentious policy debates must be solicited in the here and now. Hence the term ''mandated science.'' When science is called upon to serve political or legal ends, it must adapt itself to the requirements of the political process, which requires decisive judgment. Liora Salter elaborates this point in her book Mandated Science (1988). Expert scientific testimony is sought out by those on all sides of a political position. Environmentalists, government, and private industry typically enlist their own scientific experts to provide support for or against a certain policy initiative. Mandated science is this expert scientific testimony used in policy making. The testimony of scientific experts is dissimilar to their normal scientific discourse because when science is transferred into the policy arena, it is deliberately shaped in ways that will facilitate the reaching of clear conclusions and policy recommendations. This shaping determines which empirical facts are useful, which scientists are solicited to provide those facts, and how those facts are expressed to make an argumentative case for an evaluative conclusion. Salter points out that conventional science often cannot supply clear answers to the questions posed by policy-makers, because of well-respected limits to scientific certainty and the open-ended nature of scientific inquiry. Mandated science, however, must provide clear answers. This clarification comes not from further science, but from non-scientific judgments. Salter cites the example of setting policy to protect against lead pollution: it must be decided whether to measure ambient levels of lead in the environment, or exposure levels in particular people. This is not a strictly objective decision and involves assumptions that are most likely driven by background value judgments.
Another interesting example of how this characteristic of mandated science functions is given by K.S. Shrader-Frechette in Risk and Rationality (Shrader-Frechette, 1991, pp. 63–65). In the 1970s, the Canadian Atomic Energy Control Board commissioned a study to compare the risks between conventional energy sources such as nuclear and coal, and non-conventional systems, such as solar and wind energy. Herbert Inhaber reported that the risks from solar and wind were greater than those from nuclear. This surprising conclusion was reached, argues Shrader-Frechette, because some very questionable assumptions were made. For example, by not accounting for energy production that is not of ‘‘utility grid quality,’’ Inhaber overlooked the potentially enormous contributions of solar energy for space heating. Inhaber also assumed that all non-conventional energy systems must be backed up by conventional systems. The result of this assumption was that the greater part of the risk assessed to non-conventional energy systems was actually derived from their conventional energy back-up systems. Shrader-Frechette writes that ‘‘virtually every assumption he made in estimating and evaluating alternative risks had the effect of increasing his alleged non-conventional risks and decreasing his alleged conventional risks’’ (Shrader-Frechette, 1991, p. 64). This example illustrates how it is that scientists can support any side of a policy debate. What appears to be an objective assessment of risks is actually, and inescapably, riddled with value judgments. None of these value judgments are necessarily unreasonable, yet different value judgments will result in different interpretations of the facts. Both sides of this debate might be equipped with ‘‘good science’’ but the difference in their background value assumptions will be what leads to contradictory understandings. The disagreement here is about the values assumed, such as the need for conventional energy back-up systems, or the need for ‘‘utility grid quality’’ energy. Evaluative assumptions like these are unavoidable because all policy decisions require judgments about which questions to investigate, which procedures to follow, when to end the investigation, etc.
Mandated science also differs from ordinary science in its handling of the problem of uncertainty. When scientific experts present research to a nonscientific community, they must be aware of how legal standards of proof and certainty differ from scientific standards, and must alter their comments accordingly. To do otherwise will generate misunderstanding. They must present their results in a way that is somehow acceptable both to the community of scientists and to the community of lawyers and policy personnel. Yet these two spheres have very different languages and functions. To do this straddling, scientists end up speaking in an ''opaque'' language that hides and glosses over the differences between them (Salter, 1988, p. 8). The movement between concepts of scientific certainty and legal ideas of certainty is only one characteristic of mandated science that Salter identifies.
It seems, then, that mandated scientists lead double lives. They must appeal to the objectivity of ideal curiosity-driven science. This is how they justify their special role in policy-making (Salter, 1988, p. 5). Appeal to the ideal lends credibility to the science by leaning on its reputation for cool rationality and impartial objectivity. Scientists must act as if the moral, legal, and other non-scientific implications of how their work gets used are irrelevant to the work itself. The experts are ‘‘expected to act as if they were neutral arbiters’’ (Salter, 1988, p. 9). According to Salter, this is merely an act, however, because in practice, mandated science does not, and cannot, conform to this ideal if it is to fulfill its function of providing concrete answers to evaluative problems. Scientists engaged as expert witnesses are cognizant of the fact that ‘‘any statement of scientific issues is inherently also a statement which privileges some interests and not others’’ (Salter, 1988, p. 9). This split personality of mandated science, therefore, indicates that the two-stage model described above, which separates science and policy into two distinct spheres that operate sequentially, does not represent the reality. While science and policy-making might be different jobs, Salter argues, they cannot be disentangled. The debates in mandated science are usually ‘‘mixed disputes’’ including both truth-seeking and justice-seeking components (Salter, 1988, p. 202).
5. SCIENCE IS MANY
This discussion of mandated science reveals that the two-stage process described above is indeed a fiction. Science in the mandated realm does not, and cannot, provide a complete picture of the facts, nor a decisive insight into the facts that will allow policy makers to resolve controversies about science. When mandated scientists ''access'' the curiosity-driven science data bank, they must select and assemble facts in ways that will serve policy makers. We will describe three suggested mechanisms of how science can seemingly support numerous and incompatible policy positions, and thus serve as a ''hired gun'' in the service of numerous parties. Collingridge and Reeve have noticed this promiscuous quality of science when they argue that the necessary engagement with values by scientists in the mandated realm narrows their focus when seeking evidence in support of their evaluative goals. Sarewitz similarly observes that the values of the mandated science realm sometimes result from differences in the perspectives of disparate scientific disciplines. Scientists can take data and ''legitimately assemble and interpret it in different ways to yield competing views of the issue'' (Sarewitz, 2006, p. 105). We provide a third explanation. When one moves from the sphere of curiosity-driven science to the sphere of mandated science, scientists must pick an end-point in the open-ended scientific process, and supply an answer that, it is hoped, will decisively resolve the issue in question. Such decisions must either be arbitrary or based in values, and given this choice, we believe that it is legitimate in the mandated sphere to engage values on such questions.
Collingridge and Reeve argue that it is impossible to expect scientific experts to reach consensus on any ‘‘mixed disputes,’’ like those described by Salter, so it is unwise to rely on science in public-policy making. They argue that ‘‘rather than science being a natural servant to the needs of policy, there is a fundamental antagonism between them, relevance to policy effectively destroying the conditions under which technical consensus may be expected’’ (Collingridge and Reeve, 1986, p. 32). This is because policy decisions, unlike purely curiosity-driven scientific research, involve very high political stakes or ‘‘error costs’’ as they put it. Therefore, scientific experts involved in policy disputes must ‘‘do their best to provide technical evidence in its support, but then any influence they may have is often cancelled out by a rival group of experts proclaiming evidence for the contrary course of action’’ (p. 26). The inexorable result of scientists engaging with values, thus, can be ‘‘endless technical bickering’’ (p. 6).
Sarewitz argues that the source of this endless bickering is that science is inherently fractured along disciplinary lines. In any policy debate, various findings from different scientific disciplines will inevitably serve the argumentative interests of one specific party better than those of other parties, and thus science can serve incompatible political objectives. ‘‘Put simply,’’ he writes, ‘‘for a given value-based position in an environmental controversy, it is often possible to compile a supporting set of scientifically legitimated facts’’ (Sarewitz, 2004, p. 389). His explanation of the origin of this characteristic of science is based in what he calls an ‘‘excess of objectivity,’’ which basically suggests that science represents such a ‘‘huge body of knowledge’’ that its ‘‘components can be legitimately assembled in different ways to yield competing views of the problem’’ (Sarewitz, 2004, p. 389). He suggests that this characteristic might be a result of inherent disciplinary ‘‘lenses’’ or ‘‘intellectual frameworks’’ that allow ‘‘a scientist to understand some slice of the world’’ in a way ‘‘related to the values that person holds’’ (Sarewitz, 2006, p. 105).
We provide another explanation for the ‘‘endless bickering’’ that Collingridge and Reeve describe. We argue that when scientists are called upon to make contributions to policy decisions, they must choose end-points to scientific research that must be determined either arbitrarily or by evaluative claims. Thus, the two stages in the model of first finding the facts, and then making a decision about what to do, cannot be separated. The first stage is, in fact, always dependent on the second stage; that is, the determination of the facts for mandated science is always conditioned by the goals of competing evaluative positions.
6. THE PROBLEM OF END-POINTS
If fact claims must always be combined with appropriate evaluative claims in order to contribute to rational evaluative policy debate, then there is a unique kind of bias that unavoidably enters into the presentation of findings by scientists, even if those scientists are completely objective in their scientific work. Namely, scientists must inevitably make value judgments about what constitutes an appropriate end-point to the study of complex issues. The decision to study one area of interest rather than another, or to declare that one's research has reached an appropriate end-point, is a non-scientific decision. It is not just a question of the direction of science, but a question of how far to pursue a line of inquiry.
In any public debate, various scientific findings will inevitably serve the argumentative interests of a specific party better than those of other parties. However, in the absence of a thoroughly exhaustive scientific analysis, one can never be sure that all appropriate avenues of scientific research regarding the issue in question have been properly pursued. When one is engaged in mandated science, one can never avoid skewing the debate in favor of a particular side, because the judgment of where to stop one's line of inquiry must be either chosen arbitrarily, or chosen according to one's ethical or political intuitions.
Consider this practical example of how scientific input can inadvertently skew a public debate. In the debate about global warming, opponents of the position that humans are causing a potentially dangerous change in the global temperature of the planet often make reference to negative feedback loops that might mitigate any warming. For example, an increased albedo effect – the reflection from ice and cloud cover – could potentially offset the effects of increasing CO2 in the atmosphere. Such research could be done in a properly scientific fashion. It could make the claim to represent essentially objective knowledge about the world. And yet, in the absence of an exhaustive scientific analysis of the situation, which might provide countervailing information supporting the pro-global warming side of the public debate, such findings might inadvertently arm one side of the issue. There is just no scientific principle for ensuring that a complete analysis of a complex situation has been achieved. It is not that the science about the albedo effect is bad science. The results of such research could well be beyond scientific reproach. Nor is our point that such science might be the result of biasing influences, for example, that the research might have been sponsored by oil interests. Our point is that for any policy issue, there will be many other possible relevant scientific findings that could profoundly alter one's conclusions.
For the sake of argument, let us assume that the global warming debate might hinge on three specific fact claims:
(1) An increase in CO2, in the absence of countervailing factors, will cause warming.
(2) The albedo effect is a countervailing factor that will offset the increase in CO2.
(3) Some yet unknown or yet to be well established factor will offset the albedo effect.
Because science is by its nature a process of on-going discovery, such a basic framework will always be a possibility for any policy issue:
(1) Some fact claim conclusively supports one side of an issue.
(2) Some fact claim puts this claim into serious doubt.
(3) Some yet unknown or yet to be well established claim puts this doubt into serious doubt.
Now this process is nothing more than a representation of the open-ended nature of science itself. As noted above in the section on mandated science, the problem is that when the scientific method enters into a public policy decision-making process, the situation changes significantly. It is fine for a curiosity-driven scientist to remain epistemologically open-minded, but in policy decision-making processes conclusive statements must be made.
It is not unreasonable to ask that professional scientists should be properly aware of their critics and even able to present in a reasonably unbiased way views that contradict their own. For example, someone asserting claim (1) should, as a scientist, be aware of someone asserting claim (2), and vice versa. It is not unreasonable to assume that professional scientists will be the best equipped in terms of such awareness and with a sense of obligation to be objective in presenting their overview of relevant findings. Lewontin gives voice to this basic scientific principle when he paraphrases the famous biologist Theodosius Dobzhansky: ‘‘the obligation to speak the truth about science was superior to all other obligations and that a scientist must never allow political considerations to prevent him from saying what he believes to be true’’ (Lewontin, 1991, p. 8). Scientists are supposed to try to be objective. But whether scientists are truly capable, either in practical or theoretical terms, of such objectivity is not the problem. The issue in a policy debate is whether we can have any confidence about where we are in such a process. Such a process will always have some practical end-point. In the case of our schema, the practical end-point is claim (2). The ethical issue that this fundamental epistemological predicament represents is whether scientists as a whole have any special claim to know that the process has properly ended at (2), as opposed to some other point of criticism, including those that are yet unknown or still highly uncertain. A professional scientist working for the proponents of an environmentally sensitive project might properly fulfill all scientific duties by reporting the state of the scientific debate of which he or she is aware, say to point (2). Such a scientist might have no information to indicate that there are any other plausible or likely iterations of the critical process. In good conscience he or she, whether a proponent of the side supported by claim (1) or claim (2), might report the current state of the science relevant to the debate.
Yet, as anyone can guess, the issue might hinge on some completely unknown or yet to be well established criticism of any of the preceding claims, some claim (3), that might radically change the entire complexion of the debate. The question we must consider is this: who should be responsible for judging that a proper overview of relevant scientific findings has been achieved? At the end of the day, we have to be prepared to ask about the values that have suggested a specific end-point to the overview of the relevant curiosity-driven science that will be presented in the mandated science realm. Has the proper amount of science been done? Have the right questions been asked? What has motivated the scientists in their course of discovery? There can only be two answers. Either it is what Philip Kitcher calls ''the serendipity of discovery'' (Kitcher, 2001, p. 141), in which case it is the curiosity of the scientists involved; or it is some value judgment by these scientists. But in either case, the investigation does not independently reach some objective end-point. End-points are chosen.
We are faced with a fundamental dilemma. In the mandated realm, scientists can either be unbiased, neutral arbiters or hired-gun contributors to one side of the debate, but they cannot be both at the same time. If serendipity has been what has determined an end point, then scientists can be considered to be unbiased parties to a dispute, but there can be no objective way of knowing whether their contribution should be considered decisive. If, on the other hand, it has been values that have determined the end-point, then scientists can no longer make claim to neutral, unbiased status.
Kitcher's examination of the history of U.S. national health policy and how it has been beset by ''conflicting criteria for judging the relative benefits of possible outcomes'' provides an excellent example of the kinds of conflicts that emerge from this basic dichotomy (p. 143). Asked to focus on ''public health needs,'' different committees provided different policies based on different conceptions of what constituted ''basic health,'' as well as the importance and scope that should be granted to scientists simply pursuing their own ''elitist'' scientific research interests.
In the face of this inescapable arbitrariness in choosing where to end the open-ended process of scientific research, how are policy makers to determine whether the findings are biased in favor of some particular side of a policy issue? They must obviously be aware of the possibility that certain evaluative positions may have supported contributing research, and that this must be taken into consideration in offsetting such contributions in various ways. But what of the research that has not been pursued for specific value reasons? In other words, what of the research that has been pursued for purely serendipitous reasons? Such research could also, inadvertently, bias the debate. But there can be no obvious objective way to even begin to seek to offset such a possibility. Scientists presenting their findings to decision-makers will always be prey to the possibility that they can simply inadvertently favor certain evaluative positions and not others. The decision to research one area rather than another is always in some sense arbitrary. Such a possibility cannot be ignored in public policy consideration.
To sum up, there is no clear two-stage process of first finding out value-neutral facts and then using these facts to help resolve a conflict between evaluative positions. Not only have values had to guide the specific kinds of research that have been pursued, but specific values must be applied to determining the end-point of a study. And these are not just any values – they are values that define the competing positions on the policy question that is under consideration. In other words, the so-called second stage must either determine the first, or the first stage, if based in pure serendipity, has no basis to present itself as a helpful third-party arbiter. So why grant scientists a special role as ''objective'' contributors to a public policy debate who supposedly can help dispassionately resolve the conflicts between competing evaluative positions?
7. THE IMPLICATION THAT ‘‘SCIENCE IS MANY’’
What are the practical implications of Salter's concept of mandated science? It seems that her distinction between science and mandated science means that scientists involved in policy-making can no longer be seen as objective Gordian-knot breakers. This is not because of post-modern concerns that science itself can never be objective, but because science in the service of policy is transformed, in the ways described above, so that it is no longer simply about empirical matters.
Does this mean that science has no legitimate role to play in policymaking? No. What it means is that science cannot be expected to play the lead role. To illustrate our point we will borrow some terminology developed by Yaron Ezrahi. Ezrahi argues that the utopian vision of science as a ‘‘depoliticized’’ sphere of reason, that could serve as a unifying bridge between opposing positions, can no longer be seriously entertained (Ezrahi, 1984, pp. 275–279). While it might still be a ‘‘politically useful fantasy’’ to appeal to the unifying vision of science, we can no longer appeal with confidence to lofty visions of scientific truth that would compel agreement amongst all reasonable persons. However, rather than taking this as a justification for rejecting any role for science in policy, he argues that scientific experts have a role to play in public affairs at the ‘‘micro-level’’ (p. 276).
Just as Salter still finds a role for science in ''mandated science,'' so too does Ezrahi find a role for scientists in practical decision-making at the micro-level. He does this by drawing an analogy with Robert Nozick's political ''multitude of micro-utopias'' that, in our modern political discussions, have replaced the quest for a single Utopia based in a grand unifying vision (p. 281). Few can still seriously appeal, for example, to monolithic political Utopias such as that of Marxism, which erroneously treat humanity as a single, undivided body. However, it is still thought possible for some framework of politics to serve many different visions of utopia. The utopian vision of science should be thought of in the same way as political utopias, argues Ezrahi, for ''science is often enlisted ... by many different micro-utopias'' (p. 283) representing a ''multiplicity of interpretative standpoints'' (p. 284). Science can no longer be thought of as a single, undivided body of thought. However, this does not eliminate the need for science, because a multitude of public interest groups can each find the science needed in support of their positions.
We are in agreement with the practical implications of these visions, in that we conclude that the proper role of science in our current policy decisions should not be that of the grand unifying vision, but rather that scientists should enter the policy arena at the micro-level as ''activist scientists'' participating in the work of competing evaluative positions. For example, in a recent debate in Ontario, Canada about the construction of a new highway through a natural area, one finds many conflicting scientific comments about the health risks of tree removal. Research findings published in the journal Science claim that if levels of air pollution rise, then the incidence of genetic mutations in mice will double (Somers et al., 2004, pp. 1008–1010). One of the authors of that article, Jim Quinn, weighed in on the debate surrounding the building of the new highway by claiming that ''the findings should be considered by planners trying to locate highways or whether to remove or plant trees in a city setting'' (Morrison, 2004, p. A9). In response, ''Chris Murray, acting director of the Red Hill Valley [highway project], said a 2003 health effects report found no adverse effects due to the loss of trees in the valley'' (qtd. in Morrison, 2004, p. A9). Clearly the public is faced with many such instances of conflicting policy opinions between professional scientists. It is time to do away with policy instruments that can falsely present themselves as the ''one legitimate voice of science'' acting in the public interest.
We see now that there are good reasons for rejecting the belief that scientists should be viewed as objective contributors to public policy debates. We will argue that when it comes to the contribution of scientists to evaluative decision-making, scientific contributions must be understood as necessarily paired with specific value claims. In other words, even if one accepts that science per se can represent a certain kind of ''higher reason,'' we should fashion our policy-making tools in a way that makes it clear that science, in the mandated realm, is always also ''reason for hire.'' This treatment of scientists as advocates might have benefits analogous to Adam Smith's ''hidden hand,'' writes Kai Lee (Lee, 1993, p. 96). As part of his adaptive management theory, Lee supports a process of what he calls ''policy-oriented learning,'' which can be the ''by-product of competition among policy actors, including experts, politicians, and bureaucrats, all of whom act as advocates'' (p. 96).
This conclusion, if properly understood, has huge implications for policy tools created in the light of the technocratic vision of science and policy. For example, we will use the business of environmental assessments to illustrate how scientists can be involved in policy-making in a way that gives the misleading impression that they represent ''higher reason'' rather than ''reason for hire.''
8. ENVIRONMENTAL ASSESSMENTS
The idea of environmental assessment finds its origins in the early 1970s with the creation of the American Office of Technology Assessment (OTA). The American Technology Assessment Act, which outlined the goals of the OTA, had the purpose of ''disentangling knotty technical issues with the aim of making Congressional debate on such complex matters more informed and rational'' (McGinn, 1991, p. 246). The act was created after the dramatic cancellation of the U.S. program to build a supersonic airliner like the Concorde. At that time, the perception of many members of Congress was that the program had been allowed to go on too long at such great expense because legislators had been unable to properly understand the scientific and technological issues involved. The hope was that a special institution attached to Congress, the OTA, could help clarify and present complex scientific and technical matters to decision-makers who were not professional scientists or technicians. The idea of doing environmental assessments is a global extension of this basic technocratic vision.
In 1986, Collingridge and Reeve could afford to observe with only mild bemusement that ''the production of vast, almost entirely unread environmental impact statements is now quite a cottage industry in America'' (p. 3). Since then, EA has become a multi-million dollar industry (Statistics Canada, 2000). Thus, it is all the more important that we be critically aware of the nature of the connection between policy and science. We advocate that the routine use of EA processes by government be reconsidered. It will be our position that a better model for the involvement of scientists in public policy debates might be that of members of particular interest groups, rather than that of supposedly unbiased consultants to decision-makers.
Yet the use of such assessments continues to expand around the world. The influential UN report, Our Common Future, mentions that ‘‘An increasing number of countries require that certain major investments be subject to an EA. A broader environmental assessment should be applied not only to products and projects, but also to policies and programmes, especially major macroeconomic, finance, and sectoral policies that induce significant impacts on the environment’’ (WCED, 1987, p. 222). In Europe ‘‘a council Directive of the European Economic Community bound all twelve member states to require an EA for specific public and private projects’’ (CCREM, 1988, p. 8). It would be impossible to review all the different forms these processes can take, but they all share certain basic similarities, which are a result of their common heritage from the OTA in the United States. The specific version of environmental assessment we will examine in more detail here is the one used in the province of Ontario, Canada (EAA, 1990).
As outlined by David Estrin and John Swaigen, an EA is a planning process mandated by a government to
identify and assess the effects that a program, plan, project, or other undertaking might have on the natural and human environment. For an assessment process to be acceptable, the assessment must be used to determine whether the undertaking should proceed and, if so, what step should be taken to reduce or mitigate the negative impacts. This planning process usually consists of carrying out a study of the program, plan, or project. The study should be comprehensive. It should identify the direct and indirect costs of an undertaking in terms of such things as environmental degradation, the use of energy and resources, and social and economic disruption, and weigh these costs against the benefits of the undertaking. (Estrin and Swaigen, 1993, p. 188)
Over the years, various forms of public input have been added to the EA process in the hopes of making the process less technocratic. Maarten Hajer has traced the history of the development of the environmental movement and legislation starting in the 1960s and posits that a fundamental change occurred in the 1980s in terms of the approach of the public and governments to the environmental crisis. Before the 1980s, the emphasis was on a ''react-and-cure'' approach, but a reaction set in as a result of ''growing frustration with the perceived insensitivity of the 'new class' of experts and technocrats to the spheres of the human life world'' (Hajer, 1995, p. 88). As a result, after 1980, a movement Hajer calls ''ecological modernization'' took hold in industrial countries. One of the characteristics of this new approach to the environmental crisis was a ''reconsideration of the existing participatory practices'' (p. 28) and a move to ''acknowledge new actors [beside technocrats], in particular environmental organizations and to a lesser extent local residents'' (p. 29). According to Hajer, ecological modernization showed itself in ''an opening up of the existing policy-making practices and the creation of new participatory practices (from regular consultancy to active funding of NGOs, from reconsideration of the procedural rules of EAs to the regular employment of round table discussions and environmental mediation)'' (p. 29).
In Ontario, EA legislation was updated in the early 1990s to reflect such desires for greater public involvement in the process of assessment. In the original Ontario Environmental Assessment Act, no place for public participation was made in the preparatory stage of an EA study. After 1991, some room for consultation about the form a study will take was added to the process. Various forms of funding for participants in the final public consultation were also added. The modifications were guided by the hope that ''instead of leaving the analysis solely to industrial experts and government bureaucrats and politicians, environmental assessments ask the public about their concerns and gives them a chance to make their evaluation of the effects of a project'' (Estrin and Swaigen, 1993, p. 190). However, according to Estrin and Swaigen, the move to shift power ''away from proponents, bureaucrats, and experts toward the affected public'' was not without opposition. Some were concerned that increasing involvement of the public would undermine the objective quality of EA studies and their scientific authority, which prompted Estrin and Swaigen to note that ''this loss of authority explains one aspect of the ongoing opposition to environmental assessment'' (p. 190). However, despite all these changes, and in response to such concerns, the core function of EA in Ontario is still ''to ensure that decisions are made following a rational and objective planning process'' (p. 199).
Although no specific guidelines are laid down about the kinds of professionals who can undertake the studies upon which assessments must be based, in practice only professional scientists and engineers are relied upon. ''An Environmental Assessment Panel is a group of experts, usually four to six, selected on the basis of their knowledge and expertise of the project under review'' (Government of Canada, 1980, p. 3). The basic policy for how assessments are to be done reads like a simplified statement of the scientific approach: ''The ministry and the boards have said that to be acceptable, an environment ...'' (Estrin and Swaigen, 1993, p. 199). In Ontario, the minister of the environment cannot reject an assessment because it is inadequate, but can only ''order the proponent to do further study'' (p. 201). This policy acknowledges the dynamic nature of the scientific process from which new findings can continuously emerge. However, in practice this policy encourages the submission of highly ''complex scientific and planning studies'' by proponents in order to preempt potential criticism about the adequacy of their study (p. 201).
Despite this fact, the most common complaint made by critics of EA is still that such assessments are not thorough enough. As Estrin and Swaigen note, ''Two substantive weaknesses of the Act are the absence of requirements to assess either cumulative and synergistic impacts or the sustainability of the undertaking'' (p. 211). Finally, at all the stages when the public is allowed to comment on the study, some have argued that even greater financial support must be provided to opponents, ostensibly to offset the financial resources available to proponent organizations.
These problems, and the history of the attempts to address them, are indicative of a fundamental weakness in the EA process, rather than being a sign of healthy development, as many commentators, such as Hajer, imply. Our analysis has shown how the ongoing concern with questions about how to determine the adequacy of EA processes is a manifestation of a fundamentally flawed understanding of how science can be linked with any kind of essentially evaluative decision-making process. The hope of creating public policy tools that are explicitly designed to introduce a greater degree of objectivity into policy discussion is a naïve hope.
Further public support and contribution to the scientific stage of the process cannot solve the problem. In seeking public input to the scientific stage of the decision-making process, we are seeking to supplement the input of environmental experts with the input of the evaluators whom the scientific experts were originally meant to help. And simply adding further scientific input cannot solve the problem either because, at root, the evaluative problem of determining an end-point remains. In the end, we must face the possibility that the institutions we have created to provide a special way for scientists to help resolve public policy issues might actually be good examples of what Alan R. Drengson calls a ''technological fix,'' which is the attempt to overcome a problem that is fundamentally ethical in nature by the creation or modification of a technology. That is, they might merely be a technological means to allow ourselves to avoid questioning our values, especially values concerning our commitment to other problematic technological means.
9. THE ENVIRONMENTAL ASSESSMENT APPROACH AS JUST A TECHNOLOGICAL FIX
Some philosophers and social critics, such as Lewis Mumford (1964), Jacques Ellul (1964), and Langdon Winner (1977), have argued that our society is spell-bound by ‘‘experts.’’ These critics suggest that this enthrallment can lead to a situation where the public can inappropriately absolve itself of responsibility for political decisions and the need for citizen participation in policy debates about problematic technologies. Our argument is that the use of the EA process can contribute to this kind of enthrallment because it can support a widely held belief that the resolution of environmental controversies is best left to scientific experts. Such a belief would be reasonable given the large amounts of money expended on EA reports and the foundational assumption behind the whole EA approach; namely, that it is primarily a lack of scientific understanding that lies behind many of our environmental failures. However, it is possible to question this basic assumption behind the use of the EA process. If these challenges are correct, then past environmental failings might have different causes than a lack of proper scientific input. If this is the case, then throwing vast resources and effort into adding to and improving upon the transmission of empirical information into decision-making processes will not only be wasteful, it will be distracting.
The belief that past failures concerning large government projects were a result of a lack of appropriate institutions, such as EA, is just one possible explanation of the environmental crisis. Many environmental philosophers, for example, are convinced that there are fundamental problems with our society's dominant ethical attitude towards nature, and that this attitude is the root cause of environmental problems. On the other hand, some philosophers of technology, such as Mumford, Ellul, and Winner, have suggested that technology can become inappropriately ''autonomous'' of human control. Not all philosophers believe that the problems of the environment have resulted primarily from a lack of scientific information about possible effects of specific human activities. It is beyond the scope of this paper to determine which of the perspectives on the origins of the environmental and social crises of our time are correct. It is enough to simply note that there is profound disagreement about the fundamental causes of these issues, and that an understanding of this debate would suggest that it is possible that the major environmental and social problems of our time might not have resulted primarily from a lack of proper scientific input into public policy decisions. But even if one does not advance a specific case about the nature of our society's general ethical failures concerning the environment, but simply acknowledges that ethical claims, if false, can undermine the usefulness of any relevant empirical claims in a policy argument, then scientific policy instruments like EA can be of little help.
Continued reliance on the EA process could possibly even undermine our ability to effectively address the environmental crisis. It could side-track environmental activists into complex administrative battles over the pursuit and interpretation of empirical findings, when a broader discussion of the values of society as a whole might be what is more critical in addressing a specific issue. If the criticism of those who argue that the environmental crisis is primarily a result of our society's flawed ethical attitude towards nature or technology is true, then the most important kind of action that must be undertaken by people concerned about the environment is an effort at changing the fundamental values of their fellow citizens. Some might argue that Environmental Assessments at least provide an opportunity for a generally indifferent public to become aware of and informed about important environmental issues. However, this observation just raises the question of why the public is so indifferent and uninformed in the first place, and ignores the risk posed by the use of the EA process as a technological fix. This criticism assumes that the EA process is neutral in its effect on a society enthralled by the technocratic vision, rather than an ''enabler'' of this enthrallment. In a society still highly influenced by the technocratic vision, the public might simply find the existence of EA a convenient excuse to believe that environmental decisions are being left in more capable hands. This is why we would argue that the most appropriate forums for the discussion of controversial technological activities are the traditional democratic ones of public meetings, public inquiries, legislatures, referenda, courts, and the media.
However, the existence of a technological fix can also provide new opportunities for the manipulation of the public. Consider this example of how easy it can be for government officials to use the assumption that science can deliver final authoritative judgments about environmental issues to seek to shut down vexing public debate. The leader of the government of the province of Ontario, for example, recently commented on the still much-contested proposal to build the Red Hill Valley expressway: ''We believe we have had a full environmental assessment and we believe it is in the public interest to proceed'' (Nolan, 2004). It is unclear what the premier's comment about a ''full assessment'' being done is supposed to imply. If our analysis is correct, it could only mean that he feels that at this moment in time the empirical evidence of the study just happens to support his government's evaluative position on the project. Or it could represent his determination that the empirical evidence, presented and judged as adequate for reasons compatible with the values of his government, supports his government's position on the project. This allows for a possibility of manipulation of the public, either consciously or unconsciously. The premier might be using the idea that scientists are unbiased third party arbiters, an idea still held by many in our society, as a way of seeking to discourage further legitimate debate about the values at the core of his government's position on the issue.
10. INESCAPABLE WEAKNESSES OF EA
Perhaps the greatest possible objection to our practical recommendation is that the criticisms we have made about the EA process can at best only support a search for further improvements to the process, not its elimination. In other words, perhaps a flawed, but evolving instrument is better than no instrument at all. However, it is our further contention that the supposed solutions to the main difficulties actually work to exacerbate the fundamental difficulties we have described.
The ad hoc modifications used to address the problems of the technocratic origins of the EA process, such as increased public participation in the construction of studies, and the financial support of contending interest groups in the final comment stage, have led to an increasingly bloated process. The increasing expense of EA processes is a major concern of both professional EA experts and environmental activists, although for different reasons. As Estrin and Swaigen note, ''How much of the cost and delay associated with the (EA) process is caused by the need to assess impacts thoroughly and objectively and how much of it is unnecessary and can be avoided through efficient administration of the Act is a key question that must be answered to have effective reform that reduces cost and delay without resulting in superficial or manipulated assessment'' (p. 211). Environmentalists are concerned because concerns about costs have led the government of Ontario to adopt further ad hoc solutions to this problem, such as cutting down on the number of assessments done, seeking to do so-called ''class assessments'' (pp. 204–206), and using what in Ontario are called ''focused'' or ''limited environmental assessment hearings'' (Burman, 2004, p. A4). Because EA processes represent at least some opportunity for raising public awareness about environmental issues, environmentalists have been concerned by these trends that reduce or eliminate EA processes, and they often respond by calling for an increased use and emphasis on full EA processes. It is our position that they would perhaps do better to switch their allegiance from this policy instrument altogether to other means of influencing government policy.
The EA process started as an attempt to bring an objective vision to policy questions, as well as an attempt to make scientific findings more accessible to decision-makers. However, with regard to the first goal, of bringing an objective analysis to bear on policy questions, we have argued that EA is fundamentally unable to function without value guidance in the determination of end-points. The ongoing call for increasing public input into EA processes is a reflection of this fundamental inability. However, these changes can only add to the pressure on proponent scientists to load their studies with as much scientific evidence as possible to support their own positions, or those of their sponsors, in order to preempt the possible counter-research of opponents. Thus, the modifications crafted to respond to the inherent weakness we discuss run counter to the second goal of EA, which is to make empirical information more accessible to decision-makers with a democratic mandate. The process becomes more protracted and expensive, which not only taxes the public, literally, but also increases the burdens of patience and stamina on both proponents and opponents. Such a development can only add to the power of the EA process to act as a distracting technological fix.
11. A BETTER PROCESS?
A better model for introducing scientific findings into public debate would be simply to let the various contending parties be responsible for finding and presenting any relevant scientific claims that can help support the case for their preferred environmental public policies in traditional democratic forums. The increasing array of environmental public interest groups, which actively pursue scientific research and welcome scientific members, indicates that such a model is already being put into practice. As Ezrahi writes, ‘‘Science and technology ... are today feeding a vast network of microutopias’’ (p. 284). We believe that this new vision of scientists as welcomed participants in, and even leaders of, a multiplicity of competing environmental causes is a positive development over the old vision of scientists as disinterested parties in Bacon's ‘‘Solomon's House.’’
So what does the political landscape look like without EA? What would the role of science in public policy formation be without it? First, we should make clear that calling for the abolition of EA does not mean we desire the abolition of scientific input into decision-making or policy debate. What we think should take its place is a decision-making process that neither assumes that scientific input is required to make good policy decisions about environmentally sensitive projects, nor gives any specific group of scientists special access to, or control over, aspects of the decision-making process. This is an unavoidable conclusion of acknowledging that scientists, at least within the realm of mandated science, can make no claim to being unbiased third parties. Because of the problem of end-points that we have described, they will inevitably be partisans, whether by conscious choice or by happenstance, of some evaluative position. This conclusion does not imply that science is irrelevant to good decision-making. In fact, it implies that science can be critical for the creation of a coherent and compelling position on an issue. Thus, it would still be important for governments that are concerned about making good choices to commission scientific studies to help clarify relevant scientific uncertainties concerning their policy goals. They just should not be able to imply that such studies can resolve the evaluative conflicts that can arise about those goals.
Thus, the public consultation processes and the scientific study processes must at least be separated, and the public consultation processes must be the forum in which the scientific findings are presented. Those in control of such processes need not be scientists. This result could perhaps be most easily accomplished by simply removing the public consultation aspects that have been added to EA processes under the pretense of ‘‘ecological modernization’’ (Hajer, 1995). These ad hoc additions merely help to maintain the illusion that unbiased third-party science is being delivered for comment by evaluators. Furthermore, since it must not be assumed that science is necessarily critical for the solution of environmental policy issues, the automatic requirement of EA must also be dropped. What we envision is a return to a process of public hearings, which, like EA, could have automatic triggering criteria of some sort, but not criteria that simply assume that scientific study must be done.
A clear example of such a process can be found in the public inquiry held into the Yukon pipeline project of 1975. This inquiry is widely remembered in Canada because it resulted in the cancellation of a multi-million dollar project. In fact, when we ask students to find examples of EA that have led to the cancellation of proposed projects, this is inevitably one of the most common responses, despite the fact (or perhaps because of it) that it took place before the creation of EA legislation in Canada and thus was not actually an EA process.
Justice Berger was given a standard set of powers to hold public hearings and issue a report. His final report called for a 30-year moratorium on construction, primarily out of concern for the grievances of local native communities over land-claim issues, but also out of concern for the impact on caribou. The basis for Berger's report came from the extensive public hearing process that he undertook. He listened to natives, scientists, and other citizens without distinction.
In the absence of EA, scientists will have to become adept at the practice of influencing governments through the presentation of their views in traditional public forums. What we feel is needed more than EA is an increasing appreciation among scientists of the need to become scientific activists. The model for such activism can be found in Rachel Carson's warnings about the unsuspected effects of pesticides in the environment (1962). This tradition continues today in the actions of Jim Quinn, who has commented on the Red Hill Creek expressway on the basis of his own research. It can be found in the case of the 113 Fellows of the Royal Society of Canada who wrote to the government of Canada that its new Species at Risk Act does not provide adequate protection of endangered species habitat (Agnolin and Loverock, 2002, p. 4). The growing array of scientifically savvy advocacy groups that have arisen since the development of EA, such as Pollution Watch and Environmental Defense, also indicates that, with or without the changes that we recommend, scientists have begun to fill the role of activist and to strongly contest the findings of ‘‘official’’ environmental assessments.
There are two other practical concerns that could be raised in defense of EA. First, it could be argued that EA processes provide a useful way to notify the public that environmentally sensitive activities are taking place; second, that EA provides a much-needed vehicle for the distribution of public funds for environmental research.
We would argue that such functions are not intrinsic to the EA process, and so they could well be handled better by other policy means. For example, in Canada, Ontario's recent Environmental Bill of Rights makes provision for a registry of important governmental environmental decisions. This registry requires a central, publicly accessible database of government activities and judgments that are likely to have significant environmental impact. This kind of policy tool could provide a sufficient institutional response to the problem of how to involve the public and scientists more actively in the discussion of environmental issues. Citizens, activists, journalists, lawyers, and politicians can peruse this registry and then prepare cases for or against such activities. The ensuing public debate must, of course, include reference to relevant scientific findings. Each interested party will find the scientists and data necessary to support its position. Lawrence Busch suggests that this might be done by developing ‘‘science shops’’ where community members can submit questions to scientists (Busch, 2000, pp. 167–168). He notes that these science shops have been successful in the Netherlands and have been relatively inexpensive.
There are also important questions concerning the proper funding of curiosity-driven science in our society. For our vision to work, the parties to decision-making processes must have adequate access to relevant scientific findings. This raises important equity questions about public access to, and direction of, curiosity-driven scientific research, about the problems of government and corporate control, and so on, such as those posed by Jasanoff and others in their discussions of the ‘‘governance’’ of a ‘‘well-ordered science’’ (Jasanoff, 2004, p. 152; Middendorf and Busch, 1997). Some might think that our vision could result in the loss of significant funding for environmental research, namely the funding provided to some stakeholders to help finance their participation in the public consultation processes that have been added to EA. This source of funding might not be easily replaced by the various non-governmental organizations that we envision participating in environmental decisions. Questions about the proper financial support and direction of science are clearly important, but we would argue that the EA process is not obviously the best way of addressing these concerns about the fair distribution of resources for scientific study.
Although EA, in many jurisdictions, might currently provide a conduit for the support of existing science-based environmental study, such funding is extremely unlikely ever to become an efficient means of supporting truly competing scientific viewpoints. The level of such stakeholder funding typically ranges only up to tens of thousands of dollars at most, and thus is not likely to provide enough money for genuinely new scientific research that can contend with the resources available to governments and businesses. Perhaps the growing strength of scientifically savvy private advocacy groups could generate the level of funds needed to produce the kind of new science required to compete with governments and businesses. It would certainly be an advantage to them if our society abandoned the technocratic vision of science that supports the public perception that only government-sponsored assessments can represent truly unbiased scientific input on contentious proposed activities. Thus, our vision would not worsen the distribution of power when it comes to the patronage of curiosity-driven science. Lee notes that once we abandon the concept of neutral, nonpartisan experts working in the public interest, ‘‘then the playing field is level: no one is more legitimate than anyone else, even though different institutional positions still constitute different roles’’ (Lee, 1993, p. 96).
CONCLUSION
In the end, it is ordinary democratic decision-making processes, and not science, that governments should rely upon to reach environmentally sound conclusions about public policy. In these processes, scientists should play only a supporting role to contending ethical and political viewpoints. Traditional democratic decision-making processes have an inherent efficiency when it comes to the inclusion of truth-claims, including scientific claims, in the presentation of an overall argumentative position: only those claims that can help support one's position, and that can be compellingly presented to an audience, will be sought out and included. When we insist instead on putting scientists off in their own distinct arena, as in Bacon's ‘‘Solomon's House,’’ from which they will emerge as a third party to decide disputes, we not only inevitably encourage the inclusion of much largely irrelevant science, but we also risk creating the illusion that it is really the scientists, and not the contending evaluative interest groups, who are most critical to resolving an issue.
REFERENCES
Agnolin, J. and K. Loverock (2002), ‘‘Scientists Criticize Endangered Species Bill.’’ Alternatives, 28.1 (Winter), p. 4.
Bacon, F. (1942), ‘‘New Atlantis,’’ in G. S. Haight (ed.), Francis Bacon: Essays and New Atlantis, Roslyn, NY: Walter J. Black, Inc.
Bacon, F. (1966), ‘‘Thoughts and Conclusions,’’ in B. Farrington (ed., trans.), The Philosophy of Francis Bacon, Chicago: University of Chicago Press.
Burman, J. (2004), ‘‘Road Opposition Shelved.’’ Hamilton Spectator, July 29, A4.
Busch, L. (2000), The Eclipse of Morality: Science, State and Market, New York: Aldine de Gruyter.
Canadian Council of Resource and Environment Ministers (1988), Environmental Assessment in Canada. Canadian Council of Resource and Environment Ministers (CCREM).
Carson, R. (1962), Silent Spring, Boston: Houghton Mifflin.
Chen, I. (2006), ‘‘Born to Run.’’ Discover, May, pp. 63–67.
Collingridge, D. and C. Reeve. (1986), Science Speaks to Power: The Role of Experts in Policy Making, London: Frances Pinter Publishers Ltd.
Doern, B. and T. Reed (2000), Risky Business: Canada's Changing Science-Based Policy and Regulatory Regime, Toronto: University of Toronto Press.
Drengson, A. R. (1984), ‘‘The Sacred and the Limits of the Technological Fix.’’ Zygon, 19 (September), pp. 259–275.
Ellul, J. (1964), The Technological Society, New York: Random House.
Environmental Assessment Act, RSO 1990, c. E.18.
Estrin, D. and J. Swaigen (1993), Environment on Trial: A Guide to Environmental Law and Policy, 3rd edn. Emond Montgomery and Canadian Institute for Environmental Law and Policy.
Ezrahi, Y. (1984), ‘‘Science and Utopia in Late 20th Century Pluralist Democracy,’’ in E. Mendelsohn and H. Nowotny (eds.), Nineteen Eighty-Four: Science between Utopia and Dystopia, Dordrecht: D. Reidel Publishing Company, pp. 273–290.
Government of Canada (1980), ‘‘Environmental Assessment Panels: What They Are, What They Do.’’ Environmental Assessment Review. Minister of Supply and Services Canada. Cat. No. En 105–14/1980.
Hajer, M. A. (1995), The Politics of Environmental Discourse: Ecological Modernization and the Policy Process, Oxford: Clarendon Press.
Jasanoff, S. (2004), ‘‘Essay Review of Science, Truth, and Democracy by Philip Kitcher: What Inquiring Minds Should Want to Know.’’ Studies in History and Philosophy of Science, 35, pp. 149–157.
Kitcher, P. (2001), Science, Truth, and Democracy, New York: Oxford University Press.
Lee, K. N. (1993), Compass and Gyroscope, Washington, D.C: Island Press.
Leiss, W. (1994), The Domination of Nature, Montreal-Kingston: McGill-Queen's University Press.
Lewontin, R. C. (1991), Biology as Ideology, Concord, Ontario: House of Anansi Press.
McGinn, R. E. (1991), Science, Technology and Society, Englewood Cliffs, N.J: Prentice-Hall, Inc.
Middendorf, G. and L. Busch. (1997), ‘‘Inquiry for the Public Good: Democratic Participation in Agricultural Research.’’ Agriculture and Human Values, 14, pp. 45–57.
Mooney, C. (2005), The Republican War on Science, New York: Basic Books.
Morrison, S. (2004), ‘‘Bad Air May Harm Unborn.’’ Hamilton Spectator, May 14, A1, A9.
Mumford, L. (1964), ‘‘Authoritarian and Democratic Technics.’’ Technology and Culture, 5, pp. 1–8.
Nolan, D. (2004), ‘‘Tree Sitters Called ‘Real Heroes’.’’ Hamilton Spectator, August 6.
Oreskes, N. (2004), ‘‘Science and Public Policy: What's Proof Got to Do with It?’’ Environmental Science and Policy, 7, pp. 369–383.
Pielke, R. A., Jr. (2004), ‘‘When scientists politicize science: making sense of controversy over The Skeptical Environmentalist.’’ Environmental Science and Policy, 7, pp. 405–417.
Salter, L. (1988), Mandated Science: Science and Scientists in the Making of Standards, Dordrecht: Kluwer Academic Publishers.
Sarewitz, D. (1996), Frontiers of Illusion: Science, Technology and the Politics of Progress, Philadelphia: Temple University Press.
Sarewitz, D. (2004), ‘‘How Science Makes Environmental Controversies Worse.’’ Environmental Science and Policy, 7, pp. 385–403.
Sarewitz, D. (2006), ‘‘Liberating Science from Politics.’’ American Scientist, 94, pp. 104–106.
Shrader-Frechette, K. S. (1991), Risk and Rationality: Philosophical Foundations for Populist Reforms, Berkeley: University of California Press.
Somers, C. M., B. E. McCarry, F. Malek, and J. S. Quinn (2004), ‘‘Reduction of Particulate Air Pollution Lowers the Risk of Heritable Mutations in Mice.’’ Science, (14 May), pp. 1008–1010.
Statistics Canada, Environment Accounts and Statistics Division (2000), Expenditures on Environmental Protection by Industry and Activity (Catalogue no. 16F0006XIE). On-line database. Statistics Canada.
Winner, L. (1977), Autonomous Technology: Technics-Out-Of-Control as a Theme in Political Thought, Cambridge: MIT Press.
World Commission on Environment and Development (1987), Our Common Future, Oxford: Oxford University Press.
Contemporary Studies and Philosophy
Wilfrid Laurier University, Brantford
73 George Street, Brantford, ON, Canada N3T 2Y3,
Phone: +1-519-7568228 ext 5705 E-mail: shaller@wlu.ca
Department of Philosophy and Religious Studies
Cape Breton University
1250 Grand Lake Road, Sydney, NS, Canada B1P 6L2,
Phone: +1-902-5631238 E-mail: jim_gerrie@capebretonu.ca
