If we can establish that there is a consensus among the community of qualified experts, then we may also want to ask:
Do the individuals in the community bring to bear different perspectives? Do they represent a range of perspectives in terms of ideas, theoretical commitments, methodological preferences, and personal values?
Have different methods been applied and diverse lines of evidence considered?
Has there been ample opportunity for dissenting views to be heard, considered, and weighed?
Is the community open to new information and able to be self-critical?
Is the community demographically diverse in terms of age, gender, race, ethnicity, sexuality, country of origin, and the like?
This latter point needs further explication. Scientific training is intended to eliminate personal bias, but all the available evidence suggests that it does not and probably cannot. Diversity is a means to correct for the inevitability of personal bias. But what is the argument for demographic diversity? Isn’t the point really the need for perspectival diversity?
The best answer to this question is that demographic diversity is a proxy for perspectival diversity, or, better, a means to that end. A group of white, middle-aged, heterosexual men may have diverse views on many issues, but they may also have blind spots, for example, with respect to gender or sexuality. Adding women or queer individuals to the group can be a way of introducing perspectives that would otherwise be missed.
This is the essential point of standpoint epistemology, raised particularly by the philosopher Sandra Harding (chapter 1). Our perspectives depend to a great extent on our life experience, so a community of all men—or all women for that matter—is likely to have a narrower range of experience and therefore a narrower range of perspectives than a mixed one. Evidence from the commercial world supports this point. Studies of gender diversity in the workplace show that adding women to leadership positions increases company profitability—but only up to a point. That point is about 60%. If a company’s leadership becomes all or nearly all female, then the “diversity bonus” begins to decline, as indeed, if the argument here is correct, it should.200
It may not always be easy to answer the questions posed above in the affirmative, but it is often obvious if the answer to any of them is negative. Moreover, we may (and likely will!) identify individuals in the community who are arrogant, closed-minded, and self-important, but on the social view of epistemology the behavior of any particular individual is not what matters. What matters is that the group as a whole includes enough diversity and maintains sufficient channels for open discussion that new evidence and new ideas have a fair chance of a fair hearing.
The philosopher Heather Douglas has argued that when the consequences of our scientific conclusions are non-epistemic—i.e., when they are moral, ethical, political, or economic—it is almost inevitable that our values will creep into our judgments of evidence.201 (Liberals, for example, may have been quicker to accept the scientific evidence of climate change, because they were more comfortable with its implied consequences for government intervention in the marketplace.) Therefore, the more socially sensitive the issue being examined, the more urgent it is that the community examining it be open and diverse.
But sometimes an issue that appears to be purely epistemic isn’t, and scientists may claim they are evaluating an issue on solely epistemic grounds even when they are not.202 This suggests that no matter what the topic, it behooves the scientific community to pay attention to diversity and openness in its ranks and to remain open to new ideas, particularly when they are supported by empirical evidence or novel theoretical concepts. It means, for example, that in considering dissenting views in grant proposals or peer-reviewed papers, it is probably better to err on the side of tolerance rather than critique. Many scientists consider it extremely important to be intellectually tough, if not actually rough, but sometimes toughness can have the unintended effect of shutting down colleagues, particularly those who are young, shy, or inexperienced. It is important to be tough, but it may be more important to be open.
In chapter 1, I argued that the advocates of the Extended Evolutionary Synthesis are receiving a thorough hearing, if not always a polite one. Something similar happened to Alfred Wegener. He was not a neglected genius: his papers were published in peer-reviewed journals and his work had a hearing—albeit not always a gracious one. The socialist opponents of eugenics likewise got their manifesto published in Nature.203 None of these dissenters was “shut down” by the scientific hierarchies of their day.
Fast forward to the present: Many AIDS researchers lament Peter H. Duesberg, the University of California molecular biologist who does not accept that AIDS is caused by a virus. By his own account he has “challenged the virus-AIDS hypothesis in the pages of such journals as Cancer Research, Lancet, Proceedings of the National Academy of Sciences, Science, Nature, Journal of AIDS, AIDs Forschung, Biomedicine and Pharmotherapeutics, New England Journal of Medicine and Research in Immunology.”204 Whether he is right or wrong, tolerated or vilified, the fact is that he has had a hearing, and at the highest level of American and international science. His colleagues have not shut him down; they have published his work and considered his arguments. But they remained unconvinced.205 There is a difference between being suppressed and losing a debate. Sometimes a “skeptic” is just a sore loser.
CODA
Values in Science
Some people worry that overconfidence in the findings of science or the views of scientists can lead to bad public policy.1 I agree: overemphasizing technical considerations at the expense of social, moral, or economic ones can lead to bad decisions.2 But this does not bear on the question of whether the science involved is right or wrong. If a scientific matter is settled and the scientific community that has settled on it is open and diverse, then it behooves us to accept that science and then decide what (if anything) to do about its implications.
This, at least, is what nearly every scientist I know would say. It is something that in the past I have said. It actualizes the classic fact/value distinction: the idea that we can identify facts and then (separately) decide what if anything to do about them based on our values. But as an empirical matter this strategy is no longer working (if it ever did), because most people do not separate science from its implications.3 Many people reject climate science, for example, not because there is anything wrong with that science, qua science, but because it conflicts—or is seen as conflicting—with their values, their religious views, their political ideology, and/or their economic interests.4 There are many reasons people may reject or be critical of scientific findings, but often it involves the perception that these findings contradict their values or threaten their way of life.
In the 1960s, many people on the political left criticized science because of its uses in warfare.5 Today, many on the right critique it because of the way in which it exposes flaws in contemporary capitalism and the American way of life.6 In a discussion of anthropogenic climate change prior to the 1992 Earth Summit in Rio de Janeiro, US president George H. W. Bush insisted that “the American way of life is not up for negotiation. Period.”7 The president signed the UN Framework Convention on Climate Change—the international treaty that emerged from the Rio meeting—and promised to act upon it. Yet, at the same time, he identified a clash between the implications of the findings of environmental science and the highly consumptive American lifestyle. At least some environmentalists were blaming that lifestyle for environmental ills and thus wanting to change it. This pattern persists and helps to explain why Republicans are so much more skeptical of climate science than Democrats.8 Indeed, it is the only thing that explains why some conservatives insist that proposals to act on climate change are anti-democratic, anti-American, and/or anti-freedom.9
If we ask scientists, “Why do evangelical Christians reject evolutionary biology?” many would answer that it is because they read the Bible literally, insisting that God created the Earth and everything on it in six days. But as Catholic evolutionary biologist Kenneth Miller has pointed out, evangelical arguments against evolutionary theory rarely involve literal interpretations of the Bible. In fact, they rarely involve biblical exegesis at all.10 Rather, they invoke the perceived moral (or amoral) implications of a theory that says humans arose by chance, the outcome of a nonpurposeful, random process. Former Pennsylvania senator and two-time presidential candidate Rick Santorum, for example, has explained that he rejects the concept of evolution by natural selection because it makes humans into “mistakes of nature,” and in doing so obliterates the basis for morality.11 Other anti-evolutionists argue that if evolution is true, then life has no meaning.
Scientists attempt to escape the sting of these extra-scientific considerations by retreating into value-neutrality, insisting that while our science may have political, social, economic, or moral implications, the science itself is value-free.12 Therefore, values are not legitimate grounds on which to reject it. Gravity doesn’t care if you are a Republican or Democrat. Acid rain falls on both organic farms and golf courses. Radiative transfer in the atmosphere functions today just as it did before the last election.
This argument is true but insufficient, because, whether or not they should, our audiences link science to its implications. Evangelical Christians reject evolutionary science because they believe it contradicts their religious beliefs. Evangelical free-marketers reject climate science because it exposes contradictions in their economic worldview. And because of these contradictions, they distrust the scientists responsible for them. This is hard to get around, particularly when we acknowledge that there is no singular scientific method that warrants the veracity of scientific conclusions, and that science is simply the consensus of relevant experts on a matter after due consideration.
A view of scientific knowledge as the consensus of experts inevitably brings us to the question of who scientists are and on what basis they should be trusted. Scientists typically consider such questions to be ad hominem and therefore illegitimate. But if we take seriously the conclusion that science is a consensual social process, then it matters who scientists are.13
It is beyond the scope of this volume to consider what social scientists have discovered as to how trust is created and sustained, but one thing we know is that it is easier to establish trust among people with shared values than among those without them.14 Yet values are precisely what scientists, by and large, decline to discuss. Like the question of who the scientist is, the question of what scientists believe in—other than science itself—has been considered to be off-limits. When their objectivity or integrity is questioned, scientists characteristically retreat, as sociologist Robert Merton noted decades ago, into the “exaltation of pure science,” insisting that their only motivation is the pursuit of knowledge.15 Whatever the implications of scientific findings, they insist that the enterprise itself is value-neutral.
Merton thought this made sense: he believed that public trust in science was directly linked to the perception of its independence from outside interests—what he called “extra-scientific considerations.” This is the reason that scientists defend the “pure science ideal.”16
One sentiment which is assimilated by the scientist from the very outset of his training pertains to the purity of science. Science must not [in this view] suffer itself to become the handmaiden of theology or economy or state. The function of this sentiment is to preserve the autonomy of science. For if such extra-scientific criteria of the value of science as presumable consonance with religious doctrines or economic utility or political appropriateness are adopted, science becomes acceptable only insofar as it meets these criteria. In other words, as the “pure science sentiment” is eliminated, science becomes subject to the direct control of other institutional agencies and its place in society becomes increasingly uncertain.… The exaltation of pure science is thus seen to be a defense against the invasion of norms that limit directions of potential advance and threaten the stability and continuance of scientific research as a valued social activity.17
For Merton, value-neutrality as a regulative ideal was important not only in helping scientists to maintain objectivity in their research practices, but also in helping to maintain the public perception of them as fair, objective, and dedicated to the pursuit of truth (as opposed to the pursuit of power, money, status, or anything else). To the extent that scientists were seen to be pursuing goals other than scientific ones, people could and likely would distrust them. Merton therefore thought scientists right to defend the pure science ideal, and he would likely have been dismayed by the ways in which contemporary scientific leaders promote economic utility as the primary rationale for research, universities aggressively pursue private sector support for “basic” research, and individual scientists embrace private profit as a motive in creating biotech start-ups and other entrepreneurial endeavors.
But Merton was a sociologist, not a historian, and his full-throated defense of the pure science ideal sits in tension with the historical reality that scientists have always had patrons with motivations of their own, motivations that only rarely involved the pursuit of knowledge for its own sake. Seen this way, the idea of science as a value-neutral activity is a myth.18 Utility—economic or otherwise—has long been a justification for the support of science, in terms of both finance and cultural approbation. Historian John Heilbron has demonstrated that the Catholic Church supported astronomy in the Middle Ages because it needed astronomical data to determine the date of Easter.19 Merton himself was known for the argument that modern science thrived—and took its contemporary form—in seventeenth-century England because its emphasis on utility resonated with dominant Puritan values.20 In my own work I have shown how the US Navy supported basic research in oceanography during the Cold War for its value in anti-submarine warfare and subsurface surveillance.21
Biology has long been supported by governments and philanthropists because of its value in medicine and public health; geology for finding useful resources (and also for its power to deepen our appreciation of God); physics for its use in technology; climate science for its relation to weather forecasting.22 To suggest that utility has no place in the motivation for science is to ignore centuries of history. And utility is inescapably linked to values: health, prosperity, social stability, and the like. To say that something is useful is to say that we value it, or that it preserves, protects, or fosters something that we value.
If science as an enterprise is not value-neutral, neither are scientists as individuals. No one can be truly value-neutral, so when scientists claim that they are, it comes across as false, for they are claiming the impossible. Unless we accept them as idiot savants or naïfs, we may come to see them as dishonest. Yet, honesty, openness, and transparency are said to be key values in scientific research. How can scientists be honest and at the same time deny that they have values? Scientists generate a contradiction at the root of their enterprise if, while insisting on its honesty, they mislead their audiences (even if unintentionally) about its character.
It may be objected that scientists are not claiming that they have no values, but only that they do not allow those values to influence their scientific work. That is a claim that is impossible to prove or disprove, but one that both social science research and common sense suggest is unlikely to be true. This leads us to a further point, one that somehow has escaped serious consideration but which may be at the heart of the distrust of science felt by many Americans. To say that science is value-neutral is more or less equivalent to saying that it has no values—at least none other than knowledge production—and this can elide into the implication that scientists have no values. Clearly this is not the case, but scientists’ reluctance to discuss their values can give the impression that their values are problematic—and need to be hidden—or perhaps that they have no values at all. And would you trust a person who has no values?
In chapter 2, I posed the question: What are the relative risks of ignoring scientific claims that turn out to be true versus acting on claims that later turn out to have been incorrect? The answer to this question depends on values. As Erik Conway and I showed in Merchants of Doubt, fights about climate science have for the most part been fights about values. Some influential people in the 1980s and 1990s believed that the political risks of government intervention in the marketplace were so great as to outweigh the risks of climate change, and so they discounted, disparaged, or even denied the scientific evidence of the latter. As these positions were adopted by libertarian think tanks, and then aligned with the Republican Party, it became normal for Republicans to engage in climate change denial, either actively or passively. Then climate change skepticism became normal for anyone suspicious of “big government,” which meant many business people, older men, evangelical Christians, and people living in rural America. As the evidence of climate change was accumulating all around them, skeptics insisted that even if the climate were changing, it was not serious, or it was not “caused by us.” Because if it were serious and we had caused it, then we would have to do something about it, and that something in some way or another would involve governance. Thus, climate change denial became normalized in American life, and with it the denial of evidence and ultimately of facts. This is a deeply troubling state of affairs, but the values that have underpinned climate denial cannot be summarily rejected as wrong or false.23
We can argue about the relative merits of government large and small, and the relative risks of under- versus over-regulation of markets, but any such argument will be (at least in part) value-driven. If we are to have such conversations honestly, then we must talk about our values. Different people may view the same risk differently, but this does not mean they are stupid or venal. The scientific evidence of anthropogenic climate change is clear—as is the evidence that vaccines do not cause autism and that flossing your teeth is beneficial—but our values lead many of us to resist accepting what the evidence shows.