An interesting recent case is the “extended evolutionary synthesis” (EES) concept, which challenges the primacy of genetic control in inheritance and calls for increased attention to developmental plasticity, environmental modification by organisms (including niche construction), epigenetics, and social learning.111 Some advocates of EES have been disturbed by resistance they have encountered among “traditionalists” in the evolutionary biology community, who argue that the existing evolutionary synthesis is adequate and no extension is needed.112 The ensuing arguments have sometimes become hostile and personal.113 To a historian familiar with past major debates in science, it is not surprising that there is resistance to new ideas that threaten the stability of past scientific achievements or the social position of their adherents, nor that this resistance at times gets heated.114 When people’s life work is being questioned, they may get testy. No one likes to be told that they are wrong. The important question here is whether the advocates of EES have been able to publish their views in respected journals and to obtain funding for their research. The answer is yes. Despite the flaring of tempers, the evolutionary biology community as a whole has proved open to the introduction of new ideas and the critical interrogation of old ones.
A second caveat is that my argument is by no means a call for blind or blanket trust, much less a slavish adherence to scientists’ recommendations on non-scientific matters. It is a call for informed trust in the consensual conclusions of scientific communities, but not necessarily in the views or opinions of individual scientists, particularly not when they stray outside their domains of expertise. Indeed, the track record of scientists outside their specialties is not particularly impressive. One need only think of physicist-mathematician John von Neumann claiming in the 1950s that within a few decades nuclear energy would be as free as the “unmetered air,” or physicist William Shockley’s insistence that African Americans were genetically inferior to whites and should be paid to undergo “voluntary” sterilization.115 Wernher von Braun thought that by the year 2000, the first child would have been born on the moon.116 Physical scientists, particularly in the United States, have tended to be technofideists, exaggerating the rate at which new technologies would be developed or the degree to which they would improve our lives. Both physical and life scientists have an unhappy record of insensitivity to social and ethical concerns, as witnessed by the widespread support among biologists in the early twentieth century for eugenics programs that in hindsight appear both scientifically erroneous and morally noxious (see chapter 2). Outside their domains of expertise, scientists may be no better informed than ordinary people. Indeed, they may be less so, as their intense training in one area can leave them undereducated in others.117
The claim that scientists have expertise in particular domains is not, moreover, to insist that this expertise is exclusive. Many lay people—farmers, fishermen and women, patients, midwives—have expertise in their particular domains.118 Patients may have considerable understanding of the progression of their disease or the side effects of pharmaceuticals; midwives may be able to recognize problems in pregnancies as well as or better than some obstetricians. There was extensive scientific knowledge in India before the arrival of the British, particularly about matters that the British would label “natural historical” (though locals might not have labeled them that way).119
We have a considerable literature on indigenous expertise: the knowledge that both lay people and experts may have about plants, animals, geography, climate, or other aspects of their natural environments and communities. In recent decades we have come to understand more fully the empirical knowledge systems that have developed outside of what we conventionally call “Western science”—what anthropologist Susantha Goonatilake has called “civilizational knowledge.” These systems may involve highly developed expertise, and may be quite effective in their realms.120 For example, Traditional Chinese Medicine (TCM), acupuncture, and Ayurvedic medicine can be efficacious in treating certain diseases and conditions for which Western medicine has little to offer.121 Civilizational knowledge traditions have authority in their regions of origin by virtue of track records of success, and in some cases (e.g., acupuncture) have demonstrated efficacy beyond those regions as well. Moreover, the study of civilizational knowledge has highlighted the values embedded in Western science that often go unrecognized or are even denied by its practitioners.122
There are also lay knowledge traditions based on sustained empirical and analytical engagement with the world. Hunter-gatherer societies, for example, typically have detailed empirical knowledge of plant distributions and animal migrations; anthropologist Colin Scott has demonstrated that Cree hunting traditions are highly empirical, and argues that they are therefore rightly viewed as scientific.123 Where lay knowledge overlaps with scientific knowledge, one should not assume that the latter is necessarily superior to the former.124 We know, for example, that Polynesian navigators were far more effective in plying the Pacific than their European counterparts until at least the time of Cook in the late eighteenth century.125
There is an important distinction to be made here: respecting indigenous, lay, and “Eastern” knowledge that has demonstrated empirical adequacy or clinical efficacy is a very different thing from accepting popular claims that are ignorant, erroneous, or represent motivated disinformation. The claims of an actress that vaccines cause autism, or of an oil executive that recently observed climate change has been caused by sunspots, do not come out of established knowledge traditions; the individuals promoting them do not have a credible claim to expertise. An actress is not an immunologist; a petroleum industry CEO is not a climate scientist. And in these particular cases, we have abundant empirical evidence that their claims are untrue. The claim that climate change is caused by sunspots has had its day in scientific court: it has been vetted by evidence and shown to be incorrect.126 Autism is no more common among children who have been vaccinated than among those who have not.127 Respecting alternative knowledge traditions does not mean that we suspend judgment, either about those traditions or about our own.
It is also important to distinguish between the scientific and the normative questions that get mooted in contemporary society. To be sure, the interrelations between the various sciences and the politics, economics, and morality that surround and embed them are often complex, intercalated, and not easily disentangled; some scholars have argued that they cannot be disentangled.128 I believe that, however imperfectly, we can distinguish between the scientific and normative aspects of many questions—and that we continue to need to. Whether man-made climate change is underway is a different sort of question from what we should do about it; I may have reasons for declining vaccination that have nothing to do with its alleged relation to autism.129 These distinctions matter, because if I understand that some of my fellow citizens reject vaccinations on religious grounds, I may respect that opinion without succumbing to the fallacy that vaccines cause autism; depending on my own religious views, I might join them or I might not. Similarly, I can respect the fact that many people have had adverse reactions to pharmaceuticals and know that iatrogenic illness is a real thing, without accepting the allegation that it is the drug AZT, rather than a virus, that causes HIV/AIDS.130 Pope Francis rejects genetically modified organisms as an inappropriate interference with God’s domain; if I were Catholic I might choose to follow his views irrespective of the scientific evidence as to whether those products are safe to eat.131 Distinctions between the scientific and the social matter, because they rightly affect our choices, and because they help us to distinguish between arguments that may be persuasive to our audiences and arguments that are doomed to fail because they do not address their underlying concerns.
Comte argued long ago that the basis for the success of science was experience and observation. We now know that that is only part of the story, albeit an important part. Nevertheless, we can use this argument to remember that the basis for our trust in science is, in fact, experience and observation—not of empirical reality, but of science itself. It is what Comte argued long ago: that just as we can only understand the natural world by observing it, so we can only understand the social world by observing it. When we observe scientists, we find that they have developed a variety of practices for vetting knowledge—for identifying problems in their theories and experiments and attempting to correct them. While these practices are fallible, we have substantial empirical evidence that they do detect error and insufficiency. They stimulate scientists to reconsider their views and, as warranted by evidence, to change them. This is what constitutes progress in science.
Coda: Why Not the Petroleum Industry?
We can now answer the question of ex ante trust raised at the outset of this chapter. Why should the conclusions of climate scientists about climate change be viewed ex ante as more authoritative than those originating from the petroleum industry? Or arguments about cancer and heart disease from the tobacco industry? Or about diabetes or obesity from Coca-Cola?132
The answer is simple: conflict of interest. The petroleum industry exists to explore for, find, develop, and sell petroleum resources, and by doing so to make a profit and return value to shareholders. It relies heavily on science and engineering to do this, and company scientists and executives have considerable expertise in the domains of sedimentary geology, geophysics, and petroleum and chemical engineering, as well as sales and marketing. But recent scientific findings about the reality and severity of anthropogenic climate change—and the role of greenhouse gases derived from fossil fuel combustion in driving it—threaten not only the industry’s profitability, but even its existence. The fossil fuel industry as we know it is fighting for its survival. Rather than accept the necessity of change, certain elements in the industry have misrepresented the scientific evidence that demonstrates that necessity.133 Exxon Mobil may be a reliable source of information on oil and gas extraction, but it is unlikely to be a reliable source of information on climate change, because the former is its business and the latter threatens it.134
We may say the same about the tobacco industry. For decades, the tobacco industry refused to accept the scientific evidence that tobacco products caused cancer, heart disease, bronchitis, emphysema, and a host of serious conditions and fatal diseases, including sudden infant death syndrome. It worked to challenge, discredit, and suppress known information, and it paid scientists to engage in research that was in other respects legitimate, but whose purpose (from the industry standpoint) was to distract attention from the adverse effects of tobacco use. The chemical industry has done much the same with respect to pesticides and endocrine disrupting chemicals; in recent years we have seen some of the same strategies and tactics taken up by elements of the processed food industry.135 The tobacco, processed food, and chemical industries face an essential conflict of interest when discussing scientific results that bear on the safety, efficacy, or healthfulness of their products. They are not engaged in good faith in the open, critical, and communal vetting of evidence that is crucial for the determination of the reliability of scientific claims. This is why, ex ante, we have reason to distrust them.
This is not to say that an individual scientist or team of scientists is necessarily discredited simply because they work in or for a potentially conflicted industry or have received funding from it. Scientists within an industry may participate in the scientific enterprise by doing research and submitting it for publication in peer-reviewed journals, and there are many fine examples of this, particularly in the early twentieth century when many corporations ran large industrial research laboratories. (Full disclosure: my own PhD work was partly funded by the mining company for whom I worked before going to graduate school, and this was disclosed in my relevant publications.)
When industry-funded scientists attend conferences and publish in peer-reviewed journals, they are acting as parts of scientific communities, participating in the norms of those communities and subjecting themselves and their work to critical scrutiny. As long as they do—so long as the norms of critical interrogation are operating and conflicts of interest are forthrightly disclosed and where necessary addressed—these scientists may well make fine contributions.136
But it is scarcely a secret that the goals of profit-making can collide with the goals of critical scrutiny of knowledge claims. We know from history that industrial research can be of high quality, but we also know that it exists—and is subject to external scrutiny—at the discretion of the industrial sponsor. Excellent research has emerged from the precincts of American business and industry, but so has disinformation, misrepresentation, and distraction. Science done within industries has won Nobel prizes; it has also been subject to suppression and distortion. Moreover, as Robert Proctor, Allan Brandt, David Rosner, Gerald Markowitz, Marion Nestle, Erik Conway, and I have documented, a substantial amount of industry research has been designed to be a distraction.137 Empirical reality tells us that we are right to be suspicious when the petroleum industry makes claims about climate science or the soda industry offers up nutritional claims, just as we should have been suspicious when the tobacco industry told us that Luckies were good for us and Camels would aid our digestion.138
The checkered history of scientific research in American industry that was designed to distract, confuse, and/or misinform also helps us to address one of the more nefarious strategies of industry—doubt-mongering: the claim that they are instantiating the spirit of scientific inquiry when they pose skeptical questions and that it is scientists who are being dogmatic. This is an intellectually noxious move, because it takes the strength of science and turns it into a weakness, and falsely imputes scientific motives to activities that are intended to undermine science. Moreover, when scientists are unfairly attacked, they may become defensive and therefore less open to warranted critique than they should be. In this regard doubt-mongering is doubly damaging: it undermines public trust in science and it has the potential to undermine science itself.
The processes of critical interrogation rely on an assumption of good faith: that participants are interested in learning and have a shared interest in truth. It assumes that the participants do not have an intellectually compromising conflict of interest. When these assumptions are violated—when people use skepticism to undermine and discredit science rather than to revise and strengthen it, and to confuse audiences rather than to inform them—the entire process is disabled.139 It can lead scientists to want to shut down criticism rather than embrace it. After all, it is challenging to maintain a spirit of openness in the face of dishonesty. The critics of science do not strengthen it, as they sometimes claim; they damage it.
For this and many other reasons, there is no guarantee that the methods of scientific scrutiny will operate as intended. In the next chapter, I examine historical examples where, in hindsight, we may say that scientists went astray. We will see what we may be able to learn from those examples as to when we are justified in not trusting science. But for the purposes of the present argument, the key point is this: We have an overall basis for trust in the processes of scientific investigation, based on the social character of scientific inquiry and the collective critical evaluation of knowledge claims. And this is why, ex ante, we are justified in accepting the results of scientific analysis by scientists as likely to be warranted.
Chapter 2
SCIENCE AWRY
If you google “How old is the Earth?” the first answer you will be offered is 4.543 billion years old. This is the accepted scientific value based on radiometric dating of asteroids and lunar materials. You will find it affirmed if you visit the web page of NASA, the US Geological Survey, or the Encyclopedia Britannica. It has been more or less in place for half a century and most educated Americans accept it as factual. It is what any mainstream earth science professor or teacher would teach, and what you will find in any college earth science textbook. However, if you scroll down on your computer you will also find:
How old is the earth?—creation.com
creation.com/how-old-is-the-earth
Creation Ministries International
The answer offered by Creation Ministries International, based upon biblical exegesis, is about six thousand years. If we were to judge a knowledge claim by the antiquity of its provenance, we would have to judge this claim to be more stable than the accepted scientific one, because it has been around since the mid-seventeenth century. Similarly, if we were to define authority as the ability to drive out competing claims, then the authority of the scientific value is clearly by no means total. This is not merely the case for the age of the Earth. If we look for answers about climate change, the safety of vaccinations, whether plate tectonics is an accurate and adequate theory of global tectonics, or whether drinking water fluoridation prevents cavities, we will find many claims competing for our attention.
Some of these claims are simply unscientific—which is to say not based on vetted evidence—while others have been shown by evidence to be false. Yet they persist. Indeed, the fragile status of facts—both scientific and social—is now so widely acknowledged that Oxford Dictionaries declared the 2016 word of the year to be “post-truth.”1 Comedian Stephen Colbert complained that this was a rip-off of his earlier neologism “truthiness.”2