T. S. Kuhn and the Emergence of Science Studies
Thomas Kuhn’s point of entry was to hoist the empiricists on their own petard: to assert that the empiricists have not been sufficiently empirical about science itself. His own work was grounded in the history of science through his early study of the Copernican Revolution—the topic of his first book—and his work at Harvard with James Conant developing a set of educational modules known as the Harvard Case Histories in Experimental Science.61 But Kuhn was also deeply engaged with arguments in philosophy of science and had read both Fleck and Quine, as well as works of the Vienna Circle.62
One of the central points of Kuhn’s Structure of Scientific Revolutions was the same as Fleck’s: scientists do not work alone but rather in communities that share not just theories about empirical reality—such as the theory of relativity or the theory of evolution by natural selection or the theory of plate tectonics—but also values and beliefs about how their science should operate. Together with models of exemplary scientific accomplishment (“exemplars”), these theories, values, and intellectual and methodological commitments collectively constitute the “paradigm” under which the community operates. This community aspect is paramount: in a 1979 foreword to the first English translation of Fleck, Kuhn stressed that in the contemporary scientific world, a person working alone is more likely to be dismissed as a crank than accepted as a maverick.63
Most of the time, scientists do not question their paradigms; they work within them, solving problems and answering questions that the framework identifies as relevant. Kuhn called this “normal science” and asserted that its principal activities were a form of puzzle solving. Contra Popper, during normal science scientists do not attempt to refute the paradigm. In fact, they do not even question it—until a problem arises. This is where the engagement of science with reality becomes most evident: problems arise because some observation or experience of the world—some “technical puzzle”—does not fit expectation.64 Kuhn called these “anomalies.” At first, scientists will attempt to account for the anomaly within the paradigm, perhaps making some modest adjustment in it. But if the anomaly becomes too great or too glaring, or the adjustments made to accommodate it generate new problems, this creates a crisis, which opens the intellectual space for reconsideration of the paradigm. Sometimes crises are resolved within the paradigm, but when they cannot be, a scientific revolution occurs: the governing paradigm is overthrown and replaced by a new one. It is like a political revolution, insofar as the new paradigm is in effect a new form of intellectual governance, with new rules and regulations. Kuhn thus argued that science advances neither by verification nor refutation, but by paradigm shifts.
Many scientists welcomed Kuhn’s views insofar as they painted a picture of science that was recognizable to them, or at least more recognizable than the alternatives.65 But what fired up the many readers who were not scientists was a claim that most scientists probably didn’t understand and wouldn’t have liked if they had (and what distinguishes Kuhn from Fleck): that successive paradigms are incommensurable. By this Kuhn meant, literally, that there was no metric by which a new paradigm could be compared to the one it proposed to replace. As Fleck had argued, the new paradigm—like the new thought-style—was not just a shift in thinking about a particular scientific question, it was also a shift in meanings, values, priorities, aspirations, and even the self-identity of the scientist. This opened still wider the question that Quine had posed: How do scientists decide which part of their belief structure needs to be revised in light of an anomaly? How do they decide whether a small adjustment is sufficient or a scientific revolution is in order? And if the new paradigm is incommensurable with the one it proposes to replace, on what basis do scientists make the choice to accept it?
Historians and philosophers have been debating these questions ever since. Philosophers were vexed by the incommensurability claim, insofar as it seemed to reduce paradigm choice to relativism and even irrationality.66 Imre Lakatos, for example, opined that in Kuhn’s theory, the scientific revolution is “a mystical conversion which is not and cannot be governed by rules of reason.”67
Historians felt validated that Kuhn insisted on the detailed study of real science, but tended to find the incommensurability claim to be overblown, and noted that Kuhn had made a methodological error by sometimes comparing non-proximate scientific theories, such as Aristotelian physics and quantum mechanics. Yes, historians acknowledged, Aristotelian physics is inscrutable to a contemporary physicist, but there have been many intermediate steps between then and now; it does not work to try to understand the entire arc of the history of physics without tracing these intermediate steps. It would be like analyzing a relay race thinking that the baton had been thrown rather than passed.
My own view is that Kuhn was closer to the mark in his less famous earlier work The Copernican Revolution, in which he described a major scientific change as a bend in the road:
From the bend, both sections of the road are visible. But viewed from a point before the bend, the road seems to run straight to the bend and disappear.… And viewed from a point in the next section, after the bend, the road appears to begin at the bend from which it runs straight on.68
Kuhn’s work was itself a bend in the road of studies of science: away from method and toward practice; away from individuals and toward communities.69 Scholars generally agree that the largest impact of Kuhn’s work—besides adding the term paradigm shift to the general lexicon—was in helping to launch the field of science studies.
Away from Method
Philosophers from Comte to Popper attempted to identify the method of science that accounted for its success and therefore justified our acceptance of scientific claims as true—what is sometimes called “warranted true belief.” Kuhn did not exactly say that there was no method, but he did say two things that displaced method from centrality. The first was the claim that under different paradigms, methods could change. The second was that most of the time, the methods of science amounted to not much more than puzzle-solving—working out details within the paradigm without questioning the larger structure—and that seemed pretty uninteresting. Moreover, whatever the methods were, they were done by groups of people working together, not individuals working alone.
This opened the door for an expanded sociology of science that not only examined the formal institutional structures of science, as previous sociologists had done, and the norms of scientific behavior, as the famous sociologist of science Robert Merton had studied, but also addressed the epistemological question: What is the basis for scientific belief? If the intellectual action in science is in the paradigm shift, and if paradigms are incommensurable, then our traditional notions of scientific progress are clearly unsupportable. Perhaps science does not give us warranted true belief. Perhaps we should not trust science. If scientists can abandon one view and replace it with another incommensurable one, that does not inspire confidence in the idea that the processes of science necessarily provide us with a reliable view of the world. In any case, someone needs to explain the grounds on which scientists accept the claims they do.
Sociology of Scientific Knowledge and the Rise of Science Studies
Sociologists who took up the gauntlet thrown down by Kuhn called further attention to the social elements responsible for scientific conclusions, or what has come to be known as the social construction of scientific knowledge.70 While they saw themselves as epistemological radicals, they were building on what had come before, particularly Quine’s formulation of under-determination. They now asked: On what grounds do scientists decide what to believe and what to reject? How are these decisions articulated within the frameworks of scientific communities? To what degree, if any, should we respect the claims that emerge from this process?
The most influential of these early efforts came from the group of scholars we have come to know as the Edinburgh school, particularly Barry Barnes, David Bloor, and Steven Shapin. Barnes concentrated on “interests” as a driving force in theory choice. These “interests” could be professional, in the sense that the success of a favored theory would benefit the career of its promoter, or there could be an interest in a particular value set or a theory that was consistent with one’s political, religious, or ethical views.71 (In hindsight, interest theory seems oddly individualistic, but that is another matter.) Bloor insisted that the methods of science studies should be “symmetrical,” meaning that “the same types of cause would explain, say, true and false beliefs.”72 Shapin attended particularly to the interrelationship between knowledge production and social order, arguing memorably, with historian Simon Schaffer, that “solutions to the problem of knowledge are solutions to the problem of social order.”73
The arguments of the Edinburgh school were often taken to be ontologically anti-realist, and for that reason dismissed by many scientists as ridiculous.74 To be sure, some scholars wrote in a manner that suggested a disregard for, if not outright disbelief in, the significance of empirical evidence in formulating scientific knowledge. It was easy to slip from the claim that empirical evidence does not by itself determine our conclusions to the suggestion that empirical evidence plays no role. But the argument was not so much anti-realist as it was relativist: if empirical evidence cannot determine decisively what we should believe and what we should reject, it does seem to suggest that our views are framed in relation to some set of standards and concerns that cannot be deduced from, nor reduced to, empirical evidence. And if social interests and conditions play a determinative role, then our knowledge must be at least in part relative to those interests and conditions. This was a very serious challenge. As Barnes explained in the 1970s, the approach of the Edinburgh school is
sceptical since it suggests that no arguments will ever be available which could establish a particular epistemology or ontology as ultimately correct. It is relativistic because it suggests that belief systems cannot be objectively ranked in terms of their proximity to reality or their rationality.75
This was not the same as denying that our encounters with reality play a role in our convictions (much less to claim that there is no physical reality). Rather, it was to argue that the role of empirical evidence in shaping them was not nearly as determinative as most philosophers and scientists thought. Later commentators have generally allowed that the Edinburgh school was correct in stressing that evidence alone does not account for the conclusions to which scientists come.76 The question, however, was whether Edinburgh theorists were suggesting that it played little or even no role. As Barnes allowed, “Occasionally, existing work leaves the feeling that reality has nothing to do with what is socially constructed or negotiated to count as natural knowledge, but we may safely assume that this impression is an accidental by-product of over-enthusiastic sociological analysis.”77
This claim may be too generous; my own feeling is that some sociologists associated with or influenced by the Edinburgh school deliberately created this impression. When Karin Knorr-Cetina, for example, insisted in the 1980s that scientific knowledge was a “fabrication,” when Harry Collins asserted that “the natural world in no ways constrains what is believed to be,” and when Bruno Latour declared that science was “politics by other means,” these terms and phrases were clearly chosen to unsettle what the historian John Zammito has called the “ambient idolatry of science” that had prevailed under positivism.78 Moreover, by saying that “belief systems cannot be objectively ranked,” Edinburgh scholars seemed to imply that objectivity did not play the role in science that scientists typically asserted, and perhaps played no role at all. These assertions were not accidental; they were deliberate provocations.
But not all provocations are illegitimate, and the more important point, stressed recently by David Bloor, is that if we feel the need to contrast relativism with something, we should contrast it not with objectivity—which is the opposite of subjectivity—nor with truth, which is the opposite of falsehood—but with absolutism. The opposite of relative knowledge is absolute knowledge, and no serious scholar of the history or sociology of knowledge can sustain the claim that our knowledge is absolute. Nor can we sustain the claim that empirical evidence alone suffices to explain scientific conclusions. Far too much evidence refutes that hypothesis. Bloor has always been clear that he wants to be scientific in his study of science, and to be scientific about science means to take seriously the empirical evidence about the role of empirical evidence! And that empirical evidence reveals the limits of empiricism. Bloor’s point has always been that when we look at science carefully and with an open mind, we see both empirical and social factors at play in stabilizing scientific knowledge, and we cannot assume a priori which ones are more important in any given case.79
A different critique of the notion of empirical method came from the philosopher Paul Feyerabend (1924–94). Born in Vienna, Feyerabend completed a PhD in philosophy on the topic of observation sentences and spent much of his life in conversation with Karl Popper and Imre Lakatos, laying the groundwork for what might have been a career as a leading light of logical empiricism. But he later rejected not just logical empiricism, but any attempt to define or prescribe the method of science. In his most famous work, Against Method (published in 1975), he argued that there was no scientific method, nor should there be. Scientists have used a diversity of methods to good effect; any attempt to restrict this would hamper their creativity and impede the growth of scientific knowledge. Moreover, falsification as a rule is clearly falsified by the facts of history: few if any theories in the history of science ever explained all the available facts. Often scientists ignored facts that didn’t fit or didn’t seem significant, or set them aside to worry about at a later date.80 (Popper might claim that those scientists were bad scientists, but if so then most scientists have been bad scientists, including some of our most celebrated.)
Like the science studies scholars quoted above, Feyerabend embraced a deliberately provocative style, and perhaps because he described his position as “theoretical anarchism” he is often quoted as having claimed that in science “anything goes.” But that was not his claim. The actual quotation is this:
It is clear then, that the idea of a fixed method, or a fixed theory of rationality, rests on too naïve a view of man and his social surroundings. To those who look at the rich material provided by history, and who are not intent on impoverishing it in order to please their lower instincts, their craving for intellectual security in the form of clarity, precision, “objectivity,” [and] “truth,” it will become clear that there is only one principle that can be defended under all circumstances and in all stages of human development. It is the principle: anything goes.81
Feyerabend was saying that if you pressed him to define the method of science, he would have to say that anything goes—which is to say that there is no unique method or principle of science. This was not an abdication of the responsibility to demarcate science from non-science, as Popper might have argued, but a recognition that methodological and intellectual diversity characterized the history of science, and this was a good thing: it made communities stronger, more creative, more open-minded, and nicer.82 Absolutism—whether in science, politics, or anything else—was generally objectionable.83 Like Popper (and Duhem and Comte), Feyerabend believed in progress; he just disagreed about whence it came. He summarized: “Theoretical anarchism is more humanitarian and more likely to encourage progress than its law-and-order alternatives … [and the] only principle that does not inhibit progress is: anything goes.”84 When we look seriously at what scientists do, we find that they are nothing if not creative, flexible, and adaptive.
Feyerabend was a philosopher, not a sociologist, and he accepted that science was progressive in a way that most of his sociological colleagues did not. But his work did support the sociological trend emerging strongly in the 1970s of focusing on the practices of scientists—in their labs, in the field, in clinical trials. If we cannot state a priori what the method of science is (or methods are), then the only way to find out is through observation.
The person who since then has done the most in that regard is unquestionably Bruno Latour, who turned the techniques of anthropology to science and in doing so drew particular attention to the practices that scientists employ to persuade their colleagues to accept any particular claim. Latour’s great impact on the field was to establish ethnography as a key methodology in science studies, and to insist on the importance of privileging what scientists do over what they say.85 While the work that has followed in his wake defies easy summation, one thing is clear: it confirms earlier arguments about scientific methodological diversity. After the work of the Edinburgh school, of Feyerabend, of Latour and his colleagues, and of the diverse historians who have documented the ways scientific methods have changed over time, it is no longer plausible to hold to the view that there is any singular scientific method.86
This is not an entirely negative finding, but it does commit us to the conclusion that the dream of positive knowledge has truly ended.87 There is no identifiable (singular) scientific method. And if there is no singular scientific method, then there is no way to insist on ex ante trust by virtue of its use. Moreover, despite the claims of prominent scientists to the contrary, the contributions of science cannot be viewed as permanent.88 The empirical evidence gleaned from the history of science shows that scientific truths are perishable. How can we tell then if scientific work is good work or not? On what basis should we trust or distrust science?