Example 2: The Rejection of Continental Drift
In the 1920s and ’30s, American earth scientists rejected a claim that forty years later was accepted as fact.42 This was the claim that the continents were not fixed, but moved horizontally across the surface of the Earth; that these movements explained many aspects of geological history; and that the interactions of moving continents explained crucial geological features, such as the distribution of volcanoes and earthquakes. This concept came to be known as continental drift. Alfred Wegener, the prominent and respected geophysicist who proposed it, compiled a large body of empirical evidence drawn from existing geological literature.
While continental drift was not accepted at the time, there was broad consensus that existing theories were inadequate and an alternative explanation of the facts of geological history was needed. When the reality of drifting continents was accepted in the 1960s, in part based on these facts (as well as new ones that had come to light), some scientists were embarrassed to acknowledge that, not very long before, their community had rejected continental drift. In response, some suggested that continental drift had been rejected in the 1920s for lack of a mechanism to explain it. This was a plausible notion, and it was enshrined in many textbooks and even repeated by some historians and philosophers of science.43 But it wasn’t true. Several credible mechanisms had been offered at the time. None of these mechanisms was flawless—newly introduced theories rarely are—but scientists at the time had vigorous discussions about them, and some thought the mechanism issue had been resolved. American geologist Chester Longwell, for example, wrote that a model involving convection currents in the mantle—an idea that in the 1960s would be accepted as part of plate tectonics—was “a beautiful theory” that would be “epoch-making.”44
If geologists had plausible mechanisms to explain continental drift, including ones that were later accepted, then why did they reject the theory? A telling element in this story was that American geologists were far more hostile to the theory than their European or British colleagues. Many continental Europeans accepted that pieces of the Earth’s crust had moved over substantial horizontal distances; this was evident in the Swiss Alps. Some British geologists also cautiously entertained the theory; in the 1950s and ’60s many British school children learned about continental drift in their O- and A-level geology courses. But this was not the case in the United States: American geologists did not just reject the idea, they accused Wegener of bad science. This offers a rare opportunity to explore how scientists decide what constitutes good or bad science.
In debates over the theory, many American geologists explicitly raised methodological objections. In particular, they objected to the fact that Wegener had presented his theory in hypothetico-deductive form, which they considered a source of bias. Good science, they held, was inductive. Observation should precede theory and not the other way around. Edward Berry, a paleontologist at Johns Hopkins University, put it this way:
My principal objection to the Wegener hypothesis rests on the author’s method. This, in my opinion, is not scientific, but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.
Bailey Willis, chairman of the geology department at Stanford University and president of the Seismological Society of America, felt that Wegener’s books were “written by an advocate rather than an impartial investigator.” Joseph Singewald, a geology professor at Johns Hopkins University, claimed Wegener “set out to prove the theory … rather than to test it.”45 Harry Fielding Reid, a founder of modern seismology, argued that the proper method of (all) science was induction. In 1922, he wrote a review of the English translation of Wegener’s Origin of Continents and Oceans in which he described continental drift as a member of a species of failed hypotheses based on hypothetico-deductive reasoning.
There have been many attempts to deduce the characteristics of the Earth from a hypothesis, but they have all failed … [Continental drift] is another of the same type.… Science has developed by the painstaking comparison of observations and through close induction, by taking one short step backward to their cause; not by first guessing at the cause and then deducing the phenomena.46
It has sometimes been suggested that comments such as these reflect an American rejection of theory in general. But American geologists did not reject theory per se; many of them were actively involved in theory development in other domains. They did, however, have particular ideas about how scientific theories should be developed and defended. Scientific theory, they believed, should be developed inductively and defended modestly.
American geologists were suspicious of theoretical systems that claimed universal applicability and of the individuals who expounded them. One example was the “Neptunist” school, developed in the eighteenth century by Abraham Werner, which held that geological strata could be understood as the evolving deposits of a gradually receding universal ocean.47 For many American geologists, Neptunism epitomized the type of grandiosity, operating under an authoritarian leader, that Americans discerned throughout European science. On a trip to Europe, Bailey Willis met Pierre Termier, director of the French Cartographic Service, who was known for his theory of grandes nappes—the concept that large portions of the European Alps could be understood as mega-folds, created when a portion of continental crust was displaced over great lateral distances. Willis lamented that Termier was “an authority,” whose theory young geologists in France “cannot decline to accept.”48
The tone with which Willis discussed Termier explains what might otherwise be perplexing in this case. Scientists are supposed to be authorities, but the concern here was that authority can slide into arrogance, dogmatism, and intellectual authoritarianism: Termier’s authoritative status could make it difficult for others to question his theory. The spirit of critical inquiry would be suppressed and scientific progress impeded, because no one would feel free to challenge or improve upon the idea.
The American preference for inductive methodology was thus linked by its advocates to political ideals of pluralism, egalitarianism, open-mindedness, and democracy. They believed that Termier’s approach was typically European: European science, like European culture, tended toward the anti-democratic. The inductive method, American geologists argued, was the appropriate one for America because it refused to grant a privileged position to any theory, and therefore to any theorist. Deduction was consistent with autocratic European ways of thinking and acting; induction, with democratic American ways. Their methodological preferences were grounded in their political ideals.
This anti-authoritarian attitude was foregrounded by scientists who propounded the “method of multiple working hypotheses.” Popularized by the University of Chicago geologist Thomas Chrowder Chamberlin, the method was an explicit methodological prescription for geological fieldwork. According to it, the geologist should not go into the field to test a hypothesis, but should first observe, and then begin to formulate explanations through a “prism of conceptual receptivity that refracted multiple explanatory options.”49 That meant developing a set of “working hypotheses” and keeping all of them in mind as work progressed. Chamberlin compared this to being a good parent, who should not allow any one child to become a favorite. A good scientist was fair and equitable to all his working hypotheses, just as a good father loved all his sons. (Chamberlin did not discuss daughters.)
The method was also a useful reminder that in complex geological problems the idea of a single cause was often wrong: many geological phenomena were the result of diverse processes working together. It was not a matter of either/or but rather both/and; the method of multiple working hypotheses helped geologists to keep this in mind. Chamberlin thought that the bitter divisiveness that had characterized many debates in nineteenth-century geology had arisen because one side had fixed on one cause and the other side on another, rather than accepting that the right answer might be a bit of both.50 Scientists should be investigators, not advocates. Chamberlin encapsulated this idea in a paper called “Investigation vs. Propagandism.”51
At the University of Chicago, Chamberlin designed the graduate curriculum in geology specifically to train students to be “individual and independent, not [merely] following of previous lines of thought ending in a predetermined result”—his gloss of the European method. He also warned against the British system of empiricism, which he believed was “not the proper control and utilization of theoretical effort but its suppression.”52 (Chamberlin was thinking specifically of Charles Lyell’s denunciations of high theory.) The method of multiple working hypotheses was the via media between dogmatic theory and empiricist extremism that would help in the future to avoid divisive battles and factionalism.
One might wonder if this was just talk, but geologists’ field notebooks and classroom notes from the period show that the method of multiple working hypotheses was practiced. Observations were segregated from interpretation, and geologists frequently listed the various possible interpretations that occurred to them. One example is Harvard geologist Reginald Daly, an early advocate of continental drift, whose field notebooks show how he enacted Chamberlin’s prescription: he would record his observations on the left side of his notebook and, on the facing page, list a variety of possible interpretations of them. Reading Daly’s field notes, one is reminded of Richard Hofstadter’s famous claim that in the United States “a preference for hard work [was considered] better and more practical than commitments to broad and divisive abstractions.”53 It was not that American scientists were opposed to abstraction; it was that they were seeking a nondivisive approach to it. In the 1940s, when Harvard professor Marland Billings taught global tectonics, he offered his students for their consideration no fewer than nineteen different theories of mountain-making, declining to say in class which one he preferred.54

In this context, we can understand why American geologists reacted negatively to Wegener’s work: he presented continental drift as a grand, unifying theory, with the available evidence taken as confirmatory. For Americans, this was bad scientific method. It was deductive, it was authoritative, and it violated the principle of multiple working hypotheses. It was exactly what they expected from a European who wanted to be an authority.55
Americans, however, had become dogmatic in their anti-dogmatism, because in rejecting Wegener’s theory on methodological grounds, they dismissed a substantial body of evidence that in other contexts they accepted as correct. Many of Wegener’s harshest critics acknowledged this point, as when Yale geologist Charles Schuchert allowed that the supercontinent of Gondwana “was a fact” that he “had to get rid of.”56 (Schuchert’s solution was the ad hoc theory of “land bridges,” which accounted for the paleontological evidence but failed to explain the correspondences in stratigraphy that others had sedulously analyzed.) In later years, geologists would acknowledge that the evidence Wegener had marshaled was substantively correct.
Example 3: Eugenics
The history of eugenics is far more complex than the two examples we have just examined, in part because it involved a wide range of participants, many of whom were not scientists (including US president Teddy Roosevelt), and because the values and motivations that informed it were extremely diverse. Perhaps for this reason, some historians have been reluctant to draw conclusions from what nearly all agree is a troubling chapter in the history of science. But the episode has been used explicitly by climate change deniers to claim that because scientists were once wrong about eugenics, they may be wrong now about climate change.57 For this reason, I think the subject cannot be ignored, and because of its complexity I grant it more space than the preceding examples.
As is widely known, many scientists in the early twentieth century believed that genes controlled a wide range of phenotypic traits, including a long list of undesirable or questionable behaviors and afflictions: prostitution, alcoholism, unemployment, mental illness, “feeble-mindedness,” shiftlessness, the tendency toward criminality, and even thalassophilia (love of the sea), as indicated by the tendency to join the US Navy or Merchant Marine. This viewpoint was the basis for the social movement known as eugenics: a variety of social practices intended to improve the quality of the American (or English, German, Scandinavian, or New Zealand) people, practices that in hindsight most of us view with dismay, outrage, even horror. These practices were discussed either under the affirmative rubrics of “race betterment” and “improvement,” or the negative rubrics of preventing “racial degeneration” and “race suicide.”58 The ultimate expression of these views in Nazi Germany is well known. Less well known is that in the United States, eugenic practices included the forced sterilization of tens of thousands of US citizens, principally targeting the disabled, a practice affirmed in the Buck v. Bell decision, wherein Supreme Court justice Oliver Wendell Holmes, Jr., upheld the right of states to “protect” themselves from “vicious protoplasm.”59
The plaintiff in Buck v. Bell was a young woman, Carrie Buck, who had been sterilized after giving birth to a child conceived in rape. State experts in Virginia testified that Carrie, her mother, and her child were all “feeble-minded”; this was used to warrant Carrie’s sterilization, to ensure that no further offspring would be produced. Justice Holmes encapsulated the decision in his memorable conclusion: “Three generations of imbeciles are enough.”60 Eugenic sterilization laws in the United States helped to inspire comparable laws in Nazi Germany, used to sterilize mentally ill patients and others deemed to be a threat to German blood; after World War II, eugenics was largely discredited because of its association with Nazi ideology and practices.61
We might be tempted to dismiss eugenics as a political misuse of science, insofar as it was promoted and applied by people who were not scientists, such as President Roosevelt or Adolf Hitler, or by men who worked in eugenics but were not trained in genetics, such as the superintendent of the Eugenics Record Office, Harry Laughlin, who testified in the US Congress on behalf of eugenic-based immigration restrictions.62 But that only gets us so far, insofar as eugenics was developed and promoted to a significant extent by biologists, and by researchers who came to be known as “eugenicists.” Moreover, like Clarke’s Limited Energy Theory, eugenics was presented as a deduction from accepted theory, in this case Charles Darwin’s theory of evolution by natural selection. If, as Darwin argued, traits were passed down from parent to offspring, and fitness was increased by the differential reproduction and survival of fit individuals, then it stood to reason that the human race could be improved through conscious selection. Darwin had developed his theory of natural selection in part by observing selective breeding by pigeon fanciers: breeding was the deliberate and conscious selection of individuals with desirable traits to reproduce, and the culling of those with undesirable ones. If breeders improved their pigeons, dogs, cattle, and sheep through selection, was it not obvious that the same should be done for humans? Should we not pay at least as much attention to the quality of our human offspring as to that of our sheep? And was it therefore not equally obvious that society should take steps to encourage the fit to reproduce and discourage the unfit? This latter question had famously been posed in the eighteenth century by Thomas Malthus, who argued against forms of charity that might encourage the poor to have more children, and who, through his arguments about the inexorable mathematics of reproduction, inspired Darwin.63
The founder of “scientific” eugenics is generally taken to be Darwin’s cousin Francis Galton (1822–1911), and many elements in Darwin’s work seemed to support the view that the laws of selection that operate in nature must also operate in human society. In The Descent of Man (1871), for example, Darwin made clear that he believed that natural selection applied to men as well as beasts, and he argued that some human social practices, such as primogeniture, were maladaptive. It was not a stretch to read Darwin as suggesting that human laws and social practices should be adjusted to account for natural law.
For humans, Galton argued, the most important trait was intelligence, and so he undertook the study of intelligence and heredity. Many physical traits, such as height, hair, skin, and eye color, and even overall appearance, seemed to be largely inherited, but was intelligence? In Hereditary Genius (first published in 1869), Galton concluded that it was. Analyzing the family trees of “distinguished men” of Europe, he found that a disproportionate number came from wealthy or otherwise notable families. While he recognized that “distinction” was not the same as “intelligence,” Galton used it as a proxy. Finding that distinctions of all types—political, economic, artistic—did cluster, he concluded that character traits ran in families just as physical ones did.64
Galton, however, observed one crucial difficulty, what he called the law of reversion to the mediocre: the offspring of distinguished parents tended to be more mediocre—that is to say, more average—than the parents. He illustrated this with height: tall couples gave birth, on average, to children who were not as tall as they were. In an early insight into what we would now call population genetics, Galton reasoned that children were inheriting traits not only from their parents but also from their grandparents and great-grandparents—in effect, from their entire family tree. The same would be true of any trait, including intelligence.
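Galton’s observation can be sketched in modern statistical notation, which he himself did not use; what follows is a minimal illustration of regression toward the mean, not a full account of heredity:

\[
\mathbb{E}[C \mid P] = \mu + r\,(P - \mu), \qquad 0 < r < 1,
\]

where \(P\) is the parents’ value of a trait such as height, \(\mu\) is the population average, \(C\) is the child’s value, and \(r\) is the regression coefficient (Galton’s height data suggested a value of roughly two-thirds). Because \(r < 1\), the children of exceptional parents are, on average, pulled back toward \(\mu\); in this simple model, absent continued selection, an initial deviation from the average shrinks by a factor of \(r\) in each generation, which is one way to see why Galton expected improvement to require selection sustained over many generations.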
Galton’s conclusions regarding the prospects for overall improvement of the human race were thus pessimistic: if offspring inherited from their entire family tree, then improvements would take many generations to achieve. Pigeon fanciers and dog breeders did not achieve their results in a single generation but by patient selection over years and decades, and for human breeding this was unrealistic. Galton did suggest vaguely that “steps” should be taken to encourage the “best” to procreate more and the “worst” to procreate less, so as “to improve the racial qualities of future generations” and avoid “racial degeneration.”65 Such non-coercive encouragement came to be known as “positive eugenics.” But Galton was not optimistic that a program of eugenics could be readily or reasonably achieved; the law of reversion to the mediocre seemed to undermine such aspirations. Others, however, insisted not only that human improvement through breeding could be achieved, but that it needed to be.