Even more revealing, in the summer of 1957 the president of the History of Science Society, Henry Guerlac, refused at an infamous meeting even to consider opening the society’s meetings or journal to scholars interested in the history of technology. As it was described in a 1998 appreciation of historian of technology Melvin Kranzberg, “Guerlac’s refusal to allow either, lest the history of science be tainted by such alleged intellectual inferiority—by attention to lowly ‘tinkerers’ rather than great ‘thinkers’—is what spurred Mel [Kranzberg] to establish a separate society with its own journal. The experience reinforced his growing conviction that the history of technology required and deserved autonomy as an intellectual endeavor.”11
Thus emerged a sharp, highly policed line between pure science and impure technology. Both terms seemed to reference something grand and self-evident: scientists made knowledge; technology was only knowledge applied. This idea placed the physicists who made the bomb, with their theories of atomic structure, in the arena of the truths of nature, and allocated the actual weapon itself to engineers, who were beneath historical attention (for historians of science). Biological weapons, chemical weapons, and ICBMs were all “technology,” not science. In this way, the idea of a clear and inviolable distinction between science and technology was promoted in ways that placed science in a morally privileged position.
Indeed, the Johns Hopkins geophysicist Merle Tuve, who led the effort to develop a new proximity fuse in 1941 at the Applied Physics Laboratory, saw the distinction between science and technology as a central part of his own scientific identity. In her 2012 paper, Wang has beautifully characterized the logic and practice of this idea in Tuve’s professional life. As Tuve told an audience in 1958, “Science is not airplanes and missiles and radars and atomic power, nor is it the Salk vaccine or cancer chemotherapy or anticoagulants for heart patients. These all are technological developments.… Science is knowledge of the natural world about us … it is the search for new knowledge about the marvelous world in which we find ourselves.”12 Note that Tuve even left out the Salk vaccine!
We can sympathize with his desire to draw this line—he was a scientist who helped make weaponry—but we need to recognize its historical specificity and its limitations as a complete account of the relationships between different forms and iterations of technical knowledge. It is also important to understand that the forces and tensions that animated this boundary work were deep and oppressive for those who managed professional lives as scientists in the Cold War in the United States. The stakes were very high for scientific professionals.
In a heartfelt July 1954 letter to a powerful Atomic Energy Commissioner, for example, the Yale University biophysicist Ernest Pollard described how he learned to keep secrets. “Many of us scientists learned the meaning of secrecy and the discretion that goes with it during the war,” Pollard said. “We had very little instruction from outside.” When the war was over, he made a conscious decision to avoid secret research. He “thought carefully through the problems of secrecy and security” and made the decision to handle only material that was entirely open. “I returned one or two documents I received concerning the formation of the Brookhaven Laboratory, in which I played a small part, without opening them.” But the outbreak of the Korean War, in June 1950, and his own concerns about the Soviet Union, led to a change of heart. He came to feel that “I as a scientist should pay a tax of twenty percent of my time to do work that would definitely aid the military strength of the United States.”13
In the process, as he engaged in secret research during the Cold War, he learned a form of extreme social discipline that he called “the scientist’s morality.” “I have learned to guard myself at all times, at home, among my family, with the fellows of my college when they spend convivial evenings, with students after class asking me questions about newspaper articles, on railroad trains and even in church. It has been a major effort on my part, unrelenting, continually with me, to guard the secrets that I may carry.”14
Pollard’s comments resonate with those of many other experts in the heart of the Cold War in the United States. Being a scientist often meant concealing one’s work and ideas from friends, family, students, colleagues. An enterprise founded on an ideology of openness and free exchange became increasingly oriented around keeping secrets.15 Individual scientists could lose their jobs if they lost their security clearances.16 And security clearance could be withdrawn for a wide range of infractions, including accepting dinner invitations from people who were members of the Communist Party.17
Scientists even lost jobs for refusing to testify when called before the House Committee on Un-American Activities.18 The physicist David Bohm, who lost his assistant professorship at Princeton for this reason, went on to make illustrious scientific and philosophical contributions under difficult circumstances in Brazil and, later, in the United Kingdom.19
Scientists were also harassed by the 1950s equivalent of Twitter trolls. Henry deWolf Smyth, who voted to permit the Princeton physicist J. Robert Oppenheimer to keep his security clearance (a vote seen by some as an unpatriotic act), received a threatening letter from an “Angry American Family” who promised “some day we Americans will catch up to all of you traitors.”20 The geneticist Arthur Steinberg was also subject to shocking public attacks, losing a deal on a house and several jobs because of inaccurate reports that he was a Communist. Steinberg gave only thirty-five documents to the historical archives at the American Philosophical Society (APS) in Philadelphia. All of them chronicled the cruelty with which he was treated after being accused.21 A January 1954 letter from his attorney to a housing development where Steinberg and his wife had attempted to purchase a home described “the anonymous phone calls which my client received from some neighbors threatening dire consequences if they lived in the house.” A 1948 letter from a colleague openly stated that Steinberg had been removed from the list of viable candidates for a job because departmental faculty had heard about “the Communist charges.” In selecting what he chose to donate to the APS archives, Steinberg clearly intended that his painful experiences not be forgotten.22
Other scientists made quiet bargains, parceling out their time so that some percentage of their professional life went to “pure science,” and some to defense-related work in the name of patriotism. Like Pollard, they drew the boundaries of their professional lives with personal calibrations of responsibility, with many kinds of lines in the sand. MIT biologist Salvador Luria announced in 1967 that he would not work on any defense projects, in protest against the Vietnam War.23 More judiciously, in 1969, Ronald F. Probstein and his fellow researchers at MIT’s Fluid Mechanics Laboratory made what they called a “directed effort to change” their research. They reduced the amount of military-sponsored research they were doing from 100 percent to 35 percent, with the remaining 65 percent explicitly devoted to “socially oriented research.” The point was not to sever all ties to military research. Rather, they wanted to “redress an imbalance.”24
These experiences shed some light on the struggles and strategies of the rank-and-file experts who fueled economic growth and facilitated national defense in the heart of the Cold War in the United States. They learned to keep secrets, lie, and pass polygraphs. They shared tips about what to say in security clearance hearings, how to burn trash, how to manage selective service requirements, how to conceal the military relevance of a project, and how to manage the anger of their peers. They became vulnerable to having their science swerved by defense interests, to possible prosecution, fines, or even deportation, and to the skepticism of their peers, either because those peers believed them to be disloyal, Communists, or socialists, or because those peers viewed them as overly dependent on defense funding. They also learned how to make things and ideas that produced massive human injury. The professional and personal stakes were high; the risks real, the embeddedness of knowledge in the state’s monopoly on violence profound.25 Within the scientific community, who could be trusted, both as an expert witness to nature’s ways and as a proper patriot, on the right side of a global ideological war? And in the broader civic world of political and social order, how could trust in the scientific community be sustained, when scientists made bombs, concealed from the public the nature of their work, and turned on each other in such a toxic political environment? Secrecy does not usually engender trust. And infighting within a community can undermine its legitimacy.
Oreskes suggests that science can be trusted because flawed scientific ideas in the past were subject to contemporaneous criticism—there were individuals objecting to scientific theories later found to be inaccurate, for example about women’s bodies, eugenics, or plate tectonics. Their existence—their voiced public objections—exemplifies in some way the self-correcting properties of scientific knowledge, she proposes. In practical terms, this might not be as powerful an argument as it has long been presumed to be. And more ominously, minority objections to climate science, evolution, and other scientific ideas from people who hold PhDs are not unheard of today. Indeed, it is entirely possible to find PhDs in astronomy who have believed that aliens came to earth some time ago and now live among us.26 Singular voices of any kind are not necessarily reassuring.
The activities of the “alt-science” advocate Art Robinson, a trained PhD who once collaborated with Nobel Prize winner Linus Pauling (and who seems to have made a career out of proclaiming that Pauling was “wrong”), suggest just how diverse the scientific community is in practice.27 The existence of diverse voices means only that the credentialed community of knowledge producers can and does include people who think very differently. What bearing do such differences of viewpoint and opinion have on the legitimacy of public trust in science?

Oreskes places consensus at the heart of her analysis, proposing that science is a collective accomplishment, and that this collectivity is the source of its legitimacy and power. It comes into being through a process, with twists and turns, dead-ends, disputes, and resolutions, and the messiness of this process is a virtue, rather than a flaw. As she tracks the disturbing current state of affairs, Oreskes shows how much science now needs defenders, and defenses. We may have lost, she proposes, the Enlightenment vision of trustworthy natural knowledge that could be made by human (and inherently flawed) actors through disciplined rules of testing and experiment. But we still retain the power of consensus and reason. This kind of argument is utterly persuasive to me. It may not be to some of those most determined to disbelieve the findings of evolution or climate change or vaccine science.

And yet many people do trust science. They may not fully recognize that deep trust, because science has been so resolutely distinguished from technology, for so long, with such intensity. It is possible that most citizens in industrialized societies have a kind of subterranean, unreflective trust in technical knowledge because “it works” and because it is, almost literally, the texture that defines their lives, every hour of every day. One of the struggles of all social theory is to find a perspective from which the waves and gravity can be detected, the water we swim in experienced—a problem Einsteinian in its dimensions. Where should we stand to understand the problem of trust? What are the right questions?
As the science studies scholar Donna Haraway put it in a famous 1988 paper, “situated knowledge” reflects the politics and epistemologies of location (in every sense of the term). Science makes “claims on people’s lives” and there should be room for views of nature in which “partiality and not universality” is seen as rational. The “view from a body,” she proposed, is paradoxically more powerful than “the view from above, from nowhere, from simplicity.”28 Haraway was responding to the difficulties feminist scholarship faced in relation to “objectivity.” Some feminist scholarship at the time seemed to be engaged in a brutal “unveiling” project, which would demonstrate that scientists were irretrievably biased, that knowledge (for example, knowledge supporting the idea of female inferiority) was contaminated by social beliefs, and that even (most terrifying) perhaps there was no objective knowledge at all, no position from which truth could be seen. These forms of “high” social constructivism, in which knowledge was reduced to social interests and hopelessly contaminated, threatened to render technical knowledge irrelevant, a prospect that Haraway and many other scholars found disturbing. How could feminist theory facilitate an understanding of a real world that could be friendly to human needs if it rejected the possibility of legitimate knowledge?29 Feminist objectivity, she proposed, “makes room for the surprises and ironies in the heart of all knowledge production; we are not in charge of the world.”
As recent events have made clear, the scientific community is decidedly “not in charge of the world.” For at least a century, it has wrestled with the growing relevance of virtually all scientific fields to the “garrison state.” Now it finds a new kind of embeddedness, in a different kind of state power. The experiences of scientists in the Cold War, as they navigated totalizing systems of political control, have a new, chilling resonance. Their situations—their positions and embeddedness—help us see some of the ironies of knowledge production. None of the scientists I consider in this essay had the option of opting out completely. It was not possible to do so and to continue to engage in scientific labor. Even if they did not do defense work, they were training students who would. How, then, did they make sense of their predicament? And what can their predicaments teach us about the predicaments of scientists today?
The generation I focus on had learned in the course of their formal education in the 1930s that science was open, universalistic, and internationalistic, an endeavor focused on the “welfare of mankind.” Yet in practice, in the heart of the Cold War, for many scientists, their research was not open but secret, not internationalistic but nationalistic, and not conducive to welfare but engaged with the sophisticated technical production of injury to human beings. These forms of injury were realized through new weapons, new surveillance methods, new information systems, even new ways to interrogate prisoners using psychological insights, to bring down economies, or to start epidemics with biological weapons (“public health in reverse”). Experts in fields from physics to sociology found their research calibrated to empower the state, and scientists trained to see themselves as creating knowledge as a social good found themselves engaged in something that felt very different to them. Professional societies, from the American Association for the Advancement of Science to the American Society for Microbiology to the American Chemical Society, created committees on “social issues” and produced statements on science and the “welfare of mankind” through the 1950s, ’60s, and ’70s. Meanwhile, their members made weapons and worked in the defense industry.
The profound struggles around science and violence in the twentieth century animated public strategies that corralled science and moved it safely out of the kitchen, the clinic, the urban street, and, most importantly, the battlefield. By drawing sharp boundaries between pure and applied knowledge, many scientists pursued a sometimes morally encoded strategy intended to preserve the pure core of technical truth, the innocence of pure science, by isolating it from technological things like guns, gunpowder, bombsights, nuclear weapons, chemical weapons, and psychological warfare. This strategy often enforced a hierarchy separating scientists from engineers, physicians, and other experts who “got their hands dirty.”
I would suggest that it may also have played a role in obscuring from general view the saturation of everyday life today with scientific knowledge.
Oreskes has called us to think critically about public trust in science. It is clear that the answer is not that science should be trusted because it is always true, right, accurate. It is not always any of those things. But it is trustworthy despite its humanness, its vulnerabilities to misunderstanding, error, misplaced faith, social bias, and so on. It is trustworthy because of the sustained human labor that goes into making it, the integrity of the process, and because it has already transformed human life in so many ways that are obvious, transparent, profound.
All of the protocols that Oreskes describes are conducive to, but not determinative of, validity. She invites us to think about the fragility of knowledge as a resource for trust, suggesting that people should trust science precisely because it is a system responsive to evidence, observation, experience. This is a stronger argument than perhaps even she realizes, for when we add everyday technology to this potentially trust-generating mix, we find something familiar that can be persuasive. For many people the reliability of everyday technologies might be as close as they ever get to scientific knowledge. But it is closer than it seems.
In his New Yorker review of the historian of technology David Edgerton’s book The Shock of the Old, the historian of science Steven Shapin described himself writing in his kitchen, where he was surrounded by technology: a cordless phone, a microwave oven, and a high-end refrigerator, while working on a laptop. His essay noted that “the texture of our lives would be unrecognizable” without these things made from technical knowledge.30 His comments were inspired by Edgerton’s provocative book, in which he explores the “creole” technologies of what he calls, bluntly, the “poor world.”31 Creole is a term that commonly refers to local derivatives of something from elsewhere—such as cars from 1950s Detroit that are still running in Havana. It is not generally intended to suggest sophistication, innovation, or elite knowledge. But Oreskes shows us that elite science too has properties of “creole” making do, cobbling together data and ideas that can stay on the road, without being perfect and without the advantage of the original parts. It is not an idealized social and intellectual system of pure truth, free of misunderstanding, confusion, or error. It is pretty much the best we can do, and much of the time, it works—like that iPhone.