The Science Versus Morality Conundrum
Science and morality occupy different domains. Science is about what “is”; morality is about what “ought” to be. What “is” does not allow us to infer what “ought” to be. David Hume, an 18th-century Scottish philosopher, showed conclusively that direct inference from observations cannot lead to moral statements. Others have expanded on his reasoning. In 1903, the British philosopher G. E. Moore coined the term “naturalistic fallacy” for unsupported inference from observations to moral guidance.
One could be tempted to attribute moral value to behavior that occurs in nature. An understanding of evolutionary biology does not support such claims. Nature can be cruel: diseases, pests, and predators evolved for their own survival. Moral philosophy is created by humans, and its scope is largely human behavior. To be sure, humans may learn from other species. However, there is no reason to assume that behavior found in nature must be morally good in a human context.
It may then appear that we cannot say anything about morality with any kind of objectivity. This is where the situation becomes paradoxical. Few scientists or philosophers would call the end of life on Earth a morally neutral event. Nuclear technology, climate change, and artificial intelligence, among other existential threats, can result in the eradication of all life. A meteorite could too. However, a meteorite is not created by humans. Humans did create nuclear, climate-impacting, and AI technologies, and scientists and technologists continue to advance them. If the eradication of life on Earth is objectively bad, it is hard not to consider the actions that cause it morally bad.
Did this reasoning involve the naturalistic fallacy of inferring “ought” from “is”? This question could well spawn a moral philosophy article. It did not involve deductions from observations. From a strictly logical perspective, however, it could be seen as committing the fallacy of denying the antecedent: “if there is life, moral value exists,” but if there is no life left, we cannot say anything about moral value. On strictly logical grounds, we therefore cannot infer that an outcome without life must be immoral. However, it would take substantial cynicism to claim that this logical argument has practical relevance.
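For readers who prefer formal notation, the inference pattern in question can be written out explicitly. This is the textbook schema of denying the antecedent, added here only as an illustration and not part of the original argument:

$$
P \rightarrow Q,\quad \neg P \;\;\not\vdash\;\; \neg Q
$$

with $P$ standing for “there is life” and $Q$ for “moral value exists.” The premises say nothing about what holds when $P$ fails, which is exactly why the conclusion does not follow, and why, strictly speaking, a lifeless outcome cannot be labeled immoral on these grounds alone.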
From the prediction of a doomsday event, one could attempt to derive moral guidance for preventing it. It may be hard to make conclusive statements on this basis. The scientists working on the Manhattan Project could have said: “Because of the risks of atomic bombs, we should not build them.” Other countries might have built them anyway. An attempt to use the avoidance of a catastrophic event for moral guidance would rely on results from the social sciences. It would also benefit from experience with moral philosophy. However, the problem already comes with a definition of moral value; it would not rely on moral philosophy for a separate one. Once doomsday avoidance has been set as the goal, there is no need or motivation to commit the naturalistic fallacy.
Survival is at the foundation of the life sciences and arguably of science more generally. Natural selection is also called “survival of the fittest.” Without the survival of any life, there is no evolution. Without life there can be no science or moral philosophy. There is no basis in science or philosophy for arguing that the eradication of all life could constitute a moral value. One could expand the role of science in moral philosophy by arguing that even a doomsday event that preserves some life is objectively undesirable. Any sufficiently impactful catastrophic event leaves future generations worse off than the generation that enabled it. That in itself creates an objective unfairness of one generation toward future ones. The next post will discuss this reasoning in more detail.
The possibility of a life-ending event has occupied human imagination for as long as history has been recorded. Many religions discuss end-of-world scenarios. Since the development of nuclear bombs at the latest, humanity itself has had the power to cause such an event. The physicists involved in the Manhattan Project were well aware of their work’s moral significance. In 1947, the Bulletin of the Atomic Scientists created the Doomsday Clock as a symbol of nuclear technology’s potential for causing a catastrophic event. Global risks have fluctuated over the years. In 2007, climate change was added to the risks that are considered. The clock is currently set to 90 seconds before midnight, closer than it has ever been in its history.
It may seem as if the possibility of a man-made doomsday event does not allow for much moral inference. Avoidance of a catastrophic outcome is a blunt tool, and any conclusions might seem trivial. However, scientists routinely use minimal information to construct complex theories. The special theory of relativity was deduced from the observation that the speed of light does not depend on the motion of the reference frame. That single piece of information is all it takes to conclude that classical mechanics does not work at high relative speeds, and it led directly to special relativity. (https://annedenton.substack.com/p/the-paradox-of-the-downward-leading)
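As an illustration of how far a single observation can carry a theory, here is the standard light-clock sketch (a textbook derivation, not taken from the linked post): a light pulse bounces between two mirrors a distance $L$ apart. In the clock’s rest frame, one tick lasts $\Delta t' = 2L/c$. Viewed from a frame in which the clock moves at speed $v$, the pulse traces a longer diagonal path but, by the postulate, still travels at $c$:

$$
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
\qquad\Longrightarrow\qquad
\Delta t = \frac{\Delta t'}{\sqrt{1 - v^{2}/c^{2}}}.
$$

One postulate about the speed of light is enough to force time dilation, and with it the conclusion that classical mechanics fails at high relative speeds.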
It is perfectly realistic to use the possibility of a man-made doomsday for deciding on preventive steps. To some extent this is being done. Consider the possibility of a climate-induced catastrophic event. A collective reduction in greenhouse gases is recognized as an important political goal. Some may object to calling the goal “moral.” Using the term “moral philosophy” may be yet more contentious. Even if we were to view climate goals as moral, there is no reason to assume that they provide answers to all moral questions, or that they can be achieved. Ultimately, there is little ambiguity that the prevention of a doomsday event constitutes a moral value. Values do not have to be all-encompassing or achievable to be considered moral.
In the context of existential risks, “is” and “ought” are inextricably linked. A man-made doomsday event is possible because of science and technology, and it ought not happen. Avoiding it is a challenge that is simultaneously scientific and moral. It cannot be fully addressed without quantitative tools, no matter how incongruous the terms “quantitative” and “morality” may sound. The usefulness of quantitative specifications of moral problems may be more obvious for climate impacts than for nuclear technology. Once those involved in the Manhattan Project saw the result of their work, there was nothing they could do to make the science unknown. Scientific research cannot be undone. At that point, decisions about the technology were largely outside their control. Nuclear scientists did and still do try to influence proliferation decisions, but their role is limited. Climate change is different. It involves a large number of technological decisions over an extended timeframe.
Why, then, do so few scientific publications address morality? Technological enthusiasm may account for part of the answer. Those who create problems may not be best at identifying them. The more important reason is likely a financial conflict of interest. The livelihood of academic scientists typically depends on external funding. Such funding is ultimately under the control of political or commercial entities. That is true even for the US National Science Foundation, which is known for its high professional standards in awarding grants. The overall NSF budget is signed off by the US Congress. Individual grants may be handled non-politically. Yet the overall direction of the organization is subject to political approval.
In the domain of artificial intelligence, it has recently become more acceptable to study ethical impacts, but the professional risks are still high. In 2021, Emily Bender and coauthors published an article that discussed problems with large language models: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” It discussed the ethics of models that are central to the basic operation of search engines. Encoded bias was one concern: language models are trained on text that is available online, and online documents inevitably include societal biases. Another concern addressed in the article related to climate impacts. The training of large language models is highly energy-consuming. The authors cite a study estimating that the energy consumption of a single training run of a large language model is comparable to that of a trans-American flight.
In the wake of the publication of the above article, two of its coauthors, Timnit Gebru and Margaret Mitchell, lost their jobs at Google. Researching moral implications is risky for scientists. Not doing so is risky for science. If a man-made problem, whether nuclear-, climate-, or artificial-intelligence-related, becomes catastrophic, questions of responsibility will be asked. Advanced technologies would not exist without past scientific research. At least some moral responsibility for researching and building technologies rests with those who enable them. Generically offloading moral questions to the discipline of moral philosophy is not an option. Climate impacts of technology may appear to some like a minor side effect. Yet side effects can catch up with and overtake the primary objectives in importance.
We ought not create a man-made doomsday. Few people would directly object to that statement. Some may question the science that reveals the risks we are facing. Ironically, few people ever doubt the validity of the science that enables a technology in the first place. Some may question whether the term “morality” is appropriate. The term has a long history that includes religious studies, and religious scholars do not normally support the idea that aspects of morality can be quantified. “Ethics” is more commonly used for questions of appropriate behavior in science and engineering. Ultimately, those are issues of word choice and subject delineation. The essence should not be in question: we ought not destroy our ability to exist.