Cambridge, Massachusetts – It is always worth remembering that in the grand sweep of history, we are the fortunate ones. Thomas Hobbes’s description of life as “solitary, poor, nasty, brutish and short” was apt for most of human history. Not anymore. Famines and hunger have become rarer, living standards for most people have risen and extreme poverty has been reduced substantially over the past few decades. Average life expectancy at birth even in the least healthy parts of the world is above 60 years, whereas a British person born in the 1820s would have expected to live to around 40.
But these fantastic improvements have been accompanied by catastrophic risks. Even though COVID-19 has shaken us from our complacency, we have yet to grapple with the dangers still facing us.
The improvements of the past 200 years are the fruits of industrialization, made possible by our acquisition of knowledge and mastery of technology. But this process involved trade-offs. Driven by the desire for wealth, firms and governments sought to reduce costs and boost productivity and profits, which led to disruptions that sometimes left hundreds of millions of people impoverished and unemployed.
For decades, workers in mines and factories were brutally coerced to eke out ever more output, until they managed to organize and secure some political power for themselves. And, of course, the early industrial age encouraged slavery and the quest for access to natural resources, which led to massive wars and brutal forms of imperialist rule.
These excesses were neither an aberration nor inevitable. Many have since been corrected through the market economy, labor relations reforms, state regulation and new (often democratic) institutions. But other significant unintended consequences of industrialization have yet to be addressed, because no organized political constituency ever emerged to push for solutions. The most pressing concern is catastrophic global risk, the most obvious example being anthropogenic climate change — a prime illustration of how a process of enrichment can create an existential threat.
A second, somewhat related problem is biodiversity loss. The estimated rate of species extinction today is anywhere from 100 to 1,000 times that of the pre-industrial era, yet there is still very little recognition of the risks created by such a radical destabilization of nature.
The third global risk is nuclear war. Splitting the atom exemplifies both our mastery over nature and the potential for profound misuse of science and technology. Though nuclear technology has many peaceful applications (and may have a short-term role to play in addressing climate change), its most important consequence has been to inaugurate an era of mutually assured destruction. As with climate change and biodiversity loss, we still do not appreciate the risks that nuclear technology poses to humanity; in fact, countries that have nuclear arsenals are now rebuilding and expanding them.
A fourth major risk is artificial intelligence, which could lead to technologies that we cannot control. In addition to the risk that superintelligent algorithms wipe out humanity, AI also has the potential to be deployed as an instrument of surveillance and repression, paving the way to a new kind of serfdom. And governments are already developing AI and autonomous weapons that could be put to all kinds of nefarious uses, especially if they end up in the wrong hands.
Though no one can deny these risks, most people’s first instinct is to discount steeply the likelihood of a catastrophic scenario. But this is misguided. During the 20th century, the world came close to nuclear war on multiple occasions. Because we were lucky, we now assume retrospectively that the risk was never as high as it seemed.
But consider the counterfactual scenario. Where would we be today if all-out nuclear war had not been averted by the actions of Vasili Alexandrovich Arkhipov, the second-in-command who, at the height of the Cuban Missile Crisis, urged restraint when the other officers aboard his Soviet nuclear submarine B-59 mistakenly believed they were under attack by the United States? We certainly wouldn't be reading books about the supposed decline in violence over time.
On the other hand, those who do recognize the dangers posed by climate change and AI too often jump to the conclusion that economic growth itself is the problem. They argue that reducing emissions, preserving nature and preventing the misuse of technology require a deceleration or reversal of production, investment and innovation.
But pulling back from growth and technological progress is neither realistic nor advisable. The world is still a long way from ending poverty, and what people in both rich and poor countries need most right now are good jobs that leverage technology in the interest of workers themselves. Without secure employment and income growth, U.S. President Donald Trump and British Prime Minister Boris Johnson will not be the last right-wing demagogues to threaten established democracies.
The only responsible option is to forge a new growth strategy that emphasizes the kind of technological innovation needed to address global threats. The goal should be to create a regulatory environment that encourages firms and entrepreneurs to develop the technologies we actually need, rather than those that merely increase profits and market share for a narrow few. And, of course, we need a much greater focus on shared prosperity, so that we do not repeat the errors of the last four decades, when growth became decoupled from most people’s lived experience (at least in the Anglo-Saxon world).
Although our track record in combating climate change is poor, we can embrace the fact that once-costly forms of renewable energy are now competitive with fossil fuels. This did not happen because we turned our back on technology. Rather, it is the outcome of technological advances brought about by a regulated market economy in which firms responded to carbon pricing (especially in Europe), subsidies and consumer demand.
The same recipe can work against other catastrophic risks. The first step is to acknowledge that these risks are real. Only then can we get on with the business of building better institutions and re-empowering the state to shape market outcomes with humanity’s shared interests in mind.
Daron Acemoglu is a professor of economics at MIT. © 2020, Project Syndicate