Our current predicament stems from the rapid growth of humanity’s power outstripping the slow and unsteady growth of our wisdom. If this is right, then slowing technological progress should help to give us some breathing space, allowing our wisdom more of a chance to catch up.
Toby Ord, The precipice
1. Introduction
This is Part 3 in my series Harms. Risk mitigation has potential harms, as well as benefits. This series aims to chronicle some of the harms that existential risk mitigation may bring about, so that the value of risk mitigation efforts can be properly assessed.
Part 1 looked at the risk of distraction. Part 2 looked at surveillance.
Today’s post looks at efforts to delay potentially beneficial technologies, on the grounds that these technologies are unacceptably risky. I argue that even if effective altruists are correct that the risks of progress in some areas outweigh the benefits, the costs of delaying technological progress may nonetheless be substantial and should be accounted for.
2. Technology and progress
For all its ills, technology has brought enormous benefits. Let’s recall some of the trends that technological progress has helped to drive, and from which readers continue to benefit.

Extreme poverty has gone in a few short centuries from the normal human condition to an increasingly uncommon state.

For all that philanthropists, including effective altruists, have done to reduce poverty, the largest contributor has been technology and the accompanying growth in economic output. Indeed, there is a striking inverse correlation between extreme poverty and per-capita GDP.

Life expectancies have nearly doubled over the past few centuries.

Although we have not yet broken free from the five-day workweek, working hours have steadily declined with the advance of technology.

Not only are our lives getting better, but there are also more of us around to reap the benefits.

I will not pretend that technology brings no ills, or that technology is solely and completely responsible for all of these trends. But it should not be denied that technology has steadily enabled the Earth to support many more people, to support them for longer, and to enable them to flourish in previously unimagined ways.
3. Cause for delay: Differential technological development
Longtermists acknowledge the vast benefits that technology may provide. Nevertheless, they are concerned that some technological developments carry risks that are not worth the benefits they may bring. Longtermists advocate a strategy of differential technological development through which development of risk-inducing technologies is slowed, and development of safety-inducing technologies is accelerated.
Here is the classic statement of differential technological development, due to Nick Bostrom (2002):
Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.
The idea of differential technological development is echoed in other key texts. For example, here is Toby Ord in The precipice:
An interesting, and neglected, area of technology governance is differential technological development. While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones. This could be a role for research funders, who could enshrine it as a principle for use in designing funding calls and allocating grants, giving additional weight to protective technologies. And it could also be used by researchers when deciding which of several promising programs of research to pursue.
How might differential technological development be achieved? A recent paper by Jonas Sandbrink (formerly of FHI) and colleagues suggests several strategies for delaying the development of risky technologies. For example, we might defund research into risky technologies; issue moratoria or outright bans on further research; pursue social advocacy; initiate divestment campaigns; or propagate social norms against risky research.
The same paper considers a number of strategies for advancing risk-reducing technologies. For example, we might: fund or directly develop risk-reducing technologies; offer prizes for development of such technologies; provide tax incentives for risk-reducing technologies; or coordinate development of risk-reducing technologies across key researchers and stakeholders.
Today’s post will not focus on strategies for advancing risk-reducing technologies, which may escape some of the concerns raised here. My concern here will be with delaying the development of risky, but potentially beneficial technologies.
4. Signs of delay
The move to delay welfare-enhancing technologies is not confined to longtermist theorizing. Longtermists are actively striving to delay major technological developments today.
The Future of Life Institute circulated a widely-publicized open letter calling for a pause on the training of powerful AI systems, signed by luminaries including Yoshua Bengio, Stuart Russell, and Elon Musk. This letter grew into a full-fledged movement for an AI Pause, with its own website and over a dozen protests carried out to date. One of the first protests was led by effective altruists directly after an EA Global conference in London. Feature-length pieces in TIME, the New York Times and other publications echoed the call for a pause.
That didn’t happen, and longtermists quickly realized that calls for an outright pause on AI development were politically infeasible. Longtermists increasingly turned their sights from pause to delay. Discussions are increasingly populated with ideas for slowing AI development. These ideas have been coupled with a massive political push in Washington, London, and elsewhere. They have also entered Silicon Valley boardrooms, as witnessed by the boardroom drama at OpenAI last year.
Longtermists also urge delay of other technologies beyond artificial intelligence. Most obviously, longtermists have urged delay in gain-of-function research, genetic synthesis and other dual-use biotechnologies that can be used by malicious actors to cause harm.
However, longtermist calls to slow technological progress often take a far wider scope than this. For example, early longtermists were deeply concerned about progress in nanotechnology. After his original 2002 statement of differential technological development, Bostrom goes on to illustrate the principle in the very next sentence through its application to nanotechnology:
Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies. In the case of nanotechnology, the desirable sequence would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation.
Here the call for differential technological development gains a wider, and some might worry more problematic, scope: in hindsight, early longtermists’ concerns about nanotechnology look to have been overstated.
For another example of the scope of calls to restrict technological development, a senior grantmaker at Longview Philanthropy recently called for abandoning the SETI program:

Such widened calls for delay go far beyond an urging to slow the progress of powerful AI systems. As calls for delay widen, their costs grow. Let’s close by reflecting on the cost of delay.
5. The cost of delay
So far, we have seen that longtermists advocate delaying a number of technologies that may carry enormous benefits for humanity, including artificial intelligence and synthetic biology. Perhaps these delays are warranted by the need to reduce risk. But even if they are warranted, this does not change the fact that they introduce tangible harms which must be counted against the value of existential risk mitigation efforts.
For example, despite everything that philanthropists have done to reduce extreme poverty, it is plausible that the strongest driving force in the reduction of extreme poverty has been economic development, facilitated in large part by new technology. If that is right, then poverty reduction is one area in which technology may have provided at least as much benefit as the billions of dollars donated by effective altruists and others.
Developments in artificial intelligence and other technologies may bring vast benefits, potentially comparable to or even exceeding the benefits of extreme poverty reduction. If that is right, then restrictions on the development of artificial intelligence may impose harms greater than all the good effective altruists have ever done to combat poverty. This is not a small harm, and we would be remiss if we were to weigh longtermist interventions against short-termist competitors without considering it.