Harms (Part 3: Delay)

Our current predicament stems from the rapid growth of humanity’s power outstripping the slow and unsteady growth of our wisdom. If this is right, then slowing technological progress should help to give us some breathing space, allowing our wisdom more of a chance to catch up.

Toby Ord, The precipice

1. Introduction

This is Part 3 in my series Harms. Risk mitigation has potential harms, as well as benefits. This series aims to chronicle some of the harms that existential risk mitigation may bring about, so that the value of risk mitigation efforts can be properly assessed.

Part 1 looked at the risk of distraction. Part 2 looked at surveillance.

Today’s post looks at efforts to delay potentially beneficial technologies, on the grounds that these technologies are unacceptably risky. I argue that even if effective altruists are correct that the risks of progress in some areas outweigh the benefits, the costs of delaying technological progress may nonetheless be substantial and should be accounted for.

2. Technology and progress

For all its ills, technology has brought enormous benefits. Let’s recall some of the ways in which readers have benefited from recent technological inventions.

Extreme poverty has gone in a few short centuries from the normal human condition to an increasingly uncommon state.

For all that philanthropists, including effective altruists, have done to reduce poverty, the largest contributor has been technology and the accompanying growth in economic output. Indeed, there is a striking inverse correlation between extreme poverty and per-capita GDP.

Life expectancies have almost doubled in the past few centuries alone.

Although we have not yet broken free from the five-day workweek, working hours have steadily declined with the advance of technology.

Not only are our lives getting better, but there are also more of us around to reap the benefits.

I will not pretend that technology brings no ills, or that technology is solely and completely responsible for all of these trends. But it should not be denied that technology has steadily enabled the Earth to support many more people, to support them for longer, and to allow them to flourish in previously unimagined ways.

3. Cause for delay: Differential technological development

Longtermists acknowledge the vast benefits that technology may provide. Nevertheless, they are concerned that some technological developments carry risks that are not worth the benefits they may bring. Longtermists advocate a strategy of differential technological development through which development of risk-inducing technologies is slowed, and development of safety-inducing technologies is accelerated.

Here is the classic statement of differential technological development, due to Nick Bostrom (2002):

Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies.

The idea of differential technological development is echoed in other key texts. For example, here is Toby Ord in The precipice:

An interesting, and neglected, area of technology governance is differential technological development. While it may be too difficult to prevent the development of a risky technology, we may be able to reduce existential risk by speeding up the development of protective technologies relative to dangerous ones. This could be a role for research funders, who could enshrine it as a principle for use in designing funding calls and allocating grants, giving additional weight to protective technologies. And it could also be used by researchers when deciding which of several promising programs of research to pursue.

How might differential technological development be achieved? A recent paper by Jonas Sandbrink (formerly of FHI) and colleagues suggests several strategies for delaying the development of risky technologies. For example, we might defund research into risky technologies; issue moratoria or outright bans on further research; pursue social advocacy; initiate divestment campaigns; or propagate social norms against risky research.

The same paper considers a number of strategies for advancing risk-reducing technologies. For example, we might fund or directly develop risk-reducing technologies; offer prizes for the development of such technologies; provide tax incentives for risk-reducing technologies; or coordinate development of risk-reducing technologies across key researchers and stakeholders.

Today’s post will not focus on strategies for advancing risk-reducing technologies, which may escape some of the concerns raised here. My concern will be with delaying the development of risky but potentially beneficial technologies.

4. Signs of delay

The move to delay welfare-enhancing technologies is not confined to longtermist theorizing. Longtermists are actively striving to delay major technological developments today.

The Future of Life Institute circulated a widely publicized open letter, signed by luminaries including Yoshua Bengio, Stuart Russell, and Elon Musk, calling for a pause on the training of powerful AI systems. This letter grew into a full-fledged movement for an AI Pause, with its own website and over a dozen protests carried out to date. One of the first protests was led by effective altruists directly after an EA Global conference in London. Feature-length pieces in TIME, the New York Times and other publications echoed the call for a pause.

The pause didn’t happen, and longtermists quickly realized that calls for an outright halt to AI development were politically infeasible. They increasingly turned their sights from pause to delay. Discussions are increasingly populated with ideas for slowing AI development. These ideas have been coupled with a massive political push in Washington, London, and elsewhere. They have also entered Silicon Valley boardrooms, as witnessed by the boardroom drama at OpenAI last year.

Longtermists also urge delay of other technologies beyond artificial intelligence. Most obviously, longtermists have urged delay in gain-of-function research, genetic synthesis and other dual-use biotechnologies that can be used by malicious actors to cause harm.

However, longtermist calls to slow technological progress often take a far wider scope than this. For example, early longtermists were deeply concerned about progress in nanotechnology. After his original 2002 statement of differential technological development, Bostrom illustrates the principle in the very next sentence by applying it to nanotechnology:

Our focus should be on what I want to call differential technological development: trying to retard the implementation of dangerous technologies and accelerate implementation of beneficial technologies, especially those that ameliorate the hazards posed by other technologies. In the case of nanotechnology, the desirable sequence would be that defense systems are deployed before offensive capabilities become available to many independent powers; for once a secret or a technology is shared by many, it becomes extremely hard to prevent further proliferation.

Here the call for differential technological development takes on a wider, and some might worry more problematic, scope: in hindsight, early longtermists’ concerns about nanotechnology look to have been overstated.

For another example of the scope of calls to restrict technological development, a senior grantmaker at Longview Philanthropy, Tyler John, recently called for abandoning the SETI program.

Such widened calls for delay go far beyond urging a slowdown in the progress of powerful AI systems. As calls for delay widen, their costs grow. Let’s close by reflecting on the cost of delay.

5. The cost of delay

So far, we have seen that longtermists advocate delaying a number of technologies that may carry enormous benefits for humanity, including artificial intelligence and synthetic biology. Perhaps these delays are warranted by the need to reduce risk. But even if these delays are warranted, this does not change the fact that they introduce tangible harms which must be counted against the value of existential risk mitigation efforts.

For example, despite everything that philanthropists have done to reduce extreme poverty, it is plausible that the strongest driving force in the reduction of extreme poverty has been economic development, facilitated in large part by new technology. If that is right, then poverty reduction is one area in which technology may have provided at least as much benefit as billions of dollars of donations by effective altruists and others ever have.

Developments in artificial intelligence and other technologies may bring vast benefits, potentially comparable to or even exceeding the benefits of extreme poverty reduction. If that is right, then restrictions on the development of artificial intelligence may have harms greater than everything effective altruists have ever done to combat poverty. This is not a small harm, and we would be remiss if we were to weigh longtermist interventions against short-termist competitors without considering such a harm.
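
To make the stakes concrete, here is a minimal back-of-envelope sketch in Python of how one might tally the benefits foregone during a delay. It is only a sketch under stated assumptions: the annual benefit figure, the delay length, and the discount rate are all hypothetical inputs chosen for illustration, not estimates defended in this post.

```python
# Toy back-of-envelope model of the cost of delaying a beneficial
# technology. Every number here is an illustrative assumption, not an
# estimate from the post.

def cost_of_delay(annual_benefit: float, delay_years: int,
                  discount_rate: float = 0.03) -> float:
    """Present value of the benefits foregone during a delay.

    annual_benefit: assumed yearly benefit once the technology is deployed
    delay_years: how many years deployment is pushed back
    discount_rate: annual discount rate applied to future benefits
    """
    # The benefits lost are those that would have accrued in each year
    # of the delay, discounted back to the present.
    return sum(
        annual_benefit / (1 + discount_rate) ** t
        for t in range(1, delay_years + 1)
    )

# Illustration only: a technology assumed to yield $100B/year in
# benefits, delayed by five years, at a 3% discount rate.
foregone = cost_of_delay(annual_benefit=100e9, delay_years=5)
print(f"Foregone benefits: ${foregone / 1e9:.0f}B")  # ~ $458B
```

On these toy numbers, a five-year delay forgoes roughly $458 billion in discounted benefits. The figure itself is arbitrary; the point is that when the assumed benefits are large, even short delays carry costs on the scale discussed above.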



4 responses to “Harms (Part 3: Delay)”

  1. titotal23

    Enjoyed the post as usual, although this is one of the harms I am least concerned about.

    I worry that just as doomers overstate the harms of AI, anti-doomers are also overstating the potential benefits. I’ll see people say that “AI will solve climate change”, without specifying what, exactly, the AI is going to do that’s not being done already, for a problem that is mostly about politics, funding and logistics.

    It seems to me to be plausible that AI development will hurt the global poor, at least in the short term. First, through its huge carbon footprint accelerating climate change, and second through the automation of low-skilled labour, cutting off one of the few streams of money and investment flowing from the first to the third world.

    I think it’s easy to explain the benefit of electricity or the internet to a poor farmer in the developing world, for health, wealth and education. I struggle to see an equivalent benefit for the AI technology of the near future.

    1. David Thorstad

      Thanks titotal! As usual, it’s good to hear from you.

      I think you are certainly right that many people in the EA-adjacent space overstate the potential benefits of AI. Many folks are quite confident that AI will either kill us or save us (I have sometimes called this a “catastropho-utopian” view), whereas I would like to leave significant room for a variety of more mundane outcomes in between.

      I am also quite worried about the short-term harms of AI, including the harms that you mentioned: climate change and short-term labor market shocks.

      I think that perhaps one way to contextualize this post would be in the context of a discussion with many folks who think that artificial intelligence is likely to be very transformative, and worry that it will kill us. Those folks will probably be less able to say that the potential foregone benefits of artificial intelligence are minimal, since they have quite ambitious expectations for what the future of artificial intelligence will bring.

      It might also help to think about technologies beyond artificial intelligence. For example, many longtermists would like to put the brakes on other areas such as synthetic biology that could have important applications, such as the development and delivery of medical treatments.

      But in general, yes, I think it is important to bear in mind that there is no one harm in this series that will singlehandedly scuttle the case for longtermism. If you want to read this as just one harm among many, and one that some readers will be more concerned about than others, I don’t think that would be unjustified.

  2. Jason

    Concerns about active approaches to SETI have been around for a while within the mainstream of thought — e.g., this 2006 opinion piece in Nature, https://www.nature.com/articles/443606a, and this 2015 reference to concerns raised by Hawking and to a panel discussion at an AAAS conference: https://phys.org/news/2015-02-cosmos-risky.html.

    I’ve always thought those concerns quite valid: for in-person contact, the alien civilization would need to be much more advanced than we are. If we use known human civilizations as a reference point, there are many I would not want to encounter as a member of a technologically underdeveloped civilization!

    1. David Thorstad

      Thanks Jason!

      To be honest, I also share some part of Tyler’s concern about the SETI program. More generally, I don’t think that it’s unreasonable to be concerned about it; Tyler is generally a reasonable guy.

      Something that is very important to me in this series is to separate the question of whether or not a policy is justified from the further question of what that policy will cost. In this series, I’m only interested in the question about costs. I think that many longtermist policies have very real costs that are not often discussed, and that it can be valuable to discuss and measure these costs even for those who ultimately think that the costs are worth paying.

      One thing that readers might take away from the discussion of SETI is a broader point that it’s not simply AI which longtermists are aiming to slow down. Many longtermists aim to slow down a broad range of technologies, and that increases the cost of delayed technological development.
