Exaggerating the risks (Part 10: Biorisk: More grounds for doubt)

Of course, it is prudent to take measures to prepare against the possibility of a biological weapons attack and concerted action across a policy continuum that extends from prevention through preparedness to consequence management is necessary. However … any bioterrorism attack will most likely be one using a pathogen strain with less than optimal characteristics disseminated through crude delivery methods under imperfect conditions, and the potential casualties of such an attack are likely to be much lower than the mass casualty scenarios frequently portrayed. This is not to say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.

Jefferson, Lentzos and Marris, “Synthetic biology and biosecurity: Challenging the ‘myths’”.

1. Recap

This is Part 10 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Part 9 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0% and 3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.

Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach is to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Then I will review the arguments for high levels of existential biorisk that have been provided by effective altruists and argue that they are insufficient.

Part 9 began by reviewing three preliminary reasons why many are skeptical of existential biorisk claims: that engineering biological events that lead to existential catastrophe is a very difficult problem; that biological events have rarely led to mammalian extinction; and that producing all of the needed mutations at once is so difficult that a well-funded (alleged) Soviet bioweapons program didn’t come close.

Today’s post continues this theme by looking at four more preliminary reasons for skepticism: the possibility of effective public health responses; a scarcity of motivated omnicidal bioterrorists; expert skepticism about existential biorisk; and challenges related to the inscrutability of future biorisk.

2. Public health response

One of the strongest lessons from the COVID-19 pandemic is that simple non-pharmaceutical interventions, such as masking, social distancing, and travel restrictions, can be highly effective if governments have the will to enforce them. For example, a study in Nature of early COVID measures in mainland China found that non-pharmaceutical interventions reduced cases by a factor of 67 by the end of February 2020. Here are their estimates for the course of the outbreak in Wuhan (b) and Hubei (f) with and without intervention.

That is not to say that non-pharmaceutical interventions are a panacea. They work less well when they are not consistently enforced. And in the case of China, they ultimately failed to counteract the effects of insufficient vaccination among the elderly. But used properly, non-pharmaceutical interventions are a remarkably effective tool in slowing the spread of disease.
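
To see why transmission-reducing measures can change case counts so dramatically, here is a minimal toy SIR simulation (a rough sketch only; the population size, seeding and transmission rates below are illustrative assumptions, not parameters from the Nature study). Sustained NPIs aim to push transmission below the rate of recovery, and once they do, an epidemic that would otherwise sweep through a city instead fizzles out:

```python
# Toy discrete-time SIR model. Illustrative only: population size, seeding,
# and transmission rates are assumed values, not figures from the cited study.

def cumulative_infections(beta, gamma=1/7, population=11_000_000, seed=100, days=120):
    """Return the total number of people infected over the simulated period."""
    s, i, r = population - seed, float(seed), 0.0
    for _ in range(days):
        new_infections = beta * s * i / population   # new cases today
        new_recoveries = gamma * i                   # recoveries today
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return population - s  # everyone who has left the susceptible pool

# beta / gamma is the reproduction number R0: roughly 2.8 unmitigated, 0.7 with NPIs.
without_npis = cumulative_infections(beta=0.40)
with_npis = cumulative_infections(beta=0.10)

print(f"Cumulative infections without NPIs: {without_npis:,.0f}")
print(f"Cumulative infections with NPIs:    {with_npis:,.0f}")
```

The exact numbers mean nothing; the point is only that cumulative infections are extraordinarily sensitive to whether the effective reproduction number is held above or below one, and that is precisely the lever that NPIs pull.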

Another novel feature of the COVID-19 pandemic is that, for the first time, an active pandemic was brought to an end through real-time development and deployment of vaccines. Although vaccines took over a year to bring to market, serviceable vaccines were developed quite quickly, and a society facing existential catastrophe might well bring a serviceable vaccine to market far faster, on a scale of months or even weeks; advances in medical technology might bring further improvements beyond this. Faster timelines would be especially likely if societies were willing to loosen restrictions on human trials and regulatory approval, as a society facing existential catastrophe might well do.

It is hard to exaggerate how novel and exciting this development is. Historically, societies have been unable to develop novel pharmaceutical responses to pandemic pathogens in time to affect the course of an initial pandemic. This would be deeply frightening in the face of potentially sophisticated engineered pathogens. But it is not just our ability to engineer pathogens that is advancing. We are also increasingly gaining the ability to develop effective vaccines to counter novel pathogens.

Would-be omnicidal bioterrorists need to develop a pathogen that is not only in principle able to destroy all living humans, but which will succeed in practice despite our best efforts to stop it. Humans are not sitting ducks, and we are unlikely to take a world-ending pandemic complacently.

3. Motivations

Here is a deceptively simple question. Suppose that it were to become technologically feasible for sophisticated groups of bioterrorists to create extinction-level biothreats. Who would do such a thing?

It may seem obvious that there are plenty of sophisticated bioterrorists clamoring for the chance to eradicate humanity. But in fact, history holds few if any examples of sophisticated bioterrorists bent on destroying humanity.

We saw in Part 2 of the series on epistemics that when pressed to produce an example of a group of sophisticated, omnicidal bioterrorists, effective altruists almost inevitably appeal to Aum Shinrikyo. For example, here is Toby Ord in The Precipice:

People with the motivation to wreak global destruction are mercifully rare. But they exist. Perhaps the best example is the Aum Shinrikyo cult in Japan, active between 1984 and 1995, which sought to bring about the destruction of humanity. They attracted several thousand members, including people with advanced skills in chemistry and biology. And they demonstrated that it was not mere misanthropic ideation. They launched multiple lethal attacks using VX gas and sarin gas, killing 22 people and injuring thousands. They attempted to weaponise anthrax, but did not succeed. What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organisation or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?

Only, we saw that this is simply not true. Aum Shinrikyo was not omnicidal. Aum Shinrikyo aimed to produce an event that they themselves would survive. From there, they aimed to rebuild Japan and later the entire world. They prepared in detail for this eventuality, including tasking their qualified medical staff with drawing up plans for survival, as well as outlining a strategy for the creation of ‘lotus villages’ that would survive an attack.

Could history hold another example of sophisticated bioterrorists with omnicidal intentions? Experts are skeptical. Kathleen Vogel and Sonia Ben Ouagrham-Gormley explain in a study of emerging biosecurity threats that:

Since the 1980s, there have been a variety of actors “crying wolf” about how states and terrorists will adopt and use genetic engineering techniques for harm. Yet we have little empirical data over the past 30 years that show a specific state or terrorist group using any of these new biotechnological innovations to create biological weapons.

That skepticism is echoed elsewhere. A former bioweapons inspector and diplomat interviewed for BioSocieties explains:

You’ve got to start thinking about for whom this would be a valid weapon of choice. I think for the small-scale attack it would still be of interest to terrorists because it would be a small-scale attack with big impact. But if you start talking about the stuff that can cause huge damage – live, highly transmissible, highly pathogenic organisms – there’s very few actors I can think of who would want to use that. I think you have to come up with hypothetical actors rather than real-world actors.

Importantly, this expert does not deny the likelihood of actors being motivated to carry out small-scale attacks. But he is highly skeptical that many actors would want to destroy all of humanity.

Insofar as effective altruists struggle to produce a single, credible historical example of a sophisticated omnicidal bioterrorist group, and insofar as experts describe such actors as “hypothetical actors rather than real-world actors”, there are serious questions to be raised about the assumption that sophisticated actors in this century will be motivated to use biological weapons to bring about an existential catastrophe.

4. Expert skepticism

A continual theme in discussions of existential risk is that leading experts are highly skeptical of the existential risk estimates put forward by effective altruists. I hope to discuss the role of attitudes towards expertise in my series on epistemics. Experts need to be taken seriously: when the experts think your views are highly implausible, that’s usually because the views are highly implausible. This is doubly true when the banner of infohazards is used to obviate the need for detailed public argument against expert consensus.

A group of health researchers from King’s College, London describe the statement that “terrorists want to pursue biological weapons for high consequence, mass casualty attacks” as a “myth”, writing:

While most leading biological disarmament and non-proliferation experts believe that the risk of a small-scale bioterrorism attack is very real and very present, they consider the risk of sophisticated large-scale bioterrorism attacks to be very small.

The claim here is not merely that existential catastrophe is unlikely to result from bioterrorism. The study’s authors rather claim that most experts consider any sophisticated large-scale bioterrorist attack to be unlikely.

One of the authors of that study, Filippa Lentzos, followed up by convening an expert panel on risks posed by bioweapons. The experts unanimously echoed the view that high-casualty attacks are extremely unlikely, and that even mainstream political actors tend to greatly exaggerate the risks of such attacks. Speaking for the consensus, she put it to them that:

You all seem to agree that what we should be most worried about is a small-scale bioterrorism attack – that the likelihood of that is much higher – where we’re talking about deaths in the tens and not in the thousands. Your main concern, then, is not the same as the Obama administration’s: major biological weapons attacks on the world’s major cities that cause “as much death and economic and psychological damage as a nuclear attack”.

A former bioweapons inspector and microbiologist responded straightaway: “No, definitely not.”

In another paper, David Sarpong and colleagues lament “Sensational and alarmist headlines about DiY science” which “argue that the practice could serve as a context for inducing rogue science which could potentially lead to a ‘zombie apocalypse’.”

Experts widely believe that existential biorisk in this century is quite low. The rest of us could do worse than to follow their example.

5. Inscrutability of future biorisks

In this series, I have discussed a pattern that I call a regression to the inscrutable.

Effective altruists began by discussing risks such as asteroids, nuclear war and super-volcanoes that are to some extent tractable using traditional empirical methods. Careful research revealed that most of these risks were not very high.

As a result, research shifted to a range of increasingly esoteric and inscrutable threats. The frustration for critics of effective altruism is that it is agreed on all sides that these threats are highly inaccessible to traditional, or even slightly nontraditional, forms of empirical research. As we regress further into the realm of inscrutable threats, there is increasingly little that can be said using traditional methods. And critics of effective altruism, who are often skeptical of more speculative methods, think that should be the end of the matter.

So far, we have seen a regression to the inscrutable in at least two places:

  • Climate risk: Effective altruists place substantial emphasis on the very least scrutable climate risks, namely moist and runaway greenhouse effects. (Sub-series: Climate risk).
  • AI risk: Effective altruists place substantial emphasis on concerns about existential catastrophe from future artificial systems. Developments of this scale in AI are, by nature, very difficult to get a grip on using traditional methods. (Sub-series: AI risk).

This regression to the inscrutable contrasts with effective altruists’ early emphasis on scrutability, but also makes it difficult to press the case for inscrutable risks in a way that will be convincing to a wide audience.

Existential biorisk is relatively inscrutable in at least two ways. First, it relies on speculative developments in future technology for engineering and distributing pathogens. Everyone agrees that existential biorisk is not high now, because we do not currently have the technology to design and distribute a pathogen that could lead to existential catastrophe. However, we are asked to believe that biorisk will be much higher in the future, due to developments in genetic engineering and other technologies. It is hard to get a handle on what these developments might be, since they are by nature speculations about future technologies.

A second reason why existential biorisk is relatively inscrutable is that effective altruists are often hesitant to share arguments for existential biorisk due to concern about infohazards. This means that even if, in principle, it were possible to scrutinize claims about existential biorisk, it will be very hard in practice for those asked to believe these claims to subject them to rigorous scrutiny.

Concerns about scrutability underlie many experts’ resistance to existential biorisk claims, because there is an abundant record of far-future technological predictions landing far from the mark. Let’s return to the study of emerging biosecurity threats by Kathleen Vogel and Sonia Ben Ouagrham-Gormley cited earlier in this post. What is driving Vogel and Ben Ouagrham-Gormley’s skepticism? In large part, they are worried about a track record of mistaken predictions about far-future technology. Vogel and Ben Ouagrham-Gormley illustrate their skepticism by considering previous speculation about the future use of nuclear energy:

In a 1955 issue of Ladies’ Home Journal, it was touted that in the near future, nuclear energy would create a world “in which there is no disease … where hunger is unknown … where food never rots and crops never spoil … and routine household tasks are just a matter of pushing a few buttons … a world where no one stokes a furnace or curses the smog.” “Imagine,” the article continued, “the world of the future … the world that nuclear energy can create for us.” As historian of technology Stephen Del Sesto writes, these thoughts were not those of an overzealous journalist but of Harold E Stassen, President Dwight D Eisenhower’s special assistant on disarmament. Other prominent experts believed that “atomic batteries would power automobiles, washing machines, and even tiny wrist-watch radios.” At that time, optimism about nuclear energy was shared by many Americans after decades of technological enthusiasm inculcated by a variety of American popular culture and press accounts and scientific, academic, and government pronouncements — these technological dreams about nuclear energy captured the U.S. popular imagination for nearly 30 years. Del Sesto writes, “these forces reinforced one another making the dreams appear more plausible and closer at hand than they really were.” What was left out of these utopian imaginaries was any consideration of how a variety of social, economic, organizational, and political realities might shape the development, adoption, and use of nuclear energy that would hinder these dreams from becoming reality. It is quite easy to dream up a large number of fanciful imaginaries for any given technological innovation — the critical question, then, is how any of these imaginaries connect with on-the-ground reality as a technology develops?

This post began with a passage from work by health researchers at King’s College, London, who, as we saw, urged caution in the interpretation of speculative reasoning about future technologies:

[We do not] say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.

It is very easy for speculation about relatively inscrutable future technological developments to become highly optimistic and utopian, and equally easy for this speculation to become highly pessimistic and dystopian. Like Vogel and Ouagrham-Gormley, and like Jefferson and colleagues, I hope that reflection on past instances of overly optimistic and pessimistic speculation about inscrutable future technological developments may encourage moderation in our predictions about future developments in biology and related areas.

6. Taking stock

Today’s post discussed four additional reasons for skepticism about high estimates of existential biorisk in this century.

First, biological threats can be met with effective public health responses. These may be as simple as the now-familiar package of non-pharmaceutical interventions such as masking, social distancing and travel restrictions. But they can also be as complex as the increasingly rapid and sophisticated techniques for bringing vaccines to market in time to counteract a novel pandemic.

Second, it is hard to find credible examples of sophisticated actors with both the capability and the motivation to carry out omnicidal attacks. Many groups wish to cause harm to some fraction of humanity, but far fewer groups wish to kill all of us.

Third, experts are generally skeptical of existential biorisk claims. Experts do not doubt the possibility or seriousness of small-scale biological attacks, and indeed such attacks have occurred in the past. But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe.

Fourth, the biorisks being appealed to are relatively inscrutable in two ways: they involve speculation about far-off future technologies, and much of this speculation is hidden from public view. This makes it difficult to analyze the case for existential biorisk, and also fits an emerging pattern of appeal to inscrutable risks. It is as though the weightiest risks always seem to lie just near enough in our temporal and technological future to make them intelligible, but not so near that they could be easily refuted by standard forms of analysis, scientific or otherwise.

As always, these are not decisive refutations of the possibility of existential biorisk. They are rather preliminary reasons to be skeptical of high estimates of existential biorisk. Together, these and other reasons for preliminary skepticism will combine to raise the evidential bar for establishing high levels of existential biorisk. Future posts will argue that this bar has not been cleared.

Comments


  1. Vasco Grilo

    Nice post, David!

    “A continual theme in discussions of existential risk is that leading experts are highly skeptical of the existential risk estimates put forward by effective altruists. I hope to discuss the role of attitudes towards expertise in my series on epistemics. Experts need to be taken seriously: when the experts think your views are highly implausible, that’s usually because the views are highly implausible. This is doubly true when the banner of infohazards is used to obviate the need for detailed public argument against expert consensus.”

    I just wanted to note the above arguably does not apply to AI existential risk (https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/):
    “The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents were substantially more concerned: 48% of respondents gave at least 10% chance of an extremely bad outcome. But some much less concerned: 25% put it at 0%.”

    The timeline referring to “long-run effect” is unclear, so from the above it may still be the case that AI risk is not immediately pressing.

    The XPT tournament asked about extinction risk before 2100, and experts put it at 3% for AI and 1% for engineered pathogens (https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64abffe3f024747dd0e38d71/1688993798938/XPT.pdf). For reference, here is how experts were recruited:
    “To recruit experts, we contacted organizations working on existential risk, relevant academic departments, and research labs at major universities and within companies operating in these spaces. We also advertised broadly, reaching participants with relevant experience via blogs and Twitter. We received hundreds of expressions of interest in participating in the tournament, and we screened these respondents for expertise, offering slots to respondents with the most expertise after a review of their backgrounds. We selected 80 experts to participate in the tournament. Our final expert sample (N=80) included 32 AI experts, 15 “general” experts studying long-run risks to humanity, 12 biorisk experts, 12 nuclear experts, and 9 climate experts, categorized by the same independent analysts who selected participants. Our expert sample included well-published AI researchers from top-ranked industrial and academic research labs, graduate students with backgrounds in synthetic biology, and generalist existential risk researchers working at think tanks, among others. According to a self-reported survey, 44% of experts spent more than 200 hours working directly on causes related to existential risk in the previous year, compared to 11% of superforecasters. The sample drew heavily from the Effective Altruism (EA) community: about 42% of experts and 9% of superforecasters reported that they had attended an EA meetup. In this report, we separately present forecasts from domain experts and non-domain experts on each question.”

    If 42% of experts attended an EA meetup, the group of experts is clearly not representative of the views of the respective fields as a whole.

    1. David Thorstad

      Thanks Vasco!

      I’m sorry for the very late response. I’ve been overwhelmed with work since I started my new job, and I’m really struggling to keep up with all of my responsibilities.

      You’re exactly right to draw attention to the composition of the expert sample used in the XPT. It is unlikely that 42% of biosecurity experts have attended an EA meetup, so the inclusion of so many EA-adjacent experts suggests a sample bias that may lead the experts in question to overstate existential risks.

      I’m hesitant to criticize this study too heavily, since it was led by an eminent scientist (Philip Tetlock) who has done pathbreaking work in forecasting. At the same time, I share your disappointment with this and several other features of the XPT, and I really would like to get a more detailed description of the credentials and affiliations of the experts surveyed in this study.

      You’re also right to draw attention to the fact that many groups express nontrivial concern over existential risk from artificial intelligence. On some descriptions, this includes relevant experts. I’m not always sure what to say about that. I think the best thing to say is probably that the notion of expertise on AI safety is deeply contested, and it’s not clear to me that there is such a thing as an expert in the subject.

      When we look at established fields such as biosecurity, there are clear markers of expertise complete with long records of publication, prediction, research training, and policymaking on relevantly similar matters. By contrast, fields such as what effective altruists would call AI safety are very new, and most of those who claim expertise in the field lack similar records of field-specific publication, prediction, training, and policymaking experience.

      Moreover, it’s also possible that predicting developments in advanced artificial intelligence is a very difficult thing to do. If that is right, then we might not want to assign too much weight to even the best-credentialed forecasters, simply because the difficulty of the problem may overwhelm any benefits provided by their credentials.

      Together, the lack of clear expertise combined with the difficulty of predicting AI risk should probably lead us to be cautious about the results of expert opinion polls on AI risk. I think that caution might be strengthened when we recall that opinions are quite new and emotional on these issues, and also that effective altruists have had a sizeable role in shaping opinions within the field.

      At the same time, it is important to note that opinion polls on AI risk often give estimates that are higher than I would like them to be. I think those estimates are too high, so I find myself arguing that many people are wrong. That’s not a position I would like to be in. However, we had a saying at Harvard: ‘the shoe always pinches somewhere’. It’s best to be honest about where the shoe pinches for your position, and this is one of the places where the shoe pinches. I hope it doesn’t pinch too much!

  2. John Graham Halstead

    Fwiw, I’m not sure some of those expert claims are in tune with the realities of the declining costs of gene synthesis, which EA bio is especially concerned about. It cost $30k in 2020 to create COVID from scratch, and it took about three weeks for skilled virologists. The cost is lower for flu viruses. Over the last 20 years, the cost of gene synthesis has fallen by a factor of 1000. So, on current trends, in the next 10 years, one might expect it to cost (much) less than $1000 to create viral weapons of mass destruction that can do similar damage to nuclear weapons, and for this capacity to be accessible to at least thousands of people across the world. At present, gene synthesis is unregulated, and you can easily buy the required kit and circumvent safeguards.
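
    As a rough back-of-the-envelope version of that extrapolation (assuming the decline is a smooth exponential that simply continues; the $30k and thousandfold figures are the ones above):

    ```python
    # Back-of-the-envelope extrapolation, assuming the past 20-year cost trend
    # is a smooth exponential decline that simply continues into the future.
    cost_2020 = 30_000                                # USD, rough cost cited above
    fall_over_20_years = 1_000                        # ~1000x cheaper over 20 years
    annual_factor = fall_over_20_years ** (1 / 20)    # ~1.41x cheaper per year

    cost_in_10_years = cost_2020 / annual_factor ** 10
    print(f"Implied cost a decade from now: ~${cost_in_10_years:,.0f}")   # roughly $950
    ```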

    I think the dismissals of these sorts of risks from some people in biosecurity are fair now, but not about the emerging risks in the next 10-20 years, which in my view are disturbingly large.

    1. David Thorstad

      Thanks John!

      It’s certainly important to emphasize developments in synthetic biology as potential risk factors. As you mention, experts are acutely aware of these developments, but are typically much less concerned about them than effective altruists are.

      It is open to effective altruists to construct arguments linking those developments to near-term existential risks. Did you have a more detailed argument in mind?

      Some of what you suggest sounds like you may be concerned about catastrophic rather than existential risks. For example, you say that “in the next 10 years, one might expect it to cost (much) less than $1000 to create viral weapons of mass destruction that can do similar damage to nuclear weapons, and for this capacity to be accessible to at least thousands of people across the world.” For my own part, I am much more concerned about catastrophic biorisk than I am about existential biorisk. I’m not even sure that one must appeal to bioterrorism here. The world is ill-prepared for another pandemic, and quite likely to suffer catastrophe if one comes for any reason.

      1. David Mathers

        You seem to be hinting that you are concerned about catastrophic risk but not X-risk, but also putting a lot of weight on expert opinion. But the experts you’re quoting seem to be dismissing catastrophic risk, not just existential risk: ‘experts unanimously echoed the view that high-casualty attacks are extremely unlikely’. Surely you should either conclude that the experts’ opinion shouldn’t get *that* much weight because they are making errors of reasoning, or that, in fact, your own level of concern about catastrophic risk is too high. (Obviously you can do a mixture of both rather than all of one or the other.) The fact that you continue to be concerned about catastrophic risks suggests that you lean towards the “experts are wrong, or at least only thinking about the current risk level, not the future” response. But on that view, you shouldn’t be telling other people to give all that much weight to the experts on whether X-risk is high either.

        Or did you just mean to convey that while you think catastrophic risk is highER than X-risk, you still think it is very low?

        (For the record, I think it is highly unlikely anyone will ever kill all humans with bioterrorism.)

        1. David Thorstad

          Thanks David!

          Everyone should be concerned about biological catastrophes. The world just lived through a catastrophic pandemic, and we aren’t exactly on track to prevent another. The importance of preparing for catastrophes such as pandemics, as well as the tendency of many actors to under-prepare for these catastrophes, are themes that effective altruists correctly stress.

          Arguments for existential biorisk (insofar as they are spelled out) tend to differ from the above in two ways. First, they are concerned with scenarios in which enough damage is done to threaten the survival of humanity or our potential for desirable future development. This is much more extreme than a scenario in which tens, or even hundreds of millions are killed in a pandemic. And second, these arguments are generally concerned either with intentional misuse of biological substances, or at least with laboratory leaks of extremely dangerous substances that nobody had any business synthesizing.

          The experts cited were expressing strong skepticism about the second of these assumptions. They don’t see much evidence that malicious actors will be in a position to inflict existential catastrophe on the world anytime soon. That does, of course, leave open the possibility of laboratory leaks leading to existential catastrophe. My impression is that most relevant experts think this is implausible enough that they have not seen the need to address it.

          All of this is perfectly compatible with concern about catastrophic biorisk, for two reasons. First, you can be concerned about relatively mundane sources of biorisk such as the COVID-19 pandemic without being concerned about sophisticated bioterrorists doing us in. These are almost entirely distinct threats, and there’s no reason why we cannot take one much more seriously than the other.

          Second, you might think that it’s a lot easier to kill many people (even millions) than to do enough damage to destroy humanity or permanently curtail our potential for desirable future development. Many of the concerns about existential biorisk raised so far in this series are directed to just this point. In Part 9, we saw that engineering biological events that lead to existential catastrophe is a very difficult problem; that biological events have rarely led to mammalian extinction; and that producing all of the needed mutations at once is so difficult that a well-funded (alleged) Soviet bioweapons program didn’t come close. In Part 10, we discussed the possibility of effective public health responses and a scarcity of motivated omnicidal bioterrorists.

          All of these considerations may be taken to tell against the likelihood of near-term existential catastrophe from biological means, even while retaining nontrivial confidence in the likelihood of catastrophic events. After all, it’s not nearly so difficult to engineer catastrophic pathogens; biological events have certainly led to catastrophic levels of death among mammals; producing a few nasty mutations isn’t as hard as producing all of them at once; public health responses might not prevent catastrophic outcomes; and a number of terrorists would very much like to harm particular groups of humans.

          1. David Mathers

            I guess the main place I disagree with that is just that I don’t see how to read the passage I quoted in a way that makes it consistent with your relatively high openness to the possibility of a deliberately caused catastrophic pandemic. I actually agree that X-risk here is low.

            1. David Thorstad

              Thanks David!

              I’m not especially open to the possibility of a near-term, deliberately caused catastrophic pandemic.

      2. John Halstead

        Thanks for the reply David.

        Existential risks are broader than just extinction. In a world in which it is easy for thousands of people to create weapons of mass destruction, it’s difficult to see how we could maintain civilisation for the long term, even if we would not go extinct. We would be in a vulnerable world in which the only way to stop recurrent catastrophe would be cessation of trade and travel, global governance, surveillance and preventive policing. This would increase the risk of global totalitarianism, which is itself an existential risk.

        1. David Thorstad

          Thanks John! It’s certainly true that existential risk goes beyond extinction. Effective altruists are welcome to provide detailed arguments for other risks that might result from biological hazards.

          For example, you are certainly correct that policies for combatting the threat of bioterrorism tend to be frighteningly right-wing: this is, as I have stressed elsewhere, one of the costs that might be incurred by efforts to drive down biological risks by a very large fraction of their previous value. It is certainly possible that such policies could be friendly to totalitarian and other right-wing forms of government.

          When effective altruists talk about totalitarianism as an existential risk, they are concerned with the possibility of a permanently stable totalitarian government that seizes control over the better part of humanity and never lets go. That is quite a strong scenario to envision. Were there any arguments that make you think such a scenario is especially likely? Detail is important here.
