Let us say that AI is artificial intelligence of human level or greater …. [and] AI+ is artificial intelligence of greater than human level … and AI++ is AI of far greater than human level … Then we can put the argument for an intelligence explosion as follows. 1. There will be AI+. 2. If there is AI+, there will be AI++. 3. Therefore, there will be AI++.
David Chalmers, “The singularity: A philosophical analysis”
1. Some paper updates
The paper on which this series is based, “Against the singularity hypothesis”, is now forthcoming in a special issue of Philosophical Studies on AI Safety. I’ll share a link to the published version as soon as it is out.
In the meantime, I am starting to put together some resources to make the paper more accessible. If you’d rather hear me talk about the paper, here is a talk I gave at the MINT Lab at ANU (and here is the accompanying handout).
I will try to share more resources as they become available. If you would like to laugh at my design abilities, here is an infographic summarizing the preliminary case for skepticism about the singularity hypothesis. Readers will have a further opportunity to mock my design abilities in Section 5 of this post.
2. Introduction
This is Part 4 of a series based on my paper “Against the singularity hypothesis”.
The singularity hypothesis begins with the assumption that artificial agents will gain the ability to improve their own intelligence. From there, the singularity hypothesis holds that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion: an event in which artificial agents rapidly become orders of magnitude more intelligent than their human creators.
Part 1 introduced and clarified the singularity hypothesis. Part 2 and Part 3 gave five preliminary reasons to doubt the singularity hypothesis. Together, these five reasons for doubt place a strong burden on defenders of the singularity hypothesis to provide significant evidence in favor of their view. The next task is to argue that this burden has not been met.
I take up this task by looking at the two leading philosophical arguments for the singularity hypothesis: David Chalmers’ “The singularity: A philosophical analysis” (today) and Nick Bostrom’s Superintelligence (next time). I argue that these authors do not provide enough evidence to overcome the case for skepticism.
3. The singularity: A philosophical analysis
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at NYU. Chalmers is also Distinguished Honorary Professor of Philosophy at the Australian National University and co-director of the PhilPapers Foundation, one of the best resources for philosophy since the invention of libraries.
Chalmers’ paper, “The singularity: A philosophical analysis”, is widely regarded as among the most rigorous philosophical presentations of the singularity hypothesis, and rightly so. I’ll focus today on the first half of the paper, which argues for the singularity hypothesis. The second half of the paper examines some implications of the singularity hypothesis, including existential risk, AI safety, and mind uploading. It breaks my heart to ignore the second half of the paper, particularly the discussion of uploading, which is excellent. But one must stay focused, and our aim today is to figure out what can be said in favor of the singularity hypothesis.
Chalmers’ argument begins with three definitions. These definitions are phrased in terms of the intelligence of artificial agents, although Chalmers helpfully suggests a formal strategy for working around this assumption if needed. (I won’t discuss that strategy here, but it is covered in Section 3 of Chalmers’ paper).
Let AI be an artificial system at least as intelligent as an average human. Let AI+ be an artificial system more intelligent than the most intelligent human, and let AI++ (superintelligence) be AI far beyond human level. (Chalmers suggests we might take AI++ to be “at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse.”)
With these definitions in place, Chalmers’ basic argument is the following:
- Equivalence premise: There will be AI (before long, absent defeaters).
- Extension premise: If there is AI, then there will be AI+ (soon after, absent defeaters).
- Expansion premise: If there is AI+, then there will be AI++ (soon after, absent defeaters).
It follows that, absent defeaters, there will soon be AI++, which is (roughly) what the singularity hypothesis claims.
4. The proportionality thesis
There is much to discuss here. But I want to focus on the expansion premise. After all, my strategy is to question the singularity hypothesis’ ambitious growth claims, and it is the expansion premise that encodes the most ambitious growth claims.
Chalmers argues for the expansion premise on the basis of the proportionality thesis: “increases in intelligence … always lead to proportionate increases in the capacity to design intelligent systems.” For example, if a machine becomes 10% more intelligent, then it can design systems at least 10% more intelligent than those it would previously have been able to design.
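To fix ideas, the thesis can be put formally as follows (a sketch in my own notation, not Chalmers’ formalism):

```latex
% A sketch of the proportionality thesis (my notation, not Chalmers' formalism).
% Let d(I) be the greatest intelligence of a system designable by a system of
% intelligence I. Proportionality says design capacity scales at least linearly
% with intelligence:
\[
\frac{d\big((1+\delta)\,I\big)}{d(I)} \;\geq\; 1 + \delta
\qquad \text{for all } \delta > 0.
\]
% With delta = 0.1: a system 10% more intelligent can design systems at least
% 10% more intelligent than those its predecessor could design.
```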
In the next section, I’ll question the truth of the proportionality thesis. But first: what is the connection between the proportionality thesis and the singularity hypothesis? It turns out that the proportionality thesis delivers at most a watered-down and highly unorthodox version of the singularity hypothesis.
We saw in Part 1 of this series that the singularity hypothesis claims that machine intelligence will grow at an accelerating rate. We also saw that it is important to be clear about what is meant by accelerating growth. We saw, for example, that on most understandings, exponential growth is not accelerating growth:
[Figure: an exponential growth curve]
It is natural to regard exponential growth as constant growth. After all, the defining fact of the exponential function is that it is its own derivative, so its growth rate is always proportional to its current level. Part 1 distinguished two ways to measure growth: relative growth (percentage change in intelligence per unit time) and absolute growth (absolute change in intelligence per unit time). We saw that exponentials show constant relative growth, though accelerating absolute growth.
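Spelled out, with $I(t)$ for intelligence at time $t$:

```latex
% Exponential growth: I(t) = I_0 e^{rt}.
% Relative growth is constant:
\[
\frac{I'(t)}{I(t)} = \frac{r I_0 e^{rt}}{I_0 e^{rt}} = r,
\]
% while absolute growth accelerates:
\[
I'(t) = r I_0 e^{rt}, \qquad I''(t) = r^2 I_0 e^{rt} > 0.
\]
```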
We also saw that talk of accelerating absolute growth is uncommon and misleading. One way to see this is that exponentials may grow much more slowly than defenders of the singularity hypothesis want: for example, if intelligence grew exponentially at a rate of 2 percent per year, then it would take almost 350 years for intelligence to grow three orders of magnitude. For exactly this reason, sustained exponential growth in quantities such as GDP and population, while impressive, is not naturally compared to the discontinuity induced by crossing the event horizon of a black hole. Another way to see this is that even functions which show diminishing relative growth can show increasing absolute growth: if intelligence grows 10% one year and 9.5% the next year from a base of, say, 100 units, then the first year intelligence grows by 10 units and the next year intelligence grows by 10.45 units. That isn’t the kind of accelerating growth we likely had in mind.
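For readers who want to check the arithmetic, here is a quick verification (using only the figures in the text):

```python
import math

# How long does 2% annual exponential growth take to multiply intelligence
# a thousandfold (three orders of magnitude)?
years = math.log(1000) / math.log(1.02)
print(f"{years:.1f} years")  # ~348.8 -- almost 350 years

# Diminishing relative growth with increasing absolute growth:
# 10% growth from a base of 100 units, then 9.5% growth from the new base of 110.
first_year_gain = 100 * 0.10              # 10.0 units
second_year_gain = (100 * 1.10) * 0.095   # 10.45 units
print(first_year_gain, second_year_gain)
```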
On the most common (relative) understanding, accelerating growth is hyperbolic growth, which grows without bound in finite time, shooting up toward a vertical asymptote:
[Figure: a hyperbolic growth curve]
Hyperbolic growth is much, much faster than exponential growth. For example, in his book Superintelligence, Nick Bostrom considers a model in which intelligence doubles in 7.5 months, grows a thousandfold within 17.9 months, and approaches infinity in 18 months. We will discuss this model in the next post.
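For concreteness, the simplest hyperbolic model looks like this (a generic illustration, not Bostrom’s exact parameterization):

```latex
% Growth rate proportional to the square of the current level:
\[
I'(t) = k\, I(t)^2 \quad\Longrightarrow\quad I(t) = \frac{I_0}{1 - k I_0 t},
\]
% which grows without bound as t approaches the finite time t* = 1/(k I_0).
```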
What’s the point? Simply this. Even if the proportionality thesis were true, it would ground exponential, rather than hyperbolic, growth. Machine intelligence might grow by, say, 10%, leading to a 10% improvement in the ability to design intelligent systems, leading to 10% growth in the intelligence of the next generation of systems, which in turn improves design ability by a further 10%, and so on. Here we would have exponential growth at a 10% rate per generation.
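Spelling out the recursion (my gloss on the point, not Chalmers’ own presentation):

```latex
% If each generation improves on the last by the same fixed proportion, then
\[
I_{n+1} = (1+\delta)\, I_n \quad\Longrightarrow\quad I_n = (1+\delta)^n\, I_0,
\]
% which is exponential in the number of generations n. Hyperbolic growth would
% require the per-generation multiplier itself to grow without bound -- a
% strictly stronger claim than proportionality.
```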
Just to be clear, nearly everyone except Chalmers understood the singularity hypothesis to involve hyperbolic growth. For example, Ray Solomonoff (1985), like Bostrom, gives an explicit hyperbolic model of the hypothesis. And just about all leading defenders of the singularity hypothesis have expressed the same view.
Let’s start with summaries by people who know what they are talking about. Here is Bostrom’s summary of the singularity hypothesis in the Transhumanist FAQ:
Some thinkers conjecture that there will be a point in the future when the rate of technological development becomes so rapid that the progress-curve becomes nearly vertical. Within a very brief time (months, days, or even just hours), the world might be transformed almost beyond recognition. This hypothetical point is referred to as the singularity. (Bostrom 2003)
Here is Ben Goertzel summarizing the singularity hypothesis for the journal Artificial Intelligence:
The term “Singularity” was introduced in this context by Vernor Vinge, who used it to refer to the point at which scientific and technological progress occur so fast that, from the perspective of human cognition and perception, the rate of advancement is effectively infinite. (Goertzel 2007)
Now it is true that some authors sometimes utter the phrase ‘exponential growth’ in describing the singularity hypothesis. But they do not really mean exponential growth. For example, Ray Kurzweil thinks the singularity hypothesis is a mere instance of a more general law of accelerating returns, on which a great many things show accelerating growth. And while Kurzweil initially says he is after exponential growth, it quickly becomes clear that he is not:
A serious assessment of the history of technology shows that technological change is exponential. In exponential growth, we find that a key measurement such as computational power is multiplied by a constant factor for each unit of time (e.g., doubling every year) rather than just being added to incrementally. Exponential growth is a feature of any evolutionary process, of which technology is a primary example. One can examine the data in different ways, on different time scales, and for a wide variety of technologies ranging from electronic to biological, and the acceleration of progress and growth applies. Indeed, we find not just simple exponential growth, but “double” exponential growth, meaning that the rate of exponential growth is itself growing exponentially. (Kurzweil 2004)
This passage is frustratingly typical of Kurzweil: we have a first (misleading) claim that growth is exponential, then a confusingly phrased restatement that we’re actually after an acceleration in the rate of exponential growth (i.e. super-exponential growth, which Kurzweil calls “double” exponential growth). I’d be happy to provide other supporting passages for readers interested in Kurzweil exegesis, but I don’t enjoy picking through this kind of writing.
I could go on, but I think the point is clear. It is very important to make a single, precise philosophical claim and stick to it. This is important (1) because it’s impossible to argue against a moving target, and (2) because the implications of a claim vary when the claim is changed. Just about everyone understood the singularity hypothesis to involve hyperbolic growth. Chalmers isn’t arguing for hyperbolic growth. That means that Chalmers is (1) making the singularity hypothesis into a moving target, and (2) jeopardizing, or at least requiring new arguments for, the implications that were supposed to follow from the singularity hypothesis.
In the next section, I will argue against the proportionality thesis. But the main thing to say about the proportionality thesis is that, strong as it is, it’s still not strong enough to get the job done.
5. Arguments for proportionality
How does Chalmers argue for the proportionality thesis “that increases in intelligence … always lead to proportionate increases in the capacity to design intelligent systems”? Surprisingly, he doesn’t. Chalmers states the thesis, then shows how it can be formalized and used to derive the singularity hypothesis given other assumptions. But Chalmers doesn’t provide any argument for the thesis until he arrives at what he calls an objection: that readers might not believe the proportionality thesis.
At this point, that is a fairly good objection. We haven’t been given any reason to believe the proportionality thesis. And the proportionality thesis is a very strong claim. For example, in the vocabulary of our discussion of research productivity in Part 2, the proportionality thesis posits constant research productivity, with intelligence regarded both as the research input (the intelligence of the designing system) and the research output (the intelligence of the designed system). We saw in Part 2 that there are good reasons to expect diminishing, rather than constant, research productivity (and certainly not the increasing research productivity needed for hyperbolic growth). We also met four other reasons to worry about ambitious growth claims.
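One simple way to see how productivity regimes map onto growth curves (a sketch in the style of standard growth models, not anything in Chalmers’ paper):

```latex
% Model research productivity as \rho(I) = \rho_0 I^{\beta}, so that
\[
I'(t) = \rho(I)\, I = \rho_0\, I^{1+\beta}.
\]
% \beta = 0 (constant productivity):    exponential growth, I(t) = I_0 e^{\rho_0 t}.
% \beta < 0 (diminishing productivity): sub-exponential (power-law) growth.
% \beta > 0 (increasing productivity):  hyperbolic growth, diverging in finite time.
```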
How does Chalmers reply to the objection that readers might not believe the proportionality thesis? Here is the entirety of Chalmers’ reply, which, as far as I can tell, is the only positive consideration advanced in favor of the proportionality thesis in the paper:
If anything, 10% increases in intelligence-related capacities are likely to lead to all sorts of intellectual breakthroughs, leading to next-generation increases in intelligence that are significantly greater than 10%. Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed (say, the difference between a computer and nothing of importance).
The first sentence is a strengthened restatement of the proportionality thesis, not an argument. This puts the burden of the passage on the second sentence.
The second sentence makes roughly the following argument, based on a single observation about Alan Turing. Early in the history of computing, small increases in the design capacities of agents (say, the difference between Turing’s design capacities and my own) led to large increases in the intelligence of designed systems (say, the difference between Turing’s computer and nothing much). That is, proportionality held in Turing’s day. But if proportionality held in Turing’s day, we should expect proportionality to hold for a long time beyond that, at least long enough to take us to AI++. Call this the observational argument.
There are three problems with the observational argument. First, and most importantly, it is far too short. The singularity hypothesis is an enormous and ambitious claim. We saw in Part 2 and Part 3 of this series that there are at least five good reasons to doubt the singularity hypothesis. We concluded that defenders of the singularity hypothesis need to make a very strong case for the hypothesis in order to overcome the case for skepticism. What we are owed is a research program. What we have been given is a single sentence. That isn’t taking seriously the argumentative burdens facing defenders of the singularity hypothesis, or attempting to shoulder them.
It should not be surprising that a one-sentence argument has gaps. My second and third objections to the observational argument reveal some of the largest gaps. The second objection is locality. The observational argument is local: it tells us about a single point on the growth curve of machine intelligence. But we want to learn about the entire curve. Telling us about the slope of the curve at a single point doesn’t tell us much about the slope of the curve elsewhere.
Moreover, the local point that Chalmers samples has been chosen to be maximally favorable to the singularity hypothesis. After all, Turing lies very close to the beginning of the curve. But opponents of the singularity hypothesis don’t deny that intelligence has sometimes shown accelerating growth. They doubt instead that intelligence will continue to show accelerating growth over many orders of magnitude. Indeed, almost all of the arguments given in Part 2 and Part 3 of this series are reasons to suspect that growth will eventually slow. Research productivity diminishes eventually. Bottlenecks and resource limitations set in eventually. For this reason, if we are to consider local arguments, we should not be moved by Chalmers’ local argument, which samples a point too early in the curve to settle what is at issue.
The third objection is equivocation. The proportionality thesis needs sameness of inputs and outputs. We might say that 10% increases in intelligence lead to 10% increases in the intelligence of resulting systems. Alternatively, we might say that 10% increases in design ability lead to 10% increases in the design ability of resulting systems. But we had better not say that 10% increases in design ability lead to 10% increases in intelligence.
Why not? Well, suppose we have 10% more intelligence than before, and we know that 10% increases in design ability lead to 10% increases in intelligence. What can we conclude? Nothing: the conditional’s antecedent concerns design ability, but all we have gained is intelligence, and no premise converts the one into the other. We are stuck at the second step of recursive self-improvement.
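Put schematically (my own gloss):

```latex
% Recursive self-improvement needs an iterable map from a quantity to itself:
\[
f : \text{intelligence} \to \text{intelligence}, \qquad I_{n+1} = f(I_n).
\]
% The equivocal reading instead delivers
\[
g : \text{design ability} \to \text{intelligence},
\]
% which cannot be iterated: g outputs intelligence but takes design ability as
% input, and no premise converts the one into the other.
```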
But Chalmers states the observational argument equivocally. Chalmers says that Turing had just a bit more design ability than his contemporaries, as a result of which the systems Turing designed were much more intelligent than the systems his contemporaries designed. The input is design ability; the output is intelligence.
You might think this is fixable. Perhaps the datum is that Turing’s superior intelligence led to much greater ability to design intelligent systems? Only, that isn’t clear from the example. Turing was hardly the most intelligent chap around. Are we to conclude that anyone more intelligent than Turing was even more able to design intelligent systems?
Or perhaps the datum is that Turing’s superior design abilities led to much greater design abilities in the resulting systems. But that is doubly unclear from the example. First, it’s not clear that the machines Turing designed had any design abilities at all. And second, you might think that Turing’s success shows that he had more than a small edge over his contemporaries in design abilities.
Summing up, Chalmers doesn’t give much in the way of argument for the proportionality thesis. He does make one observational argument, drawing on a datum about Turing. But this argument is too skimpy to meet the argumentative burdens facing defenders of the singularity hypothesis; it is local rather than global, sampling a particularly unhelpful local point; and it equivocates between intelligence and design ability.
6. Looking forward
In this post, we discussed Chalmers’ argument for the singularity hypothesis. We saw in Section 3 that the argument rests on three premises, and in Section 4 that the key premise, the expansion premise, encodes the most ambitious growth assumptions. We saw that Chalmers argues for the expansion premise on the basis of the proportionality thesis: that increases in intelligence always lead to proportionate increases in the ability to design intelligent systems. We also saw in Section 4 that the proportionality thesis isn’t really what Chalmers needs: the singularity hypothesis has been almost universally understood to posit hyperbolic intelligence growth, whereas the proportionality thesis would ground only exponential intelligence growth.
Even so, the proportionality thesis is a strong claim, and it is worth asking what can be said in favor of it. We saw that Chalmers doesn’t give an extended argument for the proportionality thesis. The longest identifiable argument, the observational argument, rests on a single sentence about Turing. And we saw in Section 5 that the observational argument does not work.
I think we should conclude on this basis that Chalmers has not overcome the case for skepticism about the singularity hypothesis.
Perhaps others have done better? The next post in this series will look at Nick Bostrom’s arguments for the singularity hypothesis in Superintelligence. We will see that Bostrom provides more detailed and empirically grounded arguments than Chalmers does. But his arguments still fall far short of meeting the burden placed upon them.
