Against the singularity hypothesis (Part 1: Introduction)

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

I.J. Good, “Speculations concerning the first ultraintelligent machine”

1. Introduction

This is Part 1 of a series based on my paper “Against the singularity hypothesis”.

The singularity hypothesis begins with the assumption that artificial agents will gain the ability to improve their own intelligence. From there, the singularity hypothesis holds that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion: an event in which artificial agents rapidly become orders of magnitude more intelligent than their human creators.

Beyond this, the singularity hypothesis holds, lies a singularity: a discontinuity in human history, analogous to passing beyond the event horizon of a black hole. From this point onwards, humans will no longer be in control of our own destiny. Rather, our fate rests in the hands of superintelligent artificial agents. If they are benevolent towards humans, we may experience long and happy lives. If they are not so benevolent, we may go the way of the dinosaurs.

In this paper and blog series, I do four things. First, I clarify the contents of the singularity hypothesis. Second, I give preliminary reasons to doubt the truth of the singularity hypothesis. Third, I examine recent defenses of the singularity hypothesis and argue that they are insufficient to overcome the case against the singularity hypothesis. Fourth, I draw implications from this discussion.

Today’s post takes up the first project: clarifying the singularity hypothesis.

2. Three key claims

The singularity hypothesis has passed through many hands, and in the process the term has been used in different ways by different authors. This creates a good deal of definitional ambiguity that is anathema to serious academic discussion.

For the purposes of this paper, we will have to settle on a single definition of the singularity hypothesis and make that definition as precise as we can. The editors of the first scholarly anthology on the singularity hypothesis (Eden et al. 2012) hold that a singularity hypothesis has three components:

First, it specifies a quantity, such as the general intelligence of artificial agents.

Second, it claims that the quantity will experience a sustained period of accelerating growth.

Third, it claims that accelerating growth will lead to a fundamental discontinuity in human history.

Specifying the relevant quantity, growth assumptions, and envisioned discontinuity will help us to zero in on the version of the singularity hypothesis that interests me. I have tried to do this in a way that stays close to many classic discussions. Because the term ‘singularity hypothesis’ has been used in different ways by different authors, not all views will be perfectly captured by the discussion below. This is understandably frustrating, but I will not take the blame for this situation. I have little love for ambiguous terms. In this case, I did not cause the existing definitional ambiguity, and I take my duties to be exhausted by resolving the pre-existing ambiguity in a reasonably faithful way.

3. Which quantity?

The quantity that interests me is the general intelligence of artificial agents. This is the quantity at issue in many classic discussions, such as the seminal statement of the singularity hypothesis by I.J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.

What do I mean by the general intelligence of artificial agents? Well, it’s not up to me to clarify the term. I will assume that we know already what is meant by ‘general intelligence’, since any ambiguity in the classic statement of the singularity hypothesis could only help me.

I have found that while many readers are comfortable speaking of the general intelligence of artificial agents, a significant minority of readers express doubts about this notion. Some doubt the cogency of the notion: what exactly do we mean when we speak about general intelligence? Others doubt the measurability of this notion. For example, they may think that it only makes sense to speak of general intelligence on an ordinal (rank-ordered) scale, rather than a cardinal (numerical) scale. A third group thinks that general intelligence is both cogent and cardinally measurable, but doubts that machines could ever possess meaningful levels of general intelligence.

I have to confess that I am not much moved by these doubts. The notion of general intelligence has received at least adequate scientific scrutiny in the psychological study of intelligence, which is by now a well-established subfield of psychology and cognitive science. As for cardinalization, I haven’t seen any especially pressing reasons to doubt that intelligence is cardinally measurable. And I don’t have much sympathy for views on which machines could not, in principle, possess meaningful levels of general intelligence. We might, for example, literally build a human brain neuron-by-neuron out of nonbiological material or as a digital simulation (whole brain emulation). There are, of course, doubts to be raised about such approaches, but I’m not ready to rule them out from the armchair.

Furthermore, Dave Chalmers gives a helpful strategy for reformulating the singularity hypothesis to avoid the assumption that intelligence is cardinalizable. Roughly (this is a bit too rough), the strategy is to find some other cardinal quantity, such as a standard psychological measure of reasoning ability, perhaps enriched a bit so that artificial agents don’t break the scale. We then reformulate the singularity hypothesis as the claim that this surrogate quantity will grow at a healthy clip. So long as increases in that surrogate quantity tend, not too slowly, to lead to increases in intelligence, a near-analog of the singularity hypothesis will follow.

All of this is to say that, with one exception, I won’t push on the notion of general intelligence. The only place where I will fuss about the notion of general intelligence is when I argue against equating rates of hardware growth with rates of general intelligence growth. For example, from the fact that transistor counts grew 33 millionfold from 1972 to 2022, I hold that we cannot immediately conclude that the intelligence of artificial agents grew 33 millionfold over the same period. A surprisingly large number of people have wanted to argue with me about this point. In such arguments, I will become confused about what general intelligence is meant to mean, and how it is meant to play the role that advocates of the singularity hypothesis want it to play.

4. Accelerating growth

The singularity hypothesis claims that the general intelligence of artificial agents will experience a sustained period of accelerating growth. Let’s think a bit about what this means.

The assumption of accelerating growth rules out familiar rates of moderate growth, such as logarithmic growth and linear growth.

What about exponential growth?

Typically, we speak about relative growth: the growth of intelligence as a percentage of its current value. In relative terms, exponential growth represents constant rather than accelerating growth: exponential functions are, after all, proportional to their own derivatives, so their relative growth rate never changes.
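In symbols (this is a standard bit of calculus, not anything special to the singularity literature): writing \(I(t)\) for intelligence at time \(t\), the relative growth rate is \(I'(t)/I(t)\), and under exponential growth it is constant:

\[
I(t) = I_0 e^{rt} \quad\Longrightarrow\quad \frac{I'(t)}{I(t)} = r.
\]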

More rarely, we speak about absolute growth: the unscaled growth of intelligence, in units of intelligence alone. In these terms, exponential growth does represent accelerating growth, though it needn’t represent particularly fast growth. For example, if intelligence grew exponentially at a rate of 2 percent per year, then it would take almost 350 years for intelligence to grow three orders of magnitude.
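To spell out the arithmetic behind that figure: three orders of magnitude is a thousandfold increase, so we solve

\[
(1.02)^t = 1000 \quad\Longrightarrow\quad t = \frac{\ln 1000}{\ln 1.02} \approx 349 \text{ years}.
\]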

For this reason, most authors have taken the singularity hypothesis to involve hyperbolic growth, in which the relative growth rate is itself accelerating.

It is hard to convey just how strong an assumption hyperbolic growth is. For example, in his book Superintelligence, Nick Bostrom considers a model in which intelligence doubles in 7.5 months, grows a thousandfold within 17.9 months, and approaches infinity in 18 months.
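To see where numbers like these come from, here is a minimal sketch of the simplest hyperbolic model (an illustration only: Bostrom’s own model is richer, and the constant below is chosen to match his 18-month horizon rather than taken from his book). If the growth rate of intelligence scales with the square of intelligence, then

\[
\frac{dI}{dt} = k I^{2} \quad\Longrightarrow\quad I(t) = \frac{I_0}{1 - k I_0 t},
\]

which diverges at the finite time \(t^{*} = 1/(k I_0)\). Setting \(t^{*} = 18\) months, intelligence passes a thousand times its starting value at \(t = t^{*}(1 - 10^{-3}) \approx 17.98\) months, in line with the thousandfold figure above.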

Of course, no authors posit indefinite hyperbolic growth, since intelligence is unlikely to literally diverge to infinity. However, many authors follow Bostrom in positing a sustained period of hyperbolic growth. For example, Ray Kurzweil, one of the most prominent advocates of the singularity hypothesis, writes in his book The singularity is near that:

The law of accelerating returns [hyperbolic growth] will continue until nonbiological intelligence comes close to “saturating” the matter and energy in our vicinity of the universe with our human-machine intelligence. By saturating, I mean utilizing the matter and energy patterns for computation to an optimal degree, based on our understanding of the physics of computation.

This passage reveals the importance of the assumption that accelerating growth will be sustained over many iterations of intelligence growth. This assumption is important, because it is not so rare to find quantities that grow quickly for a short time, such as the number of reactions in a nuclear chain reaction. What is rare is for accelerating growth to be sustained. Continuing our example, nuclear chain reactions quickly fizzle out after most of the available reactive material has been consumed.

To see the point in practice, consider population growth. The Industrial Revolution led to rapid growth in the size of human populations. Growth was so rapid that demographers debated, and still debate, whether it was hyperbolic or ‘merely’ exponential.

However, population growth soon slowed. As we saw in my paper and blog series “Mistakes in the moral mathematics of existential risk”, most demographers think that population growth will reach zero before the end of the century, and will likely turn negative.

What were the results of this period of explosive population growth? About an order of magnitude was added, from 600 million in 1700 to 8 billion in 2023. Nothing to sneeze at, but hardly a singularity.
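(For the record, the arithmetic behind ‘about an order of magnitude’: \(8 \times 10^{9} / 6 \times 10^{8} \approx 13 \approx 10^{1.1}\).)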

Defenders of the singularity hypothesis need to argue not only that the general intelligence of self-improving artificial agents will grow at an accelerating rate, but also that this period of accelerating growth will be sustained. Otherwise, intelligence growth will go the way of population growth or a nuclear chain reaction: impressive, but quickly over, and, in the end, not quite so radical as it may have seemed.

5. Discontinuity

How long must this period of accelerating growth in the general intelligence of artificial agents be sustained? At least until we have reached a fundamental discontinuity in human history.

Most authors seem to assume that, at a minimum, accelerating growth will continue until artificial agents have become orders of magnitude more intelligent than the average human. For example, in his seminal treatment of the singularity hypothesis, Ray Solomonoff writes:

At a continued expenditure of ten million dollars a year, it would take about 11 more years to get to the ‘infinity point’. Though infinity is a bit high, it seems very likely that we could achieve a growth factor of at least 100 in those 11 years.

Likewise, in one of the better-known scholarly defenses of the singularity hypothesis, Richard Loosemore and Ben Goertzel treat levels of intelligence 2-3 orders of magnitude beyond the human level as a conservative assumption:

We, like Good, are primarily interested in the explosion from human-level AGI to an AGI with, very loosely speaking, a level of general intelligence 2-3 orders of magnitude greater than the human level . . . That is not because we are necessarily skeptical of the explosion continuing beyond such a point, but rather because pursuing the notion beyond that seems a stretch of humanity’s current intellectual framework.

Some other authors defend even more dramatic discontinuity claims. We have already seen that Ray Kurzweil expects accelerating growth to continue until most of the universe has been tiled with Dyson spheres, or close cousins thereof. The other main popularizer of the singularity hypothesis, Vernor Vinge, envisions changes only somewhat less drastic than this:

From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In Bear (1983), Greg Bear paints a picture of the major changes happening in a matter of hours.) I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper).

Similarly dramatic claims are made by many effective altruists. For example, we saw in Part 9 of my blog series “Existential risk pessimism and the time of perils” that the most common objection made by effective altruists to my paper on the time of perils has it that AI will rapidly improve until it is so powerful that it can foresee and squash virtually all existential risks, despite relatively high levels of current risk and despite a view on which technological progress is the main driver of existential risk. The required developments in artificial general intelligence for this objection to succeed likely go far beyond an increase of 2-3 orders of magnitude beyond the average human.

6. Conclusion

This post introduced my paper “Against the singularity hypothesis”.

We saw that a singularity hypothesis has three key components: it specifies a quantity, claims that the quantity will experience a sustained period of accelerating growth, and claims that accelerating growth will lead to a fundamental discontinuity in human history.

The version of the singularity hypothesis that interests me takes the relevant quantity to be the general intelligence of artificial agents. Accelerating growth is most naturally understood as superexponential (typically, hyperbolic), although on an atypical reading it also includes exponential growth. Accelerating growth must be sustained for a substantial period of time, until a discontinuity in which artificial intelligence becomes orders of magnitude more intelligent than the typical human.

These are strong claims, and they should be supported by strong evidence. In this series, we will see that existing evidence is rather more limited. For that reason, we’ll begin next time by considering some positive reasons for skepticism of the singularity hypothesis, in order to have more to talk about. Then we’ll see that some leading arguments for the singularity hypothesis fail to overcome the case for skepticism.

Comments

3 responses to “Against the singularity hypothesis (Part 1: Introduction)”

  1. Amadeo

    I haven’t engaged with the arguments in detail, but I have never understood why designing a more intelligent entity might not become progressively more difficult with increasing intelligence, such that the growth rate may very well slow down and overall intelligence may asymptotically approach a certain level, even granting all the other assumptions.

    1. David Thorstad

      Thanks Amadeo!

      This is one of the five arguments I’ll make in objection to the singularity hypothesis, and in my view it is one of the strongest. Economists think that most research fields eventually hit a point of diminishing returns, which in this case means that intelligence gains become progressively more difficult to produce. Applying that view to the case of artificial intelligence would yield the view that you suggest. There is very good evidence for diminishing returns in hardware R&D, so we should not be surprised to see the same in software.

  2. titotal23

    Very much looking forward to this series! It’s shocking how often the singularity hypothesis is simply taken as a given, despite very flimsy evidence.
