Machine computation speeds have increased by a factor of about 10¹¹ since World War II. By contrast, power plants have seen modest efficiency gains and face limited prospects given constraints like Carnot’s theorem. This distinction is important, because … output and growth end up being determined not by what we are good at, but by what is essential but hard to improve.
Aghion et al., “Artificial intelligence and economic growth”
1. Recap
This is Part 3 of a series based on my paper “Against the singularity hypothesis”.
The singularity hypothesis begins with the assumption that artificial agents will gain the ability to improve their own intelligence. From there, the singularity hypothesis holds that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion: an event in which artificial agents rapidly become orders of magnitude more intelligent than their human creators.
Part 1 introduced and clarified the singularity hypothesis. Part 2 gave two preliminary reasons to doubt the singularity hypothesis. Today’s post gives three further reasons to doubt the singularity hypothesis. Together, these five reasons for doubt will place a strong burden on defenders of the singularity hypothesis to provide significant evidence in favor of their view. The remainder of this series will argue that this burden has not been met.
2. Bottlenecks
One reason why growth processes slow down is that they hit bottlenecks: single obstacles to improvement. Often a single bottleneck is enough to bring growth to a standstill until the bottleneck can be resolved.
A recent paper by Philippe Aghion and colleagues (2017) uses economic modeling to illustrate how bottlenecks can arise in the growth of computing capacities, and what their effects might be. By way of illustration, Aghion and colleagues offer the following example:
Machine computation speeds have increased by a factor of about 10¹¹ since World War II. By contrast, power plants have seen modest efficiency gains and face limited prospects given constraints like Carnot’s theorem. This distinction is important, because with [technical condition omitted], output and growth end up being determined not by what we are good at, but by what is essential but hard to improve.
In this example, the potential inability of power generation to keep pace with growth in computing capacities forces growth in computing outputs to crawl along until energy needs can be met.
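Their point can be made concrete with a toy calculation. The sketch below is my own illustration rather than code from their paper, and it assumes a CES production function with an elasticity of substitution below one (roughly the kind of technical condition the bracketed omission in the quotation gestures at). When inputs are complements in this way, output tracks the slowest-growing input, however fast the others improve.

```python
# Toy illustration (my own, not from the paper): a CES aggregate with
# complementary inputs is bottlenecked by its slow-growing input.
# Parameter values are made up; sigma < 1 makes the inputs complements.

def ces_output(computation, energy, share=0.5, sigma=0.5):
    """CES aggregate Y = (share * C^rho + (1 - share) * E^rho)^(1/rho),
    with rho = (sigma - 1) / sigma."""
    rho = (sigma - 1) / sigma
    return (share * computation ** rho + (1 - share) * energy ** rho) ** (1 / rho)

# Computation improves by eleven orders of magnitude; energy merely doubles.
baseline = ces_output(computation=1.0, energy=1.0)
frontier = ces_output(computation=1e11, energy=2.0)

print(f"output gain: {frontier / baseline:.1f}x")
# Prints roughly 4.0x, not 10^11x: the slow-growing input sets the pace.
```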
Aghion and colleagues’ point is quite general. Even if computing technology were to advance to the point where rapid increases in the intelligence of artificial agents were technologically feasible, those increases might not be realized until bottlenecks were resolved.
One class of bottleneck involves the physical resource constraints discussed in the next section. To rapidly grow the intelligence of artificial agents, we might well have to do all of the following (and plausibly a good bit more):
- Substantially expand our capacity to generate energy.
- Substantially expand our capacity to distribute energy through the electrical grid.
- Procure key materials, including rare metals which must be mined and computer components which must be manufactured.
- Build manufacturing capacity in capital-intensive industries such as semiconductor manufacturing, by building new plants, hiring and training new laborers, and securing relevant machinery.
In the next section, I will ask whether this could reasonably be done on the scale required to ground the singularity hypothesis. But here, I want to ask a more modest question: how confident are we that all of this could be done quickly enough to ground an intelligence explosion? Are we meant, for example, to see an increase of five orders of magnitude in our manufacturing, mining, and energy generation and distribution capabilities over a space of months or years? How might this increase come about?
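To get a feel for the arithmetic, here is a purely illustrative back-of-the-envelope calculation (not a forecast) of the monthly growth rates a five-order-of-magnitude expansion would demand over different horizons:

```python
import math

# Illustrative arithmetic only: how fast would capacity have to grow each
# month to expand by five orders of magnitude (a factor of 100,000)?
target_factor = 1e5

for years in (1, 2, 5, 10):
    months = 12 * years
    monthly_growth = target_factor ** (1 / months) - 1
    doubling_months = math.log(2) / math.log(1 + monthly_growth)
    print(f"{years:>2} years: {monthly_growth:6.1%} per month "
          f"(capacity doubling every {doubling_months:.1f} months)")
```

Even on the most generous of these horizons, capacity would have to double every several months for a decade. For comparison, Moore’s law, discussed in Part 2, delivered a doubling of transistor counts roughly every two years.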
Other classes of bottlenecks may also begin to pinch. For example, if software improvements are meant to play a large role in driving the intelligence explosion, then we need to be in a position to speed up all major components of our best algorithms. It’s not so clear we can do that. For example, many computer scientists think that a good number of search algorithms are within sight of optimal performance. There is certainly room to speed them up, but how confident are we that we can quickly find a way to make all relevant search processes orders of magnitude faster than they are now?
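One way to see the difficulty, using my own example rather than one drawn from the paper: for searching sorted data, ordinary binary search already sits at the information-theoretic floor of roughly log2(n) comparisons, so on that dimension there is simply no orders-of-magnitude reduction left to find.

```python
import math

def comparisons_used(sorted_items, target):
    """Count the element comparisons a plain binary search makes."""
    count, lo, hi = 0, 0, len(sorted_items)
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return count

n = 1_000_000
data = list(range(n))
used = comparisons_used(data, target=987_654)
lower_bound = math.ceil(math.log2(n))  # worst case no comparison-based search can beat

print(f"comparisons used: {used}, information-theoretic floor: about {lower_bound}")
# Binary search already sits essentially at the floor, so there is no
# orders-of-magnitude reduction in comparisons waiting to be discovered.
```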
The trouble with bottlenecks is that it only takes one bottleneck to slow growth. Placing high confidence in the singularity hypothesis requires placing high confidence in our ability to simultaneously avoid or quickly overcome all possible bottlenecks to growth. That is a tall order.
3. Physical constraints
As the capabilities of artificial agents grow skywards, we confront not merely temporary bottlenecks, but also more permanent physical constraints, including resource constraints and fundamental physical laws.
Begin with resource constraints. The problem is not merely that it could take time to mine more metals, hire more laborers, and find more fossil fuels to burn. The problem is that in trying to produce artificial agents many orders of magnitude more intelligent than current systems, we might actually run out of metals to mine, laborers to hire, and fossil fuels to burn. Of course, that is not to say that there might not be enough materials somewhere in the known universe to meet resource demands. But if, for example, deep-space mining were required to build enough chips to produce radical superintelligence, then it would be relatively less plausible to expect radical superintelligence to emerge on a fast timeline.
Here is a partial list of some of the most troubling resource constraints:
- Energy generation: Computation requires enormous amounts of energy. These demands are great enough that Sam Altman has said that even AGI, let alone superintelligence, is likely to require a fundamental breakthrough in energy generation, because existing energy resources may be greatly insufficient.
- Computer chips: Computation requires hardware, and in particular, it requires chips. As the recent chip shortage has shown, manufacturing enough chips to meet global demand is far from trivial. Sam Altman has recently sought as much as five to seven trillion dollars to manufacture enough chips to bring about AGI. That is not a cheap price tag, and it raises the question of how much might be needed on more pessimistic estimates of the remaining gap to AGI, or even on optimistic estimates of the gap to superintelligence.
- Minerals: Existing chips use minerals such as silicon (for transistors) and gold (as a conductor), as well as a variety of rare earth metals. Both the supply of these minerals and our capacity to extract and refine them are limited, and it is not clear how we might scale up production by many orders of magnitude on a short timescale.
Many more resource constraints could be listed, but perhaps this is enough to make the point.
Turn next to physical laws. As we push our oldest and most successful tricks to their limits, fundamental physical phenomena that were previously irrelevant emerge as sizable obstacles to progress. Two examples deserve particular mention.
First, our smallest transistors are now about ten times as wide as a typical atom. We don’t currently have the capacity to build transistors much smaller than this. Even if we learn to overcome the fiendish difficulty of building ever-smaller transistors, these shrinking transistors will eventually be affected by microphysical phenomena such as quantum uncertainty that previously washed out for all relevant purposes, and which we would really rather not deal with.
Second, feeding more and more energy into computer chips generates increasing need for effective heat dissipation. For many years, heat dissipation advanced in lockstep with changes to the density of chips, but increasingly this is no longer the case, and many worry that the difficulty of dissipating large amounts of concentrated heat will pose a fundamental obstacle to current paradigms for increasing computing capabilities.
It is certainly possible that all of these physical constraints will turn out to be superable in principle, and that they can be overcome quickly enough that they never emerge as bottlenecks to growth. But that view needs an argument, because there are identifiable reasons to suspect that physical constraints are emerging, and will continue to emerge, as barriers to growth.
4. Sublinearity of intelligence growth in accessible improvements
Improvements in machine intelligence are driven by hardware and software improvements. I focus on hardware improvements in this section, since they are more easily measured, though I suspect similar conclusions could be drawn for software improvements.
What is the relationship between hardware improvements and improvements in machine intelligence? Suppose that the relationship were roughly linear. Then exponential growth in machine intelligence would require exponential growth in hardware capacities, and hyperbolic growth in machine intelligence would require hyperbolic hardware growth. This still would not be good for the singularity hypothesis. On this model, even an indefinite continuation of Moore’s law (which we saw in Part 2 to be implausible) would yield only exponential intelligence growth, and hyperbolic intelligence growth would require previously unheard-of hyperbolic hardware growth.
However, I think that matters are a bit worse than that. In this section, I argue that machine intelligence plausibly grows sublinearly in hardware capacities. If that is true, then even exponential growth in machine intelligence may require superexponential growth in hardware capacities. And hyperbolic growth? Not easy!
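Here is a minimal sketch of the arithmetic, under the illustrative assumption that machine intelligence is logarithmic in hardware capacity (roughly the relationship suggested by the Thompson et al. data discussed below). On that assumption, merely exponential intelligence growth already demands doubly exponential hardware growth.

```python
# Illustrative assumption: intelligence is logarithmic in hardware,
# I(H) = log2(H), so the hardware needed for intelligence level I is H = 2**I.

def hardware_needed(intelligence):
    return 2.0 ** intelligence

# Ask intelligence to double each period (merely exponential growth in I)...
intelligence = 10.0
for period in range(5):
    print(f"period {period}: intelligence {intelligence:7.1f}  "
          f"hardware needed {hardware_needed(intelligence):.2e}")
    intelligence *= 2

# ...and the hardware requirement explodes doubly exponentially:
# 2**10, 2**20, 2**40, 2**80, 2**160.
```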
It is relatively unproblematic to measure hardware growth: most traditional measures such as transistor count and FLOPS show consistent exponential hardware growth over the past half-century or so. Has this translated into exponential intelligence growth? That is harder to say, since we would need a measure of machine intelligence, and there is no agreed-upon measure of machine intelligence.
I don’t think we need to choose a measure. I think that all, or nearly all, measures of machine intelligence have grown subexponentially during a period of exponential hardware growth. If that is right, then machine intelligence has for some time grown sublinearly in hardware capacities, which gives us some reason to suspect future growth to be sublinear as well.
By way of illustration, Neil Thompson and colleagues look at performance gains in two areas of computer game-play (Chess and Go) as well as three areas that might be thought to depend a great deal on intelligence (protein folding, weather prediction, modeling underground oil reservoirs). I think these may be good illustrative measures of intelligence, but readers are welcome to replace them with their preferred alternative measure and repeat the analysis.
Thompson and colleagues find that recent exponential growth in hardware capacities has led only to linear growth in all five intelligence proxies. Here, for example, are their data for weather prediction: note that predictive improvement (reduced loss) is log-linear in compute growth, i.e. predictive improvement proceeds linearly against a background of exponential compute growth.

Similarly, Thompson and colleagues find a clear log-linear relationship between the Elo ratings of chess engines and their computing power:

If these and other similar measures are good proxies for intelligence, then intelligence looks to have grown only logarithmically, rather than linearly, in hardware capacities for some time. That would not be good news for the singularity hypothesis.
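For readers who want to run the exercise with their own preferred proxy, the test is straightforward: regress the proxy against the logarithm of compute and check whether the relationship is roughly linear. The sketch below uses made-up numbers purely to show the shape of the test; it does not reproduce Thompson and colleagues’ data.

```python
import math

# Hypothetical data for illustration only (NOT Thompson et al.'s numbers):
# a proxy score that is roughly linear in log10(compute).
compute = [1e3, 1e4, 1e5, 1e6, 1e7]           # e.g. compute budgets, made up
proxy_score = [1210, 1590, 2010, 2390, 2805]  # e.g. an Elo-like rating, made up

xs = [math.log10(c) for c in compute]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(proxy_score) / n

# Ordinary least squares for: score ~ slope * log10(compute) + intercept
sxx = sum((x - mean_x) ** 2 for x in xs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, proxy_score))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"fitted: score ≈ {slope:.0f} * log10(compute) + {intercept:.0f}")
# A tight linear fit means each tenfold increase in compute buys only a fixed
# additive gain in the proxy: logarithmic, not linear, returns to hardware.
```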
5. Taking stock
Part 2 of this series gave two reasons for skepticism about the singularity hypothesis: (1) extraordinary claims require extraordinary evidence, and (2) diminishing research productivity threatens gains from self-improvement.
Today’s post gave three further reasons for skepticism: (3) any of a number of bottlenecks may halt growth, (4) resource constraints and fundamental physical laws emerge as obstacles to growth, and (5) intelligence may grow sublinearly in accessible hardware or software improvements, so that rapid growth in intelligence may require extremely rapid growth in these underlying quantities.
Together, these five reasons for skepticism place a strong burden on defenders of the singularity hypothesis to provide significant evidence in favor of their view. If that evidence cannot be produced, then we should not assign significant credence to the singularity hypothesis.
The remainder of this series will argue that this burden has not been met. As a result, I will conclude that we should not assign significant credence to the singularity hypothesis.
