We present evidence from various industries, products and firms showing that research effort is rising substantially while research productivity is declining sharply. A good example is Moore’s Law. The number of researchers required today to achieve the famous doubling of computer chip density is more than 18 times larger than the number required in the early 1970s. More generally, everywhere we look we find that ideas, and the exponential growth they imply, are getting harder to find.
Bloom et al., “Are Ideas Getting Harder to Find?”
1. Recap
This is Part 2 of a series based on my paper “Against the singularity hypothesis”.
The singularity hypothesis begins with the assumption that artificial agents will gain the ability to improve their own intelligence. From there, the singularity hypothesis holds that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion: an event in which artificial agents rapidly become orders of magnitude more intelligent than their human creators.
Part 1 introduced and clarified the singularity hypothesis. Part 2 (today) and Part 3 (next time) give preliminary reasons to doubt it. I’ll then examine recent defenses of the singularity hypothesis and argue that they are insufficient to overcome the case against it. Finally, I will draw out the implications of this discussion.
2. Five reasons for skepticism
Many scholars are quite skeptical of the singularity hypothesis. Why? In this post and the next, I outline five reasons for skepticism, focusing on the hypothesis’ ambitious growth claims.
None of these reasons implies that the singularity hypothesis is physically impossible, or that it could not be vindicated by a series of knockdown arguments. Taken together, however, they place a burden on defenders of the singularity hypothesis to produce strong evidence for their claims. Later in this series, I will argue that this evidential burden has yet to be met.
Today’s post develops the first two of five reasons to be skeptical of the singularity hypothesis.
3. Extraordinary claims require extraordinary evidence
Part 6 of my series on epistemics discussed the need for extraordinary claims to be supported by extraordinary evidence. Because extraordinary claims are not, on their face, especially plausible, it takes a good deal of evidence to raise them to the level of plausibility, and still more to show that they are likely to be true.
The same discussion also showed that effective altruists have not always held fast to the requirement of providing extraordinary evidence to support extraordinary claims. For example, we saw that Robin Hanson defends his claim that some reported UFO sightings are genuine primarily by telling his readers that if they don’t know why standard explanations for UFO sightings fail, they haven’t been paying attention. Providing scant support for extraordinary claims is not good epistemic practice, and should not be repeated.
The singularity hypothesis makes at least two extraordinary claims. First, it claims that the general intelligence of artificial agents will experience a sustained period of accelerating growth. Second, it claims that this period of accelerating growth will terminate in a fundamental discontinuity in human history, after which, on leading views, artificial agents will have become at least two to three orders of magnitude more intelligent than the average human.
To support these extraordinary claims, it is not enough to show that they are physically possible, to challenge opponents to disprove them, or to provide a few suggestive pieces of evidence in their support. Advocates of the singularity hypothesis owe us many excellent reasons to treat the hypothesis as a live hypothesis deserving of serious consideration. Later in this series, I will show that many leading treatments of the singularity hypothesis fall considerably short of meeting this argumentative burden.
4. Diminishing research productivity
In the 1950s, a billion (inflation-adjusted) dollars invested in research produced over forty FDA-approved drugs. In the 2000s, an inflation-adjusted billion dollars produced fewer than one FDA-approved drug (Scannell et al. 2012).
From 1971 to 1991, developed nations increased their inflation-adjusted expenditures on agricultural research by over sixty percent. Despite this rising expenditure, growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000).
The same story repeats across many sectors of society today (Gordon 2016). The root cause is not that researchers are becoming lazy or incompetent. Pharmaceutical researchers in the 1950s and agricultural researchers in the 1970s plucked much of the low-hanging fruit, leaving their successors to chase more difficult game. In a phrase, good ideas became harder to find.
The underlying dynamic is usually illustrated by comparing the research process to fishing without replacement from a large pond. At the outset, it is easy to catch fish: the easiest fish may practically leap onto your hook as soon as it is cast. But as time goes on, catching fish becomes harder. That’s not because you’ve gotten worse at fishing; it’s simply that the easiest fish have already been caught, and the fish that remain are much harder to catch.
Social scientists think that beyond a point, almost all research processes behave like fishing: as the easy fish are caught and the low-hanging fruit plucked, good ideas become increasingly hard to find (Bloom et al. 2020). A bit more formally, define research productivity as the amount of research output produced per unit of research input. The phenomenon at issue is that as fields mature, research productivity tends to decline: a fixed research output requires ever more research input.
For example, a recent analysis in American Economic Review estimates a 41-fold drop in research productivity across the US economy since the 1930s.
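To get a feel for the size of that drop, here is a back-of-the-envelope calculation (my own illustration, not a computation from the paper), assuming the 41-fold decline is spread evenly over roughly eighty years:

```python
# Illustrative assumptions (mine, not Bloom et al.'s): a 41-fold cumulative
# drop in aggregate US research productivity, spread evenly over ~80 years.
cumulative_drop = 41.0
years = 80

# If productivity falls by a constant fraction r each year, then
# (1 - r) ** years == 1 / cumulative_drop.
annual_decline = 1 - (1 / cumulative_drop) ** (1 / years)
print(f"Implied decline: about {annual_decline:.1%} per year")  # roughly 4.5%
```

On this rough reckoning, a fixed research input buys a few percent less new output every year, which compounds into a 41-fold gap over the better part of a century.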

Similar results are found by other studies.
Declining research productivity is bad news for the singularity hypothesis. If each successive doubling of intelligence is harder to bring about than the last, then even if all AI research is eventually done by recursively self-improving AI systems, the pace of doubling will steadily slow. The singularity hypothesis requires the opposite: that the pace of doubling accelerates.
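To see why this pushes toward deceleration, here is a deliberately crude toy model (my own sketch, not an argument from the paper). It assumes that research capacity scales in proportion to the current level of intelligence, but that the effort required for each successive doubling grows by an even larger factor; under that assumption, doubling times stretch out rather than shrink.

```python
# Toy model (illustrative assumptions, not a model from the paper):
# - research capacity per unit time is proportional to current intelligence;
# - the effort needed for the next doubling grows by `difficulty_growth`
#   with every doubling already achieved.
# When difficulty_growth exceeds 2, capacity gains lose the race and
# each doubling takes longer than the one before.

def doubling_times(n_doublings, difficulty_growth=3.0):
    times = []
    intelligence = 1.0
    for k in range(n_doublings):
        effort_needed = difficulty_growth ** k   # harder every time
        capacity = intelligence                  # smarter researchers work faster
        times.append(effort_needed / capacity)   # time to finish this doubling
        intelligence *= 2
    return times

print([round(t, 2) for t in doubling_times(6)])
# [1.0, 1.5, 2.25, 3.38, 5.06, 7.59] -- doubling times lengthen, not shorten.
```

The numbers are arbitrary; the point is only that the direction of the effect depends on whether difficulty or capacity grows faster. If the required effort grew more slowly than capacity, doublings would accelerate, and that is precisely the assumption which diminishing research productivity calls into question.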
Some readers might object that diminishing research productivity applies only to human researchers. When artificial agents are doing the research, research productivity will no longer diminish. There is, perhaps, a sliver of truth to this objection. One cause of diminishing research productivity is the cost of maintaining knowledge stocks. As humanity’s knowledge stock grows, we humans must invest ever-increasing fractions of resources in storing accumulated knowledge and transmitting that knowledge to subsequent generations. However, artificial agents find it much easier to store large knowledge stocks, and transferring knowledge to successive artificial agents may be a simple matter of copying and pasting.
However, the main factor driving diminishing research productivity is not the difficulty of maintaining large knowledge stocks. It is instead the fact that as time goes on, low-hanging fruit begins to be used up and good ideas become harder to find. This is a feature of research problems, not a feature of the agents doing the research. As a result, this underlying dynamic should be largely unchanged by letting artificial agents replace humans as researchers. Artificial agents use up low-hanging fruit just like humans do.
It might be objected that the problem of improving artificial agents is an exception to the general phenomenon of diminishing research productivity. After all, Moore’s law held for many years, producing a doubling roughly every two years of hardware capacity as measured by the number of transistors on a dense integrated circuit. We could hardly have kept up such a consistent and breakneck pace of hardware improvement in the face of diminishing research productivity, right?
Actually, that is exactly the opposite of what economists think. In what is perhaps the best-known recent study of diminishing research productivity, Nicholas Bloom and colleagues (2020) single out Moore’s law as “perhaps the best example” of the phenomenon, devoting a section-length case study to it. In their preferred model, they find an 18-fold decrease in hardware research productivity from 1971 to 2014, and four auxiliary models yield estimates ranging from an 8-fold to a 352-fold drop over the same period.

How, then, has Moore’s law been sustained? By pumping ever-increasing amounts of money into research. The problem is that we can’t keep Moore’s law alive indefinitely by throwing money at it: eventually, the money runs out. The average semiconductor plant already costs over ten billion dollars to build. There isn’t that much more money left to spend.
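As a rough illustration of why this strategy can’t run forever (my own arithmetic, not a calculation from Bloom et al., and assuming the decline was spread evenly over 1971 to 2014): holding the rate of hardware improvement fixed while productivity falls by a constant fraction each year requires research inputs to grow by that same fraction each year, compounding into roughly the 18-fold rise in research effort noted in the epigraph.

```python
# Rough arithmetic (mine, not Bloom et al.'s), assuming a constant annual
# decline in hardware research productivity over 1971-2014.
cumulative_drop = 18.0   # productivity drop in the preferred model
years = 2014 - 1971      # 43 years

survival = (1 / cumulative_drop) ** (1 / years)   # productivity retained each year (~0.935)
required_input_growth = 1 / survival - 1          # input growth needed to offset it (~7%)

print(f"Productivity falls about {1 - survival:.1%} per year;")
print(f"holding output growth fixed then requires about {required_input_growth:.1%} "
      f"more research input every year -- an ~18-fold rise over the period.")
```

Exponentially growing inputs can mask exponentially declining productivity for a while, but not indefinitely.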
Indeed, experts are mostly divided between two views: that Moore’s law will end this decade, and that it has already ended (Mack 2011, Shalf 2020, Theis and Wong 2017, Waldrop 2016). At least when we restrict attention to classic forms of hardware progress, experts do not regard Moore’s law as an exception to the problem of diminishing research productivity. They regard it as a prime example of the problem.
The upshot is that we have good theoretical reason to expect diminishing research productivity to take a bite out of progress on many research problems. At best, we have no special reason to think that the problem of improving artificial agents is an exception, and at worst, the recent history of Moore’s law suggests that diminishing research productivity may have set in long ago.
5. Conclusion
This post reviewed two of five preliminary reasons for skepticism about the singularity hypothesis. The first reason for skepticism is that the singularity hypothesis is an extraordinary claim, so it is appropriate to be skeptical unless we are given extraordinary evidence.
The second reason for skepticism is that many social scientists expect declining research productivity across most domains, including artificial intelligence. Unless more is said, this predicts lengthening rather than shortening doubling times for the intelligence of artificial agents, and hence decelerating rather than accelerating growth.
Part 3 of this series will round out this discussion with three more preliminary reasons for skepticism about the singularity hypothesis.
