This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save – call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept. I also defend Robust Temporalism against several common objections, and I highlight its relevance to a number of practical policy debates, including longtermism. My conclusion is that Robust Temporalism is a moral live option, that deserves to be taken much more seriously in the future.
Harry Lloyd, “Time discounting, consistency and special obligations: a defence of Robust Temporalism”
1. Series introduction
The purpose of this blog is to use academic research to drive positive change within and around the effective altruism movement. One way to do that is to construct arguments on the basis of academic research. But another way to drive positive change is to let the research speak for itself.
This series, Papers I learned from, highlights papers that have informed my own thinking and draws attention to what might follow from them. I will do my best to present papers in a rigorous but accessible way, opening up the contents of each paper to a wider audience.
For the most part, I want to take my own voice out of this series. This isn’t a series about papers I agree or disagree with. It’s a series about papers that I learned from. I want to highlight what the papers say and what might be learned from them.
2. A defence of robust temporalism
Harry Lloyd is a PhD student in the philosophy department at Yale University, a Research Affiliate at the Center for AI Safety, and a former Global Priorities Fellow at the Global Priorities Institute, Oxford.
Lloyd’s paper, “Time discounting, consistency and special obligations: a defence of Robust Temporalism,” won the Essay Prize for Global Priorities Research in 2021. The paper helped many in the field to take seriously a position that had previously received little serious attention: that we might have stronger moral duties towards agents who are nearer in time to us than we do towards agents who are further away.
Philosophers have, as a rule, been quite hostile to this thought. For example, Simon Caney writes:
A person’s place in time is not, in itself, the right kind of feature of a person to affect his/her entitlements. For example, it does not make someone more or less deserving or meritorious. Similarly, it does not, in itself, make anyone’s needs more or less pressing … It is not the right kind of property to confer on people extra or reduced status.
And Tyler Cowen and Derek Parfit wonder:
Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?
By contrast, many other fields are friendlier to the idea that moral duties decline with temporal distance. Economists standardly discount the value of future welfare, not merely due to uncertainty about whether that welfare will be realized, but also simply because it lies in the future. Indeed, the rate at which future welfare is discounted arguably underlies one of the most famous disagreements in the economics of climate change. Similarly, almost all cost-effectiveness models used by policymakers today apply a ‘pure’ rate of temporal discounting, assigning less value to benefits reaped in the future merely because they lie in the future.
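To make ‘pure’ discounting concrete (this gloss is mine, not Lloyd’s): in the standard Ramsey framework that climate economists work with, the social discount rate applied to future benefits is

$r = \delta + \eta g,$

where $g$ is the growth rate of consumption, $\eta$ captures how quickly the marginal value of consumption falls as people get richer, and $\delta$ is the pure rate of time preference: discounting applied to future welfare simply because it is future. The famous disagreement in climate economics alluded to above is, in large part, a disagreement about whether $\delta$ should be set near zero or appreciably above it.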
Lloyd asks us to consider whether the philosophers may have been too hasty in their opposition to some types of pure temporal discounting. Let’s start by formulating the view that Lloyd defends.
3. Robust temporalism
Lloyd uses “robust temporalism” to denote the view that “mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save.” For example, robust temporalists think that other things equal, I (now) have a stronger moral duty to save a child from drowning today than to save a child from drowning next year.
This view has a few features that are worth noting. First, this is a deontic theory – a theory about the strength of moral duties. It isn’t an axiological theory, about the values of outcomes. Robust temporalism does not say that future welfare is less valuable than present welfare, but only that we have stronger duties to save present people than we have to save future people. In this sense, robust temporalism may diverge from the approach taken by economists and policymakers who apply a discount rate directly to the value of future welfare.
Second, this is an exceptionless theory. Robust temporalists think that temporal proximity “always and without exception” strengthens duties to save. This contrasts with accounts such as Andreas Mogensen’s discussion of kinship-based discounting, on which relationships like kinship may ground temporal discounting, but only in cases where those relationships are present.
With these clarifications in mind, let’s look at Lloyd’s presentation of the robust temporalist view.
4. Regions of moral concern
On most nonconsequentialist moral theories, agents have special obligations to the people they stand in morally significant relationships with. For example, we may have strengthened moral duties towards family, friends, teachers, compatriots, or those we have wronged in the past. Lloyd suggests that temporal proximity belongs on the list of morally significant relationships that modulate the strength of moral duties.
Lloyd begins with the notion of a region of moral concern. For Lloyd, an agent’s region of moral concern is the region of spacetime that it is nomologically possible for them to influence (that is, possible given the laws of nature).
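As a rough formalization (mine, not Lloyd’s): if we assume that causal influence can propagate no faster than light, then the region of moral concern of an agent located at spatial position $x_0$ at time $t_0$ is just the future light cone of that point,

$\mathcal{R}(x_0, t_0) = \{\, (x, t) : t \geq t_0 \ \text{and} \ \lVert x - x_0 \rVert \leq c\,(t - t_0) \,\},$

where $c$ is the speed of light. Nothing in what follows depends on this particular piece of physics; any nomological bound on what an agent can influence would do.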

Regions of moral concern are so-called because on most views, agents could not have duties to affect anything outside of their region of moral concern. After all, agents cannot affect anything outside of their region of moral concern, and it is strange for agents to have duties to do what they cannot do.
Agents in different places and times often have a great deal of overlap in their regions of moral concern. This, Lloyd suggests, means that those agents share a moral burden for their regions of shared moral concern. Lloyd argues that this shared moral burden is a morally significant relationship, grounding special obligations in just the same way as other morally significant relationships such as kinship or friendship. That is, other things equal, the strength of some moral duties, including the duty to save, increases with the overlap between agents’ regions of moral concern.
We have strong duties towards those near to us in time and space, because our regions of moral concern mostly overlap theirs. By contrast, on Lloyd’s view we have weaker duties towards those further from us in time and space, because our regions of moral concern have less overlap with theirs.
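Schematically (again my gloss, not Lloyd’s notation), the proposal is that, other things equal, the strength of an agent $a$’s duty to save a person $b$ is increasing in the measure of overlap between their regions of moral concern:

$\mathrm{Strength}(a, b) = f\big(\mu(\mathcal{R}(a) \cap \mathcal{R}(b)),\, \ldots\big), \qquad f \ \text{increasing in its first argument},$

where $\mu$ is some suitable measure on spacetime and the ellipsis stands in for whatever other morally relevant factors there may be.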
One nice feature of this approach is that it seeks to normalize the moral importance of temporal distance by assimilating it to a familiar model of morally significant relationships grounding special obligations. Many philosophers have thought that there is something especially arbitrary about robust temporalism. If Lloyd is right, then there may be no more arbitrariness in favoring those near to us in time and space than there is in favoring those who happen to be our friends or kin.
5. Spatial discounting
Many critics of robust temporalism have thought that any defense of the idea that moral duties diminish with temporal distance would also imply that moral duties diminish with spatial distance.
That might not be ideal. In particular, it is not a moral thought that I would like to inject into discussions of effective altruism. In a context where wealthy audiences located primarily in the global north are tasked with the disbursement of resources for the benefit of others, the very last thing I would like to say is that ceteris paribus, those audiences have a stronger obligation to benefit those suffering in the global north than to benefit those in the global south. One of the most attractive features of Lloyd’s theory, to my mind, is that it shows how it is possible to let moral duties diminish with temporal distance but not with spatial distance.
To see the point, suppose that an agent J is now considering her obligations towards two agents located at the same time t, possibly later than now. Suppose that those agents are located at different spatial coordinates v and w, with one further from J’s position than the other.

J’s region of moral concern overlaps each agent’s region of moral concern. Let B be the part of spacetime where it overlaps both agents’ regions, A the part where it overlaps only the first agent’s region, and C the part where it overlaps only the second’s. Then J’s region of moral concern overlaps the first agent’s region in A and B, and overlaps the second agent’s region in B and C. Hence J’s region of moral concern cannot overlap one agent’s more than the other’s unless A and C differ in size. But almost any plausible theory and measure will treat A and C as the same size, meaning that J’s region of moral concern overlaps both agents’ regions of moral concern equally, and hence that J’s moral duties towards one agent are no more diminished than her duties towards the other.
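To make the symmetry vivid, here is a minimal numerical sketch in Python (my own construction, not from Lloyd’s paper). It models regions of moral concern as future light cones in one spatial dimension with $c = 1$, truncates them at a common time horizon, and estimates the overlap between J’s region and each agent’s region by Monte Carlo. Both agents sit at the same time $t = 10$ inside J’s region of moral concern, one nearer to J and one further away, and the two estimated overlaps come out equal:

import numpy as np

def overlap_area(apex_x, apex_t, horizon, n_samples=200_000, seed=0):
    # Monte Carlo estimate of the spacetime area (one space dimension, c = 1)
    # shared by J's future light cone (apex at the origin) and another agent's
    # future light cone (apex at (apex_x, apex_t)), truncated at a common horizon.
    rng = np.random.default_rng(seed)
    ts = rng.uniform(0.0, horizon, n_samples)
    xs = rng.uniform(-horizon, horizon, n_samples)
    in_J = np.abs(xs) <= ts
    in_agent = (ts >= apex_t) & (np.abs(xs - apex_x) <= ts - apex_t)
    box_area = 2.0 * horizon * horizon
    return np.mean(in_J & in_agent) * box_area

# Two agents at the same time t = 10: one near J (x = 1), one much further away (x = 8).
print(overlap_area(1.0, 10.0, horizon=100.0))  # ~ 8100
print(overlap_area(8.0, 10.0, horizon=100.0))  # ~ 8100: the same, despite the distance

Because each agent lies inside J’s region of moral concern, each agent’s truncated light cone sits entirely inside J’s, so each overlap is just the area of that agent’s own cone, $(100 - 10)^2 = 8100$, and spatial position drops out entirely.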
This means that nothing in robust temporalism forces us to discount merely for spatial location. That is a very good thing.
6. Temporal consistency
Theories of temporal discounting face a bit of a nasty pickle. On the one hand, we’d like them to be time-consistent: we don’t want the comparative strengths of duties to flip-flop as we move forwards and backwards throughout our lives. On the other hand, we’d like a non-zero asymptotic floor on how far the strengths of our moral duties can decay. It would be bad if looking arbitrarily far into the future could reveal people to whom we had arbitrarily weak moral duties. Surely temporal location cannot matter that much.
The problem is that we cannot have both. I’ll present the problem using Lloyd’s notation, which is the best notation I know of for making the point in deontic rather than axiological terms.
Let $D(t)$ denote the discount factor on duties to save agents, decreasing in the temporal distance $t$ between ourselves and those agents. Let $S_N$ represent the undiscounted strength of our duty to save $N$ people. Bracketing some technicalities, duties are time-consistent if they aren’t changed by adding some constant amount of time $i$ into the mix. That is, if $D(t)\,S_N \geq D(t')\,S_M$, then also for any $i$ we should have $D(t+i)\,S_N \geq D(t'+i)\,S_M$.
Here’s a sad theorem. Time consistency is satisfied only by exponential discount rates, $D(t) = \beta^{t}$ for some constant $\beta \in (0,1)$. That’s sad, because exponential discount factors decay relentlessly towards zero.

This puts no asymptotic floor on the amount of discounting that accrues as we move sufficiently far out in time: given enough temporal distance, duties to save can become arbitrarily weak. What we’d like instead is a discount factor that decays towards some non-zero asymptote, or perhaps a fancier version of that idea.
To get this, we need to violate time consistency. What to do?
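Here is a small sketch of the trade-off in Python (my own illustration; the discount functions and numbers are not Lloyd’s). Exponential discounting $D(t) = \beta^{t}$ delivers the same verdict before and after a constant shift, but decays towards zero; a discount function with a non-zero floor keeps far-future duties from vanishing, but lets the comparative strengths of duties flip once both are pushed further into the future:

def exponential(t, beta=0.5):
    # Time-consistent, but decays towards zero.
    return beta ** t

def floored_hyperbolic(t, k=1.0, floor=0.1):
    # Has a non-zero asymptote, but is not time-consistent.
    return floor + (1.0 - floor) / (1.0 + k * t)

def stronger_duty(D, n_sooner, t_sooner, n_later, t_later):
    # Which duty is stronger under discount function D: saving n_sooner people
    # at t_sooner, or n_later people at t_later? (Assumes, purely for
    # illustration, that undiscounted strength scales with numbers saved.)
    return "sooner" if D(t_sooner) * n_sooner >= D(t_later) * n_later else "later"

for D in (exponential, floored_hyperbolic):
    verdict_now     = stronger_duty(D, 10, 0, 15, 1)     # the duties as they stand today
    verdict_shifted = stronger_duty(D, 10, 50, 15, 51)   # the same duties, 50 years later
    print(D.__name__, verdict_now, verdict_shifted)
# exponential:        'sooner' both times  -> time-consistent, but no floor
# floored_hyperbolic: 'sooner', then 'later' -> a floor, but a preference reversal

The floor of 0.1 here is arbitrary; the point is only that buying the asymptote in this way costs us reversals of exactly the kind that time consistency rules out.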
For Lloyd, the answer is simple. Time consistency is nice. But future people matter a lot more than any time consistent view says they do. So Lloyd suggests we should reject time consistency.
7. Implications for longtermism
I’ve said many times that I don’t believe there is any compelling one-shot objection to longtermism. Challenges to longtermism should aim to diminish the claimed moral importance of duties to benefit future people, not drive them down to nothing in one go.
A nice feature of Lloyd’s asymptotic approach is that it does just that. Because robust temporalism makes the strength of moral duties decay with temporal distance, it does tend to make things harder for longtermists. However, because robust temporalism makes the strength of moral duties plateau at a non-zero asymptote, robust temporalism does not deny that duties to save future people can outweigh duties to save present people, even if those future people are very far away, so long as there are enough of them.
On a robust temporalist approach, two questions assume fundamental moral significance for the longtermist. First, just how far can the strength of duties to save future people decay? Ten times? A hundred? A thousand? The lower the eventual asymptote, the harder it will be to ground strong overall duties to save future people.
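For a rough sense of the stakes (my numbers, not Lloyd’s): suppose the discount factor on duties to save bottoms out at an asymptote $D_{\infty}$, and suppose for illustration that undiscounted strength scales with the number of people saved. Then a duty to save $N$ far-future people outweighs a duty to save $M$ present people only when

$D_{\infty} N \geq M, \quad \text{that is,} \quad N \geq M / D_{\infty}.$

With $D_{\infty} = 1/1000$, longtermists must point to at least a thousand far-future beneficiaries for every present beneficiary forgone; with $D_{\infty} = 1/10$, only ten.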
Second, just how fast does the strength of duties to save future people decay? If duties to save future people decay quite quickly in strength, then robust temporalism will take a good bite out of longtermist duties, even for those who think that humanity faces high levels of existential risk in the next few centuries. By contrast, if duties to save future people decay more slowly over time, then intramural debates among longtermists about the temporal location of those who need our help will assume a great deal of moral importance.
8. Wrapping up
I am not a robust temporalist. I am a consequentialist. I don’t have much room for special obligations in my normative theories, so it will surprise nobody to learn that I don’t want to add a new type of special obligation to my normative theories.
However, I have to say that Lloyd’s discussion considerably improved my opinion of robust temporalism. Before engaging with Lloyd’s paper, I shared the opinion of many philosophers that there could be no serious moral basis for pure temporal discounting, and that the idea was popular only among social scientists who needed it for technical mathematical reasons.
I now think that was a mistake. There is a surprising amount that can be said in favor of robust temporalism, enough so that I think there is a good case for those interested in longtermism to take robust temporalism seriously as a relevant and plausible moral theory.
