Papers I learned from (Part 1: Time discounting, consistency and special obligations)

This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save – call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept. I also defend Robust Temporalism against several common objections, and I highlight its relevance to a number of practical policy debates, including longtermism. My conclusion is that Robust Temporalism is a live moral option that deserves to be taken much more seriously in the future.

Harry Lloyd, “Time discounting, consistency and special obligations: a defence of Robust Temporalism”

1. Series introduction

The purpose of this blog is to use academic research to drive positive change within and around the effective altruism movement. One way to do that is to construct arguments on the basis of academic research. But another way to drive positive change is to let the research speak for itself.

This series, Papers I learned from, highlights papers that have informed my own thinking and draws attention to what might follow from them. I will do my best to present papers in a rigorous but accessible way, opening up the contents of each paper to a wider audience.

For the most part, I want to take my own voice out of this series. This isn’t a series about papers I agree or disagree with. It’s a series about papers that I learned from. I want to highlight what the papers say and what might be learned from them.

2. A defence of robust temporalism

Harry Lloyd is a PhD student in the philosophy department at Yale University, a Research Affiliate at the Center for AI Safety, and a former Global Priorities Fellow at the Global Priorities Institute, Oxford.

Lloyd’s paper, “Time discounting, consistency and special obligations: a defence of Robust Temporalism,” won the Essay Prize for Global Priorities Research in 2021. The paper helped many in the field to think seriously about a position which previously was not taken as seriously: that we might have stronger moral duties towards agents who are nearer in time to us than we do towards agents who are further away.

Philosophers have, as a rule, been quite hostile to this thought. For example, Simon Caney writes:

A person’s place in time is not, in itself, the right kind of feature of a person to affect his/her entitlements. For example, it does not make someone more or less deserving or meritorious. Similarly, it does not, in itself, make anyone’s needs more or less pressing … It is not the right kind of property to confer on people extra or reduced status.

And Tyler Cowen and Derek Parfit wonder:

Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your twenty-first birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?

By contrast, many other fields are friendlier to the idea that moral duties decline with temporal distance. Economists standardly discount the value of future welfare, not merely because of uncertainty about whether that welfare will be realized, but also simply because it lies in the future. Indeed, the rate at which future welfare is discounted may underlie one of the most famous disagreements in the economics of climate change. Similarly, almost all cost-effectiveness models used by policymakers today apply a “pure” rate of temporal discounting, assigning less value to benefits reaped in the future merely because they lie in the future.

Lloyd asks us to consider whether the philosophers may have been too hasty in their opposition to some types of pure temporal discounting. Let’s start by formulating the view that Lloyd defends.

3. Robust temporalism

Lloyd uses robust temporalism to denote the view that “mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save.” For example, robust temporalists think that other things equal, I (now) have a stronger moral duty to save a drowning child today than to save a child from drowning next year.

This view has a few features that are worth noting. First, this is a deontic theory – a theory about the strength of moral duties. It isn’t an axiological theory, about the values of outcomes. Robust temporalism does not say that future welfare is less valuable than present welfare, but only that we have stronger duties to save present people than we have to save future people. In this sense, robust temporalism may diverge from the approach taken by economists and policymakers who apply a discount rate directly to the value of future welfare.

Second, this is an exceptionless theory. Robust temporalists think that temporal proximity “always and without exception” strengthens duties to save. This contrasts with accounts such as Andreas Mogensen’s discussion of kinship-based discounting, on which relationships such as kinship may ground temporal discounting, but only in cases where they are present.

With these clarifications in mind, let’s look at Lloyd’s presentation of the robust temporalist view.

4. Regions of moral concern

On most nonconsequentialist moral theories, agents have special obligations to the people they stand in morally significant relationships with. For example, we may have strengthened moral duties towards family, friends, teachers, compatriots, or those we have wronged in the past. Lloyd suggests that temporal proximity belongs on the list of morally significant relationships that modulate the strength of moral duties.

Lloyd begins with the notion of a region of moral concern. For Lloyd, an agent’s region of moral concern is the region of spacetime that it is nomologically possible for them to influence (that is, possible given the laws of nature).

Regions of moral concern, from Lloyd (2021).

Regions of moral concern are so-called because on most views, agents could not have duties to affect anything outside of their region of moral concern. After all, agents cannot affect anything outside of their region of moral concern, and it is strange for agents to have duties to do what they cannot do.

Agents in different places and times often have a great deal of overlap in their regions of moral concern. This, Lloyd suggests, means that those agents share a moral burden for their regions of shared moral concern. Lloyd argues that this shared moral burden is a morally significant relationship, grounding special obligations in just the same way as other morally significant relationships such as kinship or friendship. That is, other things equal, the strength of some moral duties including the duty to save increases in the overlap between agents’ regions of moral concern.

We have strong duties towards those near to us in time and space, because our regions of moral concern mostly overlap theirs. By contrast, on Lloyd’s view we have weaker duties towards those further from us in time and space, because our regions of moral concern have less overlap with theirs.

One nice feature of this approach is that it seeks to normalize the moral importance of temporal distance by assimilating it to a familiar model of morally significant relationships grounding special obligations. Many philosophers have thought that there is something especially arbitrary about robust temporalism. If Lloyd is right, then there may be no more arbitrariness in favoring those near to us in time and space than there is in favoring those who happen to be our friends or kin.

5. Spatial discounting

Many critics of robust temporalism have thought that any defense of the idea that moral duties diminish with temporal distance would also imply that moral duties diminish with spatial distance.

That might not be ideal. In particular, it is not a moral thought that I would like to inject into discussions of effective altruism. In a context where wealthy audiences located primarily in the global north are tasked with the disbursement of resources for the benefit of others, the very last thing I would like to say is that ceteris paribus, those audiences have a stronger obligation to benefit those suffering in the global north than to benefit those in the global south. One of the most attractive features of Lloyd’s theory, to my mind, is that it shows how it is possible to let moral duties diminish with temporal distance but not with spatial distance.

To see the point, suppose that an agent J now is considering her obligations towards two agents located at the same time t, possibly later than now. Suppose that those agents are located at different spatial coordinates v, w, with one further away from the agent J’s position than the other.

J’s region of moral concern overlaps the first agent’s region of moral concern in areas A, B and overlaps the second agent’s region of moral concern in areas B, C. Hence J’s region of moral concern cannot overlap one agent’s more than the other’s unless A and C differ in size. But almost any plausible theory and measure will treat A and C as the same size, meaning that J’s region of moral concern overlaps both agents’ regions of moral concern equally, and hence that J’s moral duties towards one agent are not diminished more than her duties towards the other.
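The geometry here can be made concrete with a toy calculation. The sketch below is my own illustration, not Lloyd's formalism: it works in 1+1-dimensional Minkowski spacetime with c = 1, measures overlap as the area of intersecting future light cones up to a finite time horizon, and confirms that the overlap is insensitive to spatial position while still shrinking with temporal distance:

```python
# Toy model in 1+1-dimensional Minkowski spacetime (units with c = 1).
# J sits at the origin; her region of moral concern is her future light
# cone {(t, x) : |x| <= t}. An agent at time T, position v has future
# cone {(t, x) : |x - v| <= t - T}. "Overlap" here is the area of the
# intersection of the two cones, integrated up to a finite horizon H.

def overlap_area(T, v, H, dt=1e-3):
    """Area of overlap between J's future cone and the future cone of
    an agent located at spacetime point (T, v), up to time horizon H."""
    area, t = 0.0, T
    while t < H:
        lo = max(-t, v - (t - T))  # left edge of the intersection at time t
        hi = min(t, v + (t - T))   # right edge of the intersection at time t
        area += max(0.0, hi - lo) * dt
        t += dt
    return area

# Two agents at the same time T = 1 but different spatial positions:
near = overlap_area(T=1.0, v=0.2, H=10.0)
far = overlap_area(T=1.0, v=0.9, H=10.0)
print(abs(near - far) < 1e-6)     # True: no discount for spatial position

# An agent at a later time has strictly less overlap with J:
later = overlap_area(T=2.0, v=0.0, H=10.0)
print(later < near)               # True: temporal distance still matters
```

The agents at v = 0.2 and v = 0.9 both lie inside J's future cone, so their own cones are wholly contained in hers and the overlaps come out identical, which is the analogue of Lloyd's point that areas A and C have the same size.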

This means that nothing in robust temporalism forces us to discount merely for spatial location. That is a very good thing.

6. Temporal consistency

Theories of temporal discounting face a bit of a nasty pickle. On the one hand, we’d like them to be time-consistent: we don’t want the comparative strengths of duties to flip-flop as we move forwards and backwards throughout our lives. On the other hand, we’d like an asymptotic limit on how far the strengths of our moral duties decay. It would be bad if looking arbitrarily far into the future could reveal people to whom we had arbitrarily weak moral duties. Surely temporal location cannot matter that much.

The problem is that we cannot have both. I’ll present the problem using Lloyd’s notation, which is the best notation I know of for making the point in deontic rather than axiological terms.

Let \Delta_{\text{save}}(t) denote the discount factor on duties to save agents, decreasing in the temporal distance t between ourselves and those agents. Let \mu_{\text{save}}(n_s) represent the undiscounted strength of our duty to save n_s people. Bracketing some technicalities, duties are time-consistent if they aren’t changed by adding some constant amount of time i into the mix. That is, if \Delta_{\text{save}}(s) \mu_{\text{save}}(n_s) > \Delta_{\text{save}}(t) \mu_{\text{save}}(n_t), then also for any i we should have \Delta_{\text{save}}(s+i) \mu_{\text{save}}(n_s) > \Delta_{\text{save}}(t+i) \mu_{\text{save}}(n_t).
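To make the condition concrete, here is a small numeric check. This is my own illustration; the particular discount functions and numbers are assumptions, not Lloyd's. It shows that exponential discounting preserves every such comparison under time shifts, while a hyperbolic function does not:

```python
# Check the time-consistency condition from the text: if
# D(s) * mu(n_s) > D(t) * mu(n_t), the same comparison should hold after
# adding any delay i to both times. Here mu is taken to be linear in the
# number of people saved, purely for illustration.

def consistent(delta, s, t, n_s, n_t, delays):
    """True iff the comparison between the two duties never flips
    under any shift i in `delays`."""
    sign = delta(s) * n_s > delta(t) * n_t
    return all((delta(s + i) * n_s > delta(t + i) * n_t) == sign
               for i in delays)

exponential = lambda t: 0.97 ** t   # D(t) = delta^t, with delta = 0.97
hyperbolic = lambda t: 1 / (1 + t)  # D(t) = 1/(1+t): not exponential

delays = range(0, 100, 5)
# Duty to save 2 people at t = 1 versus 3 people at t = 10:
print(consistent(exponential, 1, 10, 2, 3, delays))  # True
print(consistent(hyperbolic, 1, 10, 2, 3, delays))   # False
```

Under the hyperbolic function the comparison flips once the shift i exceeds 16, which is exactly the kind of flip-flopping that time consistency rules out.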

Here’s a sad theorem. Time consistency is satisfied only by exponential discount rates \Delta_{\text{save}}(t) = \delta^t for some \delta. That’s sad, because exponential discount rates behave like this:

Exponential temporal discounting, from Lloyd (2021)

This puts no asymptotic floor on the strength of our duties as we move sufficiently far out in time. We’d like a view that behaves more like this, or perhaps a fancier version thereof:

Asymptotic temporal discounting, from Lloyd (2021)

To get this, we need to violate time consistency. What to do?

For Lloyd, the answer is simple. Time consistency is nice. But future people matter a lot more than any time consistent view says they do. So Lloyd suggests we should reject time consistency.
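As a sketch of what rejecting time consistency buys, consider the discount function below (my construction, not Lloyd's): it adds a non-zero floor f to an exponential curve, so duties never decay below a fixed fraction of full strength, and, just as the theorem predicts, pairwise comparisons can flip as time passes:

```python
# A discount function with a non-zero asymptotic floor:
# delta_floor(t) = f + (1 - f) * d^t. Because it is not a pure
# exponential, the theorem says it must violate time consistency.

def delta_floor(t, f=0.1, d=0.95):
    """Discount factor that decays exponentially towards a floor f."""
    return f + (1 - f) * d ** t

# The floor: even duties to extremely distant people retain ~10% strength.
print(delta_floor(10_000))                       # ~ 0.1

# Time-consistency failure: 1 person at t = 0 versus 5 people at t = 50.
now = (delta_floor(0) * 1, delta_floor(50) * 5)
shifted = (delta_floor(100) * 1, delta_floor(150) * 5)
print(now[0] > now[1], shifted[0] > shifted[1])  # True False: it flips
```

Viewed from today, the one nearby person wins; viewed from a vantage point 100 years on, both duties have decayed close to the floor and sheer numbers win instead.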

7. Implications for longtermism

I’ve said many times that I don’t believe there is any compelling one-shot objection to longtermism. Challenges to longtermism should aim to diminish the claimed moral importance of duties to benefit future people, not drive them down to nothing in one go.

A nice feature of Lloyd’s asymptotic approach is that it does just that. Because robust temporalism makes the strength of moral duties decay with temporal distance, it does tend to make things harder for longtermists. However, because robust temporalism makes the strength of moral duties plateau at a non-zero asymptote, robust temporalism does not deny that duties to save future people can outweigh duties to save present people, even if those future people are very far away, so long as there are enough of them.

On a robust temporalist approach, two questions assume fundamental moral significance for the longtermist. First, just how far can the strength of duties to save future people decay? Ten times? A hundred? A thousand? The lower the eventual asymptote, the harder it will be to ground strong overall duties to save future people.
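To see why the asymptote matters so much, here is a back-of-the-envelope sketch. It is my simplification, not Lloyd's model: assume duty strength to far-future people has decayed all the way to a flat floor f, so that duties to n such people outweigh the duty to one present person exactly when n · f > 1:

```python
# Back-of-the-envelope: with a flat asymptotic floor f on discounted
# duty strength, duties to n far-future people outweigh the duty to one
# present person exactly when n * f > 1, i.e. once n exceeds 1/f.

for decay in [10, 100, 1000]:   # strength decays 10x, 100x, 1000x
    f = 1 / decay               # the asymptotic floor
    n = decay + 1               # smallest whole n with n * f > 1
    print(f"floor 1/{decay}: {n} future people outweigh 1 present person")
```

Even a thousandfold decay only raises the bar to about a thousand future beneficiaries, which is why an asymptotic view cannot dismiss longtermism in one go.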

Second, just how fast does the strength of duties to save future people decay? If duties to save future people decay quite quickly in strength, then robust temporalism will take a good bite out of longtermist duties, even for those who think that humanity faces high levels of existential risk in the next few centuries. By contrast, if duties to save future people decay more slowly over time, then intramural debates among longtermists about the temporal location of those who need our help will assume a great deal of moral importance.

8. Wrapping up

I am not a robust temporalist. I am a consequentialist. I don’t have much room for special obligations in my normative theories, so it will surprise nobody to learn that I don’t want to add a new type of special obligation to my normative theories.

However, I have to say that Lloyd’s discussion considerably improved my opinion of robust temporalism. Before engaging with Lloyd’s paper, I shared the opinion of many philosophers that there could be no serious moral basis for pure temporal discounting, and that the idea was popular only among social scientists who needed it for technical mathematical reasons.

I now think that was a mistake. There is a surprising amount that can be said in favor of robust temporalism, enough so that I think there is a good case for those interested in longtermism to take robust temporalism seriously as a relevant and plausible moral theory.

Comments

8 responses to “Papers I learned from (Part 1: Time discounting, consistency and special obligations)”

  1. Richard Y Chappell

    It’s a formally interesting proposal, but I don’t think anyone should take seriously the core normative claim that “shared moral burden” grounds special obligations. That would imply maximal discounting of the interests of pure moral patients, such as the severely cognitively disabled.

    That is, this argument for robust temporalism has the implication that we should care more about far-future people (who at least share *some* moral burdens with us) than about present severely cognitively disabled individuals. I don’t think anyone should believe that.

    1. Richard Y Chappell

      Perhaps also worth noting that wealthy people obviously have “shared moral burdens” that poor people lack, simply in virtue of the greater resources at their disposal. So the purely formal point about avoiding spatial discounting doesn’t help with the more general problem of the theory implying that privileged communities should discount the interests of those who are less privileged.

      This is really core to the nature of the theory at hand, so I think you should update your post (and your opinion) to reflect this shortcoming.

      1. Harry R. Lloyd

        Thanks for engaging with the post Richard. I should mention that David is writing about an early, working paper draft of this article, so some of my turns of phrase — like “shared moral burden” — are probably infelicitous. To clarify: the idea is not supposed to be that if you and I have a shared moral obligation to donate $10,000 to Oxfam, then you and I have a “shared moral burden” that strengthens our obligations to each other (relative to our obligations to people who do not have $10,000). Rather, the rough idea is that there is a morally salient difference between my relationship with someone who needs aid now and someone who needs aid in 100 years time, viz. that the laws of nature preclude me from sharing a range of moral duties with the future needy person that they do not preclude me from sharing with the presently needy person. And in order to erase that difference, you’d have to change an ontologically pretty basic fact about me, viz. my spatiotemporal position.

        As for duties to the severely cognitively impaired: people like Kagan have pointed out there are interesting modal facts about cognitively impaired *humans* that might mitigate other reasons to discount their interests, so let’s talk about cognitively unsophisticated animals instead. One option here is to embrace the conclusion that we *should* care more about the interests of far future people than about the interests of present-day animals. Another option — paralleling what I said in my first paragraph — is to point out that it’s not the laws of nature combined with ontologically basic facts of spatiotemporal position that preclude me from sharing moral duties with animals. Rather, this is precluded by ontologically far less basic facts about the animals’ moral capacities.

        (FWIW: I myself ultimately think that we should ultimately reject time discounting. But it’s still worthwhile to steelman the arguments in favour!)

        1. Richard Y Chappell

          Thanks Harry, that’s a helpful clarification!

          fwiw, I think the relationship of “not being precluded by the laws of nature from sharing moral duties” is intuitively a lot less eligible to count as “special” than that of straightforwardly “sharing moral duties” (and even the latter seems quite a stretch, compared to real personal relationships). So I don’t think anyone should really take this to be a “serious moral basis” for time discounting.

          Still, I very much agree with your last sentence. Many ideas are well worth exploring as an academic exercise, even if it would be deeply irrational for anyone to actually be persuaded by them. Thanks for writing the paper!

          1. Harry R. Lloyd

            Yes, I have some sympathy with this response — although I also wonder whether there are ways of presenting the relationship of “not being precluded by the laws of nature from sharing moral duties” in ways that make it seem *more* special than the ordinary grounds of special obligations, rather than less. This kind of relationship depends on pretty fundamental facts about one’s position in the basic moral landscape of the universe, rather than being grounded in tricky to pin down human practices. (Compare it, for instance, with putatively special relationships like ‘co-citizenship.’) And of course, ‘reflective equilibrium’ can work in both directions — a theoretical framework that is not highly plausible in and of itself can become more plausible if it fits well with case-based intuitions.

  2. JWS

    Thank you for sharing this paper, David. I enjoyed reading this post and even gave the paper a quick view. I found the writing very accessible compared to many other academic papers, so well done Harry 🙂

    In Section 4 and its implications, my more consequentialist intuitions start to protest against the argument. Firstly, the use of the Minkowski Diagram format seems to implicitly assume a consequentialist framing, focusing on our causal influence on the future to ground the strength of our special obligations. But that seems very, very close to just directly caring about the consequences of our actions. The key concern, in both cases, is causal influence rather than temporal distance per se. However, this distinction might blur in edge cases. For instance, if there were a portal taking me instantaneously from point A to point C in Lloyd’s Figure #1, that would be adjacent to me, and my Minkowski diagram would appear highly convoluted from your perspective (and vice versa). This raises questions about what constitutes the ‘far future’ or how our intuitions about temporal discounting in morality conflict with exotic space-time configurations.

    Secondly, I believe Lloyd somewhat downplays how spatial discounting is logically entailed by the ‘overlapping lightcones’ account he presents here. Although light takes only 1.3 seconds to travel from the moon to the earth (as shown in Lloyd’s Figure #5), the difference in moral obligation given by the lightcones of two observers will inevitably increase as our time horizon extends into the future, especially if our temporal discount rate has an asymptotic floor. Even if the discount for someone on the other side of the world is an OOM less than someone on the moon, this could lead to justifying significantly greater value for someone spatially closer than for someone marginally further away, all else being equal. (and fwiw, if humanity did establish a flourishing presence on the moon, I think our moral obligations would hold to them just as it does at the moment to those in the Global South, for example).

    Finally, my most intuitive disagreement lies in the logical progression of the argument, which doesn’t seem to properly ground what these special duties consist in, and why this is so directly tied to the overlapping volumes of lightcones. This might be because the scope of the paper is pure analytical philosophy rather than x-phi, but much criticism of longtermism’s emphasis on the moral patienthood of future people suggests that beyond a certain point in the future, many people believe (or at least implicitly argue) that our moral duties to additional moral agents diminish or become nonexistent. This is because those alive today belong to a special group (such as the ‘existing’) and thus have a special obligation beyond pure time preference. Like you, I find these arbitrary special obligations often resemble special pleading. Even if Lloyd provides a valid defense of Robust Temporalism, many critics of longtermism over the past year have not done so in the same way, imo.

    Apologies if this seems a bit rambly or if I’ve mixed things up. As you know, I’m not an academic philosopher, but I enjoy sharpening my thoughts on the papers and arguments you share on this blog. I hope you can keep it going despite the extra responsibilities of your new job. (congrats, by the way!)

    1. David Thorstad

      Thanks JWS!

      As always, I appreciate your engagement and the well-wishes. I’ll definitely keep this blog going.

      I want to let Harry take first crack at responding to this, and if he hasn’t chimed in within a day or so I’ll do my best to pretend to be Harry :).

      David

    2. Harry R. Lloyd

      Thanks for engaging with the paper JWS! I really appreciate the suggestion to consider cases in which the normal laws of physics break down (for instance cases involving portals between the present and the far future). I guess my theory predicts that if a portal opened up allowing far future people to influence the present moment, then my rationale for discounting these people’s interests would be eliminated. However, if a unidirectional portal opened up, only allowing me to influence the far future, then this would not weaken my basis for discounting. I’m really not sure what to say about these kinds of cases, but I definitely want to think more about them! (Of course, these kinds of cases illustrate that my arguments support *time* discounting only as a matter of *nomological* necessity. But this is modally much more robust than most of the other purported defences of “time” discounting, so I’m not at all bothered by this.)

      As for spatial discounting: I think that this will all depend on exactly how we precisify the notion of “degree of overlap” between regions of moral concern. The potential unboundedness of spacetime makes things difficult here. I did not attempt any such precisification in the working paper that David is citing, although I do attempt this in a more recent draft (which I’d be happy to share on request).

      I think your final remarks are well taken. My aim in this paper is to argue that some duties or other could be subject to some level of time discounting. But I don’t really try to settle exactly which duties these will be, or how severely we should discount them. Probably not severely enough, however, to justify dismissing longtermism out of hand!

      Thanks again — Harry
