Book Review of What We Owe the Future, by William MacAskill.[1]

An issue that has long divided scholars is the question of how much weight to give to the interests of future generations, especially when making decisions of significant public importance. On one side of this issue there have been those like the University of Chicago law professor Eric Posner, who essentially argue that future citizens should receive no weight in decisions made today. In a democracy, it is argued, only the interests of currently living, voting citizens are considered. They can, if they so choose, take the welfare of future citizens into account, but it is their right not to do so as well.

On the other side are those, like the late influential philosopher Derek Parfit, who take the view that future citizens should receive as much weight as those alive today. Future generations are not around to express their preferences to us, so we should do our best to accommodate what we think they might want or care about. Moreover, the time period in which a person lives does not dictate their degree of humanity, so granting them anything other than equal weight amounts to an injustice.

Among philosophers today, the Parfitian view appears common. The Oxford philosopher John Broome takes a similar position, arguing we should not “discount the future” in a manner akin to discounting cash flows in accounting. This perspective remains rare among economists, however, who tend to hold views much closer to the Posner position, although not always for the same reasons.

Economists tend to view discounting in social policy as a natural extension of discounting in financial analysis. They are also trained to respect individuals’ preferences. Absent market failures, standard textbook treatments hold, rationally self-interested behavior leads to an efficient allocation of resources. Because many people exhibit a natural time preference, the argument goes, analysts should incorporate that preference into their analysis.
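The mechanics of discounting are compound interest run in reverse. As a rough illustration (my own numbers, not the author's), a short sketch of how steeply even modest discount rates shrink far-future benefits:

```python
def present_value(future_benefit, annual_rate, years):
    """Discount a benefit received `years` from now back to today's value."""
    return future_benefit / (1 + annual_rate) ** years

# A $100 benefit arriving in 100 years, at three illustrative rates:
for rate in (0.0, 0.03, 0.07):
    print(f"rate {rate:.0%}: ${present_value(100, rate, 100):.2f}")
```

At a 0% rate the benefit keeps its full $100 value; at 3% it shrinks to roughly $5, and at 7% to about a dime. This is why the choice of discount rate dominates debates over policies whose payoffs lie generations away.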

It is against this backdrop of disagreement that Oxford philosopher Will MacAskill has released an important book, What We Owe the Future, which makes an unabashed case for “longtermism.” Longtermism is a philosophy that advocates taking an extremely long-run view of ethical questions and placing great weight on the interests of future people. Aside from being a professor, MacAskill is a leader of the Effective Altruism movement, which argues for evidence-based philanthropy, and a co-founder of the nonprofit 80,000 Hours, which advises young people on how best to use their careers for good.

What We Owe the Future is the clearest, best-argued case yet for why we should care about future generations. Already it is making waves in the media, and its message is likely to resonate with the brightest young people—those who yearn to make a difference with their lives.

We should take this book very, very seriously. At the same time, there is a lot to disagree with in it, especially for those who value individual liberty and free markets. MacAskill is an advocate for caution. He argues we should spend much more time—and presumably money—trying to make new technologies safer. In that sense he is an advocate not of progress but of slowing progress down in the name of safety. This is especially true with respect to artificial intelligence, but AI is far from the only technology he is concerned about. And while he seems to support deregulation in some areas, for example by allowing human challenge trials to test new medicines, I suspect he would like to see much more regulation of other technologies.

Even so, MacAskill’s new book is a breath of fresh air. It does not feel arcane or overly academic, and it does not get mired in technical debates, like those typically found in academic discussions of discounting. Rather than pick fights with economists, he smartly sticks to philosophy, which is what he does best. This is the kind of book that has the potential to change people’s minds, even if it doesn’t have that impact on everyone.

What We Owe the Future begins by laying out the philosophical case for treating future citizens equally to ourselves. MacAskill recounts a famous example from Parfit about a girl walking in the forest who cuts her foot on a piece of glass. Should it matter when this event happens—today or 100 years from now—if the pain experienced is the same? MacAskill believes, like Parfit, that the timing of the event is irrelevant from a moral point of view.

MacAskill goes on to explain how value systems in society can become locked in, and how this can work in the direction of either good or evil. Slavery was a horrific institution that was accepted for many thousands of years. Even some of the greatest minds in human history accepted it, highlighting the hold that deeply ingrained value systems have on our thinking. MacAskill credits the abolitionist movement with bending society’s values toward justice, demonstrating that moral progress is indeed possible in this world, even if it is very hard to achieve.

MacAskill argues that the arrival of artificial general intelligence (AGI)—technology that would enable machines to perform tasks as well as or better than humans—creates another potential for a long-run lock-in of values. Whoever designs AGI at its inception will determine how the technology responds in situations with important ethical implications. Once these machines are let out into the world, they will be very hard to contain or change since, by definition, they will be smarter than most people. Experts disagree on when AGI will arrive, but some believe it could be as soon as the next few decades or even the next few years. If so, MacAskill believes, we live at a particularly important moment in human history.

Much of What We Owe the Future centers on catastrophic risks, an issue that receives considerable attention in the longtermist community. Risks associated with AGI, runaway global warming, asteroid collisions, pandemics (including from the use of biological weapons), and nuclear war are explored at length. These are risks that could plausibly lead to the annihilation of the human race or to a permanent return to pre-industrial standards of living. MacAskill argues for taking a careful, analytic approach to potentially dangerous new technologies, in some cases delaying their use for extended periods, or even indefinitely, until they are understood well enough to be controlled.

The book also includes a lengthy section on animal rights, including a fascinating discussion of ways to account for animal wellbeing in a utilitarian framework. There is a chapter on population ethics, which involves questions about the optimal size of the human population. It is perhaps the best introduction to the topic yet written.

Unanswered questions

MacAskill makes a lucid and persuasive case for longtermism. Where the book could have been stronger is with respect to practical application. The ethics of longtermism are clear and—to be honest—fairly anodyne. Making the philosophical case for discriminating against future generations, or indeed any class of people, would have been harder. In fact, anyone who even vaguely believes in equality could be forgiven for walking away from this book thinking they are a longtermist. But this may not actually be the case.

MacAskill himself lives an extremely ascetic life, giving much of his money to charity and various causes. As a utilitarian, he apparently believes this is consistent with increasing wellbeing in society. I give him credit for doing what he thinks is right, but his lifestyle is also very much outside the mainstream of societal norms. It’s much easier to preach longtermism than it is to practice it, and even if MacAskill has the psychological wherewithal to live this way, most people will not.

In fact, the demands a longtermist philosophy puts on society are one of the primary reasons relatively few economists hold the view that all generations should be treated equally, for example in a cost-benefit analysis. The economist Kenneth Arrow is probably most famous for this view, having argued that fairer treatment of future generations could require investing two-thirds or more of current national income, which seems devilishly high to many people.

For this reason, it’s not even obvious that giving away most of one’s money to charity is the right approach under a longtermist view. Many of the charities associated with MacAskill and the effective altruism movement, quite admirably, give resources to the poor in developing countries. This is consistent with a utilitarianism that emphasizes present wellbeing, but it is probably not consistent with longtermism.


As compelling as it is to provide malaria bed nets to poor children in Africa, investing most of one’s spare income in financial markets may technically be the more longtermist approach. The accumulation of wealth will translate into higher living standards for later generations, and probably a more technologically advanced civilization as well. Thus, an ascetic lifestyle may still be warranted if one is to be a longtermist, but the aim of the sacrifice is not to help those barely living at subsistence level today. Instead, the aim becomes accumulating wealth to leave behind after we are gone.
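The arithmetic behind this invest-then-bequeath logic is ordinary compounding. A hedged sketch, with illustrative numbers of my own rather than figures from the book:

```python
def future_value(principal, annual_real_return, years):
    """Grow an investment at a constant real (inflation-adjusted) rate of return."""
    return principal * (1 + annual_real_return) ** years

# $1,000 invested at a 5% real return and left to compound for a century:
print(f"${future_value(1000, 0.05, 100):,.0f}")
```

The same $1,000 that could buy bed nets today becomes roughly $130,000 of resources for future beneficiaries. Of course, this assumes returns persist for a century and that future recipients need the money less urgently than today's poor, which is precisely the tension described above.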

To be fair, the 80,000 Hours organization MacAskill is involved in does endorse a philosophy called “earning to give,” which involves spending the early part of one’s career making a lot of money and then donating most of one’s wealth to philanthropic causes later in life. But even if one waits to give the money away, there is still an opportunity cost: one likely forgoes even higher returns in financial markets. There may be ways to combine the two goals of helping the poor and making society richer, but very often they will be at odds with one another.

Despite these philosophical inconsistencies, few would have a problem with MacAskill’s charity. Large parts of his giving just don’t seem to be motivated by longtermism and may even be at odds with it. His precautionary approach to dealing with catastrophic risks is more philosophically consistent, but still problematic in some ways. He is willing to wait extremely long periods of time—potentially sacrificing opportunities and wellbeing for many generations of individuals—so that society can get a handle on potentially dangerous technologies. Perhaps such an approach could prevent widespread catastrophe, but how does one go about deciding which risks to focus on?

Philosophers like Nick Bostrom have come up with arguments for how an out-of-control artificial intelligence tasked with making paperclips could destroy the world. The example is meant to be illustrative, but it highlights how there is almost no technology that is truly safe. Moreover, some dangerous technologies can be used for good. Should the invention of nuclear weapons have been delayed if the alternative had been the Allies losing World War II? And what about imagined risks that aren’t real, such as those associated with genetically modified foods or vaccines? Blocking these technologies would entail substantial costs to humanity for little or no benefit.

Another problem we face in society today is policy paralysis, as evidenced by our inability to build infrastructure and by a slowdown in global innovation. What society needs more than anything now is a call to action, not a call for more deliberating. MacAskill runs the risk of providing intellectual firepower for further complacency and stagnation. The question of resilience is also barely mentioned in the book. As Nassim Taleb explained in his book Antifragile: Things That Gain from Disorder, a culture that grows too accustomed to avoiding risk may never develop the skills needed to cope with it once it eventually arises.



What We Owe the Future is an outstanding achievement. Anyone interested in questions of intergenerational justice, existential risks, artificial intelligence, and animal rights should read it. Indeed, the book runs the gamut of cutting-edge philosophical questions. It’s a fascinating introduction to these topics, and I suspect many readers will find MacAskill’s answers not only convincing, but inspirational.

However, there remain reasons to be skeptical that the philosophical system advocated in this book is the best prescription for society to follow. The book sometimes reads like promotional material to lure smart, ambitious young people into the longtermist movement. It may even succeed in doing so. But I wish MacAskill were more straightforward with these readers about the sacrifices his philosophical system entails. Whether we can meet the high standard he sets for us is an open question. On the other hand, for the longtermist, we have plenty of time to wait for the answer.


Parfit, Derek. Reasons and Persons. Oxford University Press, 1984.

Posner, Eric A. “Agencies Should Ignore Distant-Future Generations.” The University of Chicago Law Review, Vol. 74, No. 1, Symposium: Intergenerational Equity and Discounting (Winter 2007), pp. 139–143. Reprinted by the University of Chicago Law School, Chicago Unbound.


[1] William MacAskill, What We Owe the Future.

* James Broughel is
