Publications
Peer-reviewed journal articles:
'Empathetic Large Language Models, the Social Capacities, and Human Flourishing', Inquiry (2025), online first. With Avigail Ferdman.
Abstract: Large Language Models (LLMs) are capable of fluent human-like conversations and are increasingly emulating the human trait of empathy. Consequently, people are turning to LLMs for companionship, with interest in friendships and even romantic relationships with AI on the rise. This paper assesses the goodness of users' relationships with these empathetic LLMs under an alternative framework that takes human flourishing as its main normative concern, combining perfectionism—an influential philosophical approach to human flourishing—with an analytic examination of LLM environments. While empathetic LLMs hold some promise, we argue that certain properties of LLMs, and the way these models are currently being used in the form of AI companions, are likely to impoverish the users' social capacities conducive to human flourishing. Moreover, the proliferation of AI companions could lead people to further isolate themselves from others who could provide genuine friendships. We end by making some suggestions about the kinds of features empathetic LLMs should have to ensure these models are used as tools to practice social interactions and refine communication techniques that can ultimately be applied outside the virtual space.
Abstract: We are typically near-future biased, prioritising our present and near-future interests over our own distant-future interests. This bias can be directed at others as well, prioritising their present and near-future interests over their distant-future interests. I argue that, given these biases, and given a plausible limit on the extent to which we can permissibly prioritise our present interests over the present interests of strangers, we are morally required to prioritise the present interests of strangers over our distant-future interests. I also argue that a similar conclusion holds even if we are near biased only towards ourselves, and regardless of whether this bias is rational. And I show that my conclusions have interesting implications for the ethics of charitable giving, because they generate moral pressure to donate to charity those funds that would otherwise have gone into our long-term savings.
'Supererogation, Suberogation, and Maximizing Expected Choiceworthiness', Canadian Journal of Philosophy (2024), 53, pp. 418-432.
Abstract: Recently, several philosophers have argued that, when faced with moral uncertainty, we ought to choose the option with the maximal expected choiceworthiness (MEC). This view has been challenged on the grounds that it is implausibly demanding. In response, those who endorse MEC have argued that we should take into account the all-things-considered choiceworthiness of our options. I argue that this gives rise to another problem: under MEC, acts that we consider to be supererogatory are rendered impermissible, and acts that we consider to be suberogatory are rendered obligatory. I suggest a way to reformulate MEC to solve this problem.
Abstract: Most people have the intuition that, when we can save the lives of either a few people in one group or many people in another group, and all other things are equal, we ought to save the group with the most people. However, several philosophers have argued against this intuition, most famously John Taurek, in his article ‘Should the Numbers Count?’ They argue that there is no moral obligation to save the greater number, and that we are permitted to save either the many or the few. I argue in this article that, even if we are almost completely persuaded by these ‘numbers sceptics’, we ought not to just save the few. If the choice is simply between saving the many or the few, we ought to save the many.