The perils of trying to optimize your morality

This story was originally published in The Highlight, Vox’s member-exclusive magazine.
Lolita Steuber



The allure of optimization

In recent years, many people have found themselves striving to do the best possible thing, not just an okay thing or a good thing.

This "moral optimization" mindset is spreading, especially among social activists who aim to use data and reason to maximize their impact and those who are concerned about living meaningful lives.

The roots of moral optimization

The idea of optimizing morality has a long history, rooted in the development of data science and the Enlightenment.

European merchants developed double-entry bookkeeping, codified by Luca Pacioli in 1494, which emphasized quantifying and verifying every transaction.

This paved the way for the Age of Reason and the Enlightenment in the 1600s and 1700s, when thinkers like Francis Bacon and Johannes Kepler believed the quantitative rigor of bookkeeping could be applied to science.

They saw optimization as a godlike power: using mathematics to find the best possible arrangement, as exemplified by Samuel König's study of the wax-minimizing architecture of honeycombs.

Soon, people were trying to mathematize everything, including morality.

Irish philosopher Francis Hutcheson coined the classic slogan of utilitarianism, stating that actions should promote "the greatest happiness for the greatest number." He attempted to reduce morality to mathematical formulas.

Utilitarian philosopher Jeremy Bentham sought to create a "felicific calculus" to determine the moral status of actions using math. He believed that actions are moral to the extent that they maximize happiness or pleasure.

As the Industrial Revolution took off, economists like Adam Smith championed efficiency, and the pursuit of maximization spread through commerce, eventually yielding consumer capitalism and improved living standards.

Progress in computer technology in the 20th century further fueled the dream of optimal morality, with AI systems now being developed to infuse more rationality into moral decision-making.

How moral optimization is used

Today, many people believe morality can be optimized, as seen in the popularity of "spirit tech" like meditation headsets that aim to enhance enlightenment through neurofeedback.

Effective altruists and rationalists advocate using data and probabilistic thinking to maximize the good their actions produce.

AI is at the forefront of moral optimization challenges, with researchers attempting to program ethical reasoning into AI.

Some even believe AI could come to outperform humans at ethical reasoning, and that we should hand moral decision-making over to this "Homo sapiens 2.0."

The problems with moral optimization

Optimizing morality is problematic because morality is notoriously contested: different moral theories often contradict one another.

Different kinds of moral good can also conflict on a fundamental level, such as a woman facing a trade-off between becoming a nun or becoming a mother.

Builders of moral machines face the challenge of choosing which moral view to encode, navigating the tension between majority and minority views, and accounting for the plurality of moral theories.

Furthermore, emotions are inseparable from morality, motivating our moral behavior and potentially being essential for moral progress.

If we insist on mathematizing morality, we may ignore concepts of good that can't be easily quantified.

Why moral optimization is seductive

Data-driven optimization works well in some domains, such as drug development or flight scheduling, where predictability and objectivity are highly valued.

However, optimization does not work as well when trying to decide on the "optimal" moral response or career pathway.

Feminist philosophers suggest that the claim to objectivity offers us the dream of invulnerability, creating a sense that decisions are not our own and therefore cannot be wrong.

Optimizing makes being human feel less risky, but it means giving up something extravagantly precious: our humanity.

The optimal stopping point for optimization

The idea of an "optimal stopping point" for optimization suggests that it can be counterproductive to spend too much time gathering data for decision-making.

Herbert Simon's concept of "satisficing" involves opting for a "good enough" choice rather than the optimal choice.

This approach is wiser for moral life, allowing for multiple "good enough" options and acknowledging incommensurability among different values.
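The contrast between chasing the optimum and settling for "good enough" can be sketched in code. Below is a minimal, hypothetical illustration (not from the article): the first function follows the classic secretary-problem heuristic of observing roughly the first 37 percent of options before committing, while the second implements Simon-style satisficing with a "good enough" threshold. Function names and parameters are illustrative assumptions.

```python
def optimal_stopping_choice(candidates, explore_fraction=0.37):
    """Secretary-problem heuristic: observe the first ~37% of options
    without committing, then take the first one that beats them all."""
    cutoff = int(len(candidates) * explore_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value
    return candidates[-1]  # forced to take the last option

def satisficing_choice(candidates, threshold):
    """Simon-style satisficing: take the first option that is
    'good enough', without searching for the global optimum."""
    for value in candidates:
        if value >= threshold:
            return value
    return candidates[-1]  # nothing met the bar; settle for the last
```

The satisficer stops as soon as any option clears its threshold, which is the article's point: for moral life, several options can be "good enough," and the search itself has a cost.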

A new humanism

Instead of optimization culture, a new humanism is needed.

This involves embracing our human condition, acknowledging our limitations, and appreciating the messy, unquantifiable parts of ourselves that enable us to care deeply about others.

When making decisions, it is important to remember that there will be things beyond our control and to extend compassion to ourselves and others.

The fact that moral life cannot be neatly pinned down is a source of freedom and richness, something to be cherished rather than lamented.