Notes on

How to Be Perfect: The Correct Answer to Every Moral Question

by Michael Schur



Virtue Ethics

Aristotle says, in his Nicomachean Ethics, that our goal is eudaimonia (flourishing, or happiness). That is the end-all, be-all goal: something we want purely for its own sake, and toward which everything else aims.

To get it, however, we’ll need virtue. This is, as you can imagine, very hard.

What is virtue?
Simply, it is the good qualities of a thing: the qualities that make it good at what it does, at achieving its goals.

You aren’t born virtuous. It’s something you become. You are born with the potential.

Virtue is something you practice regularly. You become virtuous through virtuous actions. It’s a lifelong journey.

Nature, habit, and teaching. That’s all you need.
You have to be ready. You have to do it consistently. And you have to learn from wise men.

And virtue is something you need in exact quantities, says Aristotle. You don’t want too much of any quality, nor too little: too much courage becomes recklessness, too little becomes cowardice.

The golden mean is when you have the virtues balanced: the exact right amount of a given virtue, between excess and deficiency.

The Trolley Problem

Basically, you have a runaway trolley. There are five people on one track, and one person on another. The trolley is currently heading for the five people, but you can change its course, resulting in one death instead of five.

There are many variations of this question. For example, to the above version, it might be easy to say you’d pull the lever. But imagine you were a doctor. Would you kill one healthy person and harvest their organs to save five people in need of those exact organs? That certainly sounds less appealing.

What would you do if faced with the trolley problem?

Utilitarianism

Utilitarianism comes from Jeremy Bentham and John Stuart Mill. It is a branch of the school of ethical philosophy called “consequentialism,” in which only the results or consequences of actions matter.

A consequentialist aims to do what results in the most good and the least bad.
Bentham defined the most good as whatever brings the greatest happiness to the greatest number, and called this the ‘greatest happiness principle’.

Utilitarianism doesn’t really hold up under stress tests.
A doctor could kill one healthy man to save five in need of his organs.
A very happy pig may produce more utility than Socrates did in Athens (he annoyed a lot of people). So if you had to get rid of one, maybe you’d keep the pig and get rid of Socrates (given only this information).

Personally, I find utilitarian philosophy hard. The actual calculation is what makes me think so. Do you take into account potential lost or gained happiness?
But this is also a criticism of the philosophy: every time you calculate utility to test utilitarianism and reach a conclusion that implies poor moral guidance, a utilitarian can just say you did the calculations wrong. You didn’t take the full picture into account (related: the no true Scotsman fallacy).
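The calculation worry can be made concrete with a toy sketch. Everything here is made up for illustration: the scoring function, who counts as affected, and every number.

```python
# Toy illustration of a naive utilitarian "calculation": sum the happiness
# deltas of everyone an action touches, and pick the action with the highest
# total. All names and numbers below are invented.

def total_utility(happiness_deltas):
    """Sum the happiness gained or lost by everyone an action affects."""
    return sum(happiness_deltas)

# The trolley problem, naively scored:
do_nothing = total_utility([-100] * 5)          # five people die
pull_lever = total_utility([-100] + [-5] * 5)   # one dies, five are traumatized

options = [("do nothing", do_nothing), ("pull lever", pull_lever)]
best = max(options, key=lambda pair: pair[1])
print(best[0])  # prints "pull lever" under these made-up numbers
```

The critique in the text lives in the inputs: the list of affected parties and the size of each delta are guesses, so any unwelcome conclusion can be blamed on a "wrong" calculation rather than on the theory.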

I can see the argument. It depends on the level of granularity from which you examine the situation.

Doing the calculations, taking everything into consideration? It’s probably not possible.
Very often, you don’t know the long-term implications of your actions in the moment. How are you supposed to?

Deontology

Or, Kantian ethics, from Immanuel Kant.
Deontology is about duty or obligations.

Kant brought deontology to prominence. We should make rules from pure logic and follow them, as it is our duty.

The outcomes of your actions are irrelevant, just as long as you follow the rule absolutely. If you do, you are acting morally. If not, you aren’t.
This places morality more firmly within your circle of control, and is seemingly the opposite of utilitarianism (consequentialism), where only the results of your actions matter. No, Kant says: only your adherence to the rules you’ve made matters.

The central idea is that of the Categorical Imperative:

Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.

With the categorical imperative, you want to find rules that you stick to. But it isn’t only you: you have to make rules that everyone could stick to. Universal laws, as Kant calls them in the quote above.

My initial reaction: Wow. Do you really have to go all Game Theory and build Nash Equilibria for each rule?

There’s a second formulation of the categorical imperative, called the practical imperative, which is easier to follow in practice:

Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only.

Basically, don’t use people to get what you want. They are an end, not a means.

What makes Kant interesting to me is his disregard for the result of your actions. As long as you have good intentions (you created a rule for everyone to follow, which means it probably has to be good overall), the result of your following it is what it is. The just act is the dutiful execution of the rule.

Kant actively disregarded happiness as a meaningful pursuit in the imperatives. Happiness is subjective, so no universal rule could reliably lead to it. Therefore, it is irrelevant.
This is seemingly the opposite of virtue ethics and utilitarianism (again), both of which fixate on happiness as the highest ideal.

Contractualism

This is by Thomas Michael Scanlon.

An analogy for contractualism, from Pamela Hieronymi:

Imagine our crew has been at war with another crew for years, just slugging it out in a dense forest, firing on each other from trenches a hundred feet apart. It’s an absolute stalemate. Neither side has any advantage over the other, and no hope of ever gaining one. Exhausted and weary, we call a temporary truce and decide we somehow need to design and describe a mutually livable society; we need a set of rules that can be accepted by both sides, no matter how wildly different our views are (and we obviously hold very different views, hence the endless trench warfare).


Scanlon’s suggestion: We give everyone on both sides the power to veto every rule, and then we start pitching rules. Assuming everyone is motivated to actually find some rules in the first place—that everyone is reasonable—the rules that pass are the ones no one can reject.

Scanlon says that we are reasonable if, when two people agree, each is willing to constrain/modify their pursuit of their own interests to the same degree that the other person is.
This supposedly helps design a world where we can better coexist, rather than just look out for our own interests.
We change our own demands, such that others will also agree to them.

This seems similar to finding some collective Nash Equilibrium in a contract. Here we focus on finding rules together, rather than the individual doing it on their own (deontology).

Ubuntu

Ubuntu is a South African concept.
It’s about how we live through other people.

Compare René Descartes’ cogito ergo sum:
“I think, therefore I am” becomes “I am, because we are.”
We help each other. That’s how we come to be.

Moral desert

“Moral desert” is the idea that we should get rewarded for our moral behavior.

That, if you do good, you should get a reward for it.

Whataboutism

This is where someone shames someone else for caring about X when Y is far more dire.

Seems like a mashup of Tu Quoque and Red Herring, no?

Why it can be moral to break rules sometimes

As James C. Scott says, it can actually be moral to break the small, trivial rules sometimes. Like jaywalking. Only when it makes sense, though.

I agree here. Who says all rules are perfect? And who says we get even close to an optimal state by always abiding by them?
Sometimes it can be more just not to follow the rules.
But if nobody ever follows the rules, chaos ensues. It’s a fine balance.

When is it okay to break rules?
In one sense, never: we need to realize that we ARE breaking a rule, and should be mindful of it.
And of course, it shouldn’t harm others, or put them at a disadvantage.

Moral exhaustion

Trying to do the right thing every time is hard. Really hard.
There’s always some reason what you’re doing isn’t quite enough, is slightly misguided, or is immoral due to some weird second-order consequence.

There may be a best choice in every situation, but finding it every time is hard and exhausting.

Imagine if you took deontology ultra-seriously. You’d have to derive a rule for everything you’d want to do.

Overton windows

Basically, every idea sits within some window of public acceptability, and that window can shift: ideas move from unthinkable to common.

Like, for example, same-sex marriage. It has gone from 110% unacceptable to completely normal.

Moral Opportunity Cost

Moral Opportunity Cost: there may have been opportunities that resulted in more good than the one you chose.

When can we ignore it? When can we not?

Existentialism

There’s Sartre and Camus here. Although, I’m pretty sure I’ve heard that Camus wouldn’t consider himself an existentialist.

Existentialism is about how human existence is absurd.
There isn’t any higher power. No deity. No meaning beyond existence.
We’re just apes on a rock.

And this supposedly fills us with dread and anxiety.
We’re accountable only to ourselves. Remember, no deity watching over us.

The movement’s goal has been to make sense of what we can do in the face of this absurdity.

And this leads us to Albert Camus, who, again, maintained that his philosophy wasn’t existentialism.

He said that humans desire meaning from the universe, but the universe offers none. We’re searching for something we’ll never find.
And this, Camus believed, is why the human condition is fundamentally absurd.

What can we do about the absurdity Camus describes? He says we have three choices:

  1. Kill ourselves. Not really a recommendation; he just says it’s technically a way to deal with it. It’s lazy, though, as it removes half of the equation: the person in search of meaning.
  2. Find structure and derive meaning from it (religion, family, work, etc.). But he calls this philosophical suicide, as it removes the other half of the equation: the universe that has no meaning.
  3. Just deal with it. Live within the absurdity. Accept it. This, says Camus, is the right choice.
