by Shayne Walton
Utilitarianism is a moral philosophy developed by Jeremy Bentham and John Stuart Mill in the 1700s and 1800s. The basis of this philosophy is to promote the greatest good for the greatest number of people. Bentham and Mill held this view because they believed that every action and reaction is taken based on pain and pleasure. The thought process is as such: if you touch a hot pan after it comes out of the oven and burn yourself, you’ll know not to do that again, since you don’t want to be in pain. Or, comparatively, if you win a debate round and are super happy afterward, you’ll want to maximize those feelings and win more debate rounds. This is where the “minimizing pain” and “maximizing pleasure” value criteria/standards come into play.
Consequences are the means of evaluating morality under this framework. This means that the intentions behind an action aren’t the main focus; rather, the result of the action is. For example, if you tell a lie, but that lie saves a life, the action is net beneficial to society and is thus deemed acceptable under utilitarianism.
A common example brought up by philosophers is known as “The Trolley Problem.” Picture a trolley going down a track. The trolley will continue forward on its current track, but with an easy pull of a lever, it can be diverted to a separate track. The track the trolley is heading toward has five people tied down on it who cannot get up. The other track the trolley can be switched to has only one person tied down. The utilitarian would pull the lever to switch the track: because of the utilitarian belief in the greatest good for the greatest number, saving more lives is preferable to not acting.
Utilitarians believe that there is NO “act-omission distinction.” The act-omission distinction is the moral belief that acting and causing an outcome carries a different moral culpability than failing to act (omitting) and allowing the same outcome. For example, whether you push a person into a pond and cause them to drown or simply let someone drown, a utilitarian would say there is no difference between the two end results, so you are morally culpable (morally responsible) for both impacts. Opponents of utilitarianism would say that the act-omission distinction does exist, and that’s one reason why they would not pull the lever in the trolley example. To them, it’s worse to act and cause one person to die than to not act and let five people die.
Now, the concept of utilitarianism may seem pretty easy, but when you are debating it, there is one big concept that many people do not keep in mind: WEIGHING. Don’t worry, weighing doesn’t mean whoever weighs more pounds wins the debate; rather, it’s figuring out which arguments are more important, or which arguments the judge should give the most “weight” to in the round.
Merriam-Webster defines probability as the chance that something will happen. This is one way you can determine which impacts are more important in the round. There are times when you will be comparing impacts such as stopping 50,000 people from dying each year versus stopping extinction. Although 50,000 deaths is smaller than the entire human race going extinct, that doesn’t mean it’s less important: an impact that is very likely to happen can outweigh a much larger impact that is extremely unlikely. This is where you want to have good evidence with short link chains.
There are also weighing arguments for why probability comes before anything else:
1) If governments were to act based only on magnitude and not probability, there would be total paralysis. Every action has some possibility of causing extinction through a long link chain, so the government would never be able to get anything done. Thus, governments should first solve the problems they know they can fix.
2) The threat of big impacts is usually exaggerated. Authors frequently inflate their impacts to make the issue they’re writing about seem more important, and people psychologically exaggerate the impacts they want others to take seriously. This means that authors can put forward larger numbers for impacts (in a non-malicious way, hopefully) so that the public considers them more important. We should therefore look to the smaller, more realistic impacts and use a gut check against possible exaggerations.
Magnitude is the number of people that an impact hurts or helps. Under this view, the extinction impact from the example above would always come first, because extinction kills every person in the world; the impact where only 50,000 people would die wouldn’t be as important.
There are some weighing arguments for why magnitude comes before everything else. One example is:
Some impacts are so large that they warrant extra attention. Many people would say to just ignore these impacts because they seem too big, but they are actually the true costs of taking an action, and this is how the government must act. For example, we shouldn’t just launch bombs at North Korea, because the magnitude of a potential retaliatory strike is far too large a risk.
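One common way to formalize the trade-off between probability and magnitude (this framing is not from the article itself, and the numbers below are purely illustrative) is to multiply each impact’s probability by its magnitude and compare the expected results:

\[
0.9 \times 50{,}000 \text{ lives} = 45{,}000 \text{ expected lives saved}
\]
\[
0.000001 \times 8{,}000{,}000{,}000 \text{ lives} = 8{,}000 \text{ expected lives saved}
\]

On these made-up numbers, the highly probable 50,000-life impact outweighs the extinction impact in expected terms, which is the intuition behind the probability-first arguments above; a magnitude-first judge would reject this multiplication and prioritize the extinction impact regardless.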