This is the aff that I prepped for round 1 of Harvard. I skipped the phenomenal introspection warrant for util in that debate since the judge asked me to go a bit slower than my normal speed (it was early in the morning).
One of the great things about util affs is how flexibly you can set them up. That said, the framework here was longer than usual because I expected to debate someone unlikely to concede to it. If you’re uncertain about your opponent’s stylistic preferences, I’d recommend a slightly shorter framework with more of a balance between the contention level and the framework. And while there’s only one advantage here, read more than one if you expect to hit someone who agrees to util or who reads more turns than normal.
Other things that are important to think about for this aff:
Why it has minimal spikes: I like developing substance really well. The spikes I did put in here are the ones I think are essential or useful, but I can see myself reading more than these if I’m expecting a theory debate (also T preempts).
Try or die: If they concede the uniqueness evidence, one argument you can put on the line-by-line (among other things) is that only the aff has a risk of solving extinction, since the status quo has zero chance of solving. You don’t want to hang the entire debate on this one argument, though.
Using AC contention level to turn NCs: The advantage is about income inequality. It’s strategic to think about how that can turn NCs, since it gives you an out in case the neg wins the framework debate.
Protecting against impact turns: If you’re expecting impact turns (or just feel like preempting them for kicks), I recommend reading one or two additional cards after the initial impact card that interact with common answers to your impact. For example, with an economic growth impact I’ve sometimes included evidence about both nuclear war and “growth solves the environment,” because most of the impact turns to econ are environmental.
Where art thou, AFC?: I like substance, so I don’t really run AFC/parameters-type args. I’m not entirely opposed to them, though.
Plan: The United States federal government ought to raise the federal minimum wage to $15/hour. I reserve the right to clarify. Seattle is normal means.
[The purpose of reading “I reserve the right to clarify” is to get out of silly indicts to the aff that are either based on ambiguities of the plan text or a blatant misunderstanding of the aff.]
Advantage 1 is Inequality
The US economy is fragile now despite low unemployment. Counties prove, and they’re the best indicators of growth
Soergel 1-19 writes[1]
Though 2014 brought record-breaking stock performances, unemployment that was the lowest since 2008 and the best year for job growth since the turn of the millennium, the rising economic tide has not lifted all ships as some U.S. counties are still floundering, according to a report released Monday by the National Association of Counties. A county-by-county breakdown of local economic performance was issued Monday in the association’s 2014 County Economic Tracker. The study spans all 3,069 U.S. counties and suggests the majority of local economies have not fully returned to pre-recession stability. “The U.S. economy doesn’t happen at the abstract macroeconomic level. It happens on the ground, where businesses are located and where Americans live and work,” says Emilia Istrate, the association’s director of research. “County economies are building blocks of the regional economies, state economies and national economy.” The association’s report, based on data obtained from Moody’s Investors Service, breaks down local economic performance into four major categories: gross domestic product, employment totals, unemployment rates and home pricing. The findings for 2014 were compared to pre-recession figures to get a feel for which local economies have recovered best since the Great Recession. “The national numbers show growth continued in 2014, but it still remains fragile and sluggish in different parts of the country,” Istrate says. “On the positive side, we find out that 72 percent of county economies recovered on at least one of the indicators we analyzed.”
[The above evidence is uniqueness. It proves that growth is weak now; if growth were already strong, there’d be no reason to do the aff to avoid a large econ decline impact. This card is specifically strategic because it says that growth is weak despite low unemployment, so neg unemployment cards don’t link turn econ growth. It also says that counties are the best indicators of growth; if neg evidence saying “growth high now” isn’t about counties, you have built-in weighing against it.]
$15/hour reduces income inequality. Income inequality decreases consumer spending, killing the entire US economy
Sanders 2-5 writes[2]
Most significantly, the simple truth of the matter is the 40-year decline of the American middle class continues. Real unemployment is not 5.6 percent – including those people who have given up looking for work or people who are working part time when they want to work full time – it is over 11 percent. Youth unemployment – something we almost never talk about in this country – is a horrendous 17 percent, and African-American youth unemployment is over 30 percent. It is totally unacceptable. Real median family income has declined by nearly $5,000 since 1999. All over this country – in Vermont and in every other State in this country – we have people working longer hours for lower wages. We have husbands and wives working 50, 60 hours a week just to pay the bills. Incredibly, despite huge increases in productivity, in technology, and all of the global economy we hear so much about, the median male worker now earns $783 less than he did 42 years ago. Let me repeat that. That American male worker right in the middle of the economy now earns, after inflation adjusted for wages, $783 less than he did 42 years ago. The female worker right in the middle of the economy now makes $1,300 less than she made in 2007. When you ask why people are angry, why people are stressed, why people are frustrated, that is exactly why. Further, this country continues to have, shamefully, the highest rate of childhood poverty of any major country on Earth, and 40 million Americans still have zero health insurance. In the midst of this tragic decline of the American middle class, there is, however, another reality. The wealthiest people and the largest corporations are doing phenomenally well. The result: The United States today has more income and wealth inequality than at any time since the Great Depression. Today the top one-tenth of 1 percent own almost as much wealth as the bottom 90 percent. Let me repeat that because that truly is a startling fact. Today the top one-tenth of 1 percent – which is what this chart talks about – owns almost as much wealth as the bottom 90 percent. Today one family – the Walton family, owners of Walmart – owns more wealth than the bottom 40 percent of the American people, some 120 million Americans. I don’t believe most of our people think this is what the American economy should be about. In fact, this is not an economy for a democracy. This is what oligarchy is all about. One-tenth of 1 percent owning almost as much wealth as the bottom 90 percent, 1 family owning the equivalent of what 131 million Americans own, that is wealth. In terms of income – which is what we make every year – what we have seen in the last number of years since the Wall Street crash is virtually all new income is going to the top 1 percent. Last year – just as one example – the top 25 hedge fund managers earned more income than 425,000 public school teachers. Does anybody believe that makes sense? Twenty-five hedge fund managers making more income than 425,000 public school teachers. That gap between the very rich and everybody else is growing wider and wider and wider. The fact is that over the past 40 years, we have witnessed an enormous transfer of wealth from the middle class to the top 1 percent. In other words, what we are seeing in our economy is the Robin Hood principle in reverse. We are taking from the poor and the working families and transferring that income and wealth to the very wealthy. 
From 1985 to 2013 the share of the nation’s wealth going to the middle class has gone down from 36 percent to less than 23 percent. If the middle class had simply maintained the same share of our nation’s wealth as it did 30 years ago, it would have $10.27 trillion more in cumulative wealth than it does today. Almost $11 trillion would have stayed with the middle class but has disappeared since 1985. But while the middle class continues to shrink, while millions of Americans are working longer hours for low wages, while young people cannot afford to go to college or leave school deeply in debt, while too many kids in this country go hungry, we have seen, since 2009, that the top 1 percent has experienced an $11.5 trillion increase in its wealth. So the top 1 percent in recent years sees an $11.5 trillion increase in wealth, while in roughly the same period the middle class sees a $10.7 trillion decrease in wealth. This $11.5 trillion transfer of wealth from the middle class to the top 1 percent over a five-year period is one of the largest such transfers of wealth in our country’s history. Here is my point. This is not just a moral issue, although it is a profound moral issue – and Pope Francis, by the way, deserves a lot of credit for talking about this issue all over the world. Are we satisfied as a nation when so few have so much and so many have so little? Are we satisfied with the proliferation of millionaires and billionaires, at the same time as we have millions of children living in poverty? Is that what America is supposed to be about? That is the moral component of this debate. But this is not just a moral issue. It is also a fundamental economic issue. As we know, 70 percent of our economy is based on consumer spending. When working people do not have enough income, enough disposable income, they are unable to go out and buy goods and services that they would like or that they need. The so-called job creators that my Republican friends often refer to are not the CEOs of the large corporations. The CEOs of large corporations cannot sell their products or services unless people have the income to buy them. Someone can come up with the greatest product in the world, but if people do not have the money, they are not going to sell that product, they are not going to hire workers to produce that product. The truth is that the real job creators in this country are those millions of people who every single day go out and purchase goods and services, but if they do not have adequate income, the entire economy suffers. There was a very interesting article in the Wall Street Journal, written by Nick Timiraos and Kris Hudson, talking about how a two-tier economy is reshaping the U.S. marketplace. What they talk about is: It is a tale of two economies. Said Glenn Kelman, chief executive of Redfin, a real estate brokerage in Seattle, “There is a high-end market that is absolutely booming. And then there’s everyone in the middle class. They don’t have much hope of wage growth.” The article continues. Indeed, such midtier retailers as J.C. Penney, Sears and Target have slumped. “The consumer has not bounced back with the confidence we were looking for,” Macy’s chief executive Terry Lundgren told investors last fall. So what we are hearing – basically what this article tells us – is if people’s income is going down, they are not going to Macy’s, they are not going to Target. 
Those stores are not hiring workers or are getting rid of workers because the middle class does not have the income it needs. Here is a very important point. Within President Obama’s recent budget – by the way, I think the President’s budget is beginning to move us in the right direction – there was a very interesting projection that unfortunately got very little attention. Here is the point: Over the last 50 years GDP growth in the United States of America averaged about 3.2 percent. What the President’s budget is suggesting is that more or less over the next 10 years we are going to see 3 percent growth, 2.7, 2.5, 2.3. For the rest of the decade, 2.3 percent. The bottom line is, if we continue along the same type of economic growth we have had over the previous 50 years, unemployment would be substantially lower, people would be paying more taxes, Social Security, among other programs, would be in much stronger shape. The debate we are going to be having in the Budget Committee – I am the ranking member of the Budget Committee – are two very different philosophies. Our Republican friends believe in more austerity for the middle class and working families. Their goal, over a period of months and years, is to cut Social Security, cut Medicare, cut Medicaid, cut nutrition programs for hungry children, not invest in infrastructure, and then give huge tax breaks for millionaires and billionaires. In other words, more austerity for the middle class, tax breaks for the wealthy and large corporations. I believe that philosophy is wrong for many reasons, the most important being that if we want to grow the overall economy, if we want to create jobs, we have to put money into the hands of working people. We do not do that by cutting, cutting, cutting, and imposing more austerity on people who already desperately are hurting. A far more sensible approach is to create the millions of jobs that our country desperately needs by, among other things, investing heavily in our crumbling infrastructure. Last week I introduced legislation that would invest $1 trillion over a 5-year period into rebuilding our crumbling roads and bridges, rail, airports, water systems, wastewater plants. If we do that, we make our country more productive, safer, and create up to 13 million jobs, putting money into the hands of working people. It not only will improve their lives, but they will then go out and spend their money in their communities, creating further economic growth. That is the direction we should be going. We also have to raise wages. People cannot survive on the starvation minimum wage imposed at the Federal level of $7.25 an hour. If we raise the minimum wage over a period of years to $15 an hour, we are going to have billions of dollars go into the hands of people who need it the most, improve their lives, allow them to go out and invest in our economy, spend money and create jobs.
[While this card isn’t that great on why the aff solves income inequality, you can compensate both with strong analytic explanation in CX (“my evidence says that more wealth is going to the top 1% rather than low-wage earners; a living wage puts more money in the hands of the poor, which reverses that trend”) and with extension evidence in the 1AR (evidence that better supports the initial argument and also responds to neg answers). The great thing about this card is that it makes a very strong claim about how consumer spending affects the US economy and says that the aff solves. To access the growth impact, you can potentially argue that you don’t have to win the larger inequality question, just that the aff world results in more consumer spending.]
Income inequality is the root cause of economic crisis.
Harkinson 11 writes[3]
Corporate chieftains often claim that fixing the US economy requires signing new free trade deals, lowering government debt, and attracting lots of foreign investment. But a major new study has found that those things matter less than an economic driver that CEOs hate talking about: equality. “Countries where income was more equally distributed tended to have longer growth spells,” says economist Andrew Berg, whose study appears in the current issue of Finance & Development, the quarterly magazine of the International Monetary Fund. Comparing six major economic variables across the world’s economies, Berg found that equality of incomes was the most important factor in preventing a major downturn. (See top chart.) In their study, Berg and coauthor Jonathan Ostry were less interested in looking at how to spark economic growth than how to sustain it. “Getting growth going is not that difficult; it’s keeping it going that is hard,” Berg explains. For example, the bailouts and stimulus pulled the US economy out of recession but haven’t been enough to fuel a steady recovery. Berg’s research suggests that sky-high income inequality in the United States could be partly to blame. So how important is equality? According to the study, making an economy’s income distribution 10 percent more equitable prolongs its typical growth spell by 50 percent. In one case study, Berg looked at Latin America, which is historically much more economically stratified than emerging Asia and also has shorter periods of growth. He found that closing half of the inequality gap between Latin America and Asia would more than double the expected length of Latin America’s growth spells. Increasing income inequality has the opposite effect: “We find that more inequality lowers growth,” Berg says. (See bottom chart.) Berg and Ostry aren’t the first economists to suggest that income inequality can torpedo the economy. Marriner Eccles, the Depression-era chairman of the Federal Reserve (and an architect of the New Deal), blamed the Great Crash on the nation’s wealth gap. “A giant suction pump had by 1929-1930 drawn into a few hands an increasing portion of currently produced wealth,” Eccles recalled in his memoirs. “In consequence, as in a poker game where the chips were concentrated in fewer and fewer hands, the other fellows could stay in the game only by borrowing. When the credit ran out, the game stopped.” Many economists believe a similar process has unfolded over the past decade. Median wages grew too little over the past 30 years to drive the kind of spending necessary to sustain the consumer economy. Instead, increasingly exotic forms of credit filled the gap, as the wealthy offered the middle class alluring credit card deals and variable-interest subprime loans. This allowed rich investors to keep making money and everyone else to feel like they were keeping up—until the whole system imploded. Income inequality has other economic downsides. Research suggests that unequal societies have a harder time getting their citizens to support government spending because they believe that it will only benefit elites. A population where many lack access to health care, education, and bank loans can’t contribute as much to the economy. And, of course, income inequality goes hand-in-hand with crippling political instability, as we’ve seen during the Arab Spring in Tunisia, Egypt, and Libya.
History shows that “sustainable reforms are only possible when the benefits are widely shared,” Berg says. “We hope that we don’t have to relearn that the hard way.”
[The above card makes the advantage appear a lot more plausible than it otherwise would be. Income inequality on its own doesn’t seem to trigger the extinction impact, but with great evidence saying that this has been the root cause of the Great Depression, the Great Recession, etc., you can certainly outweigh alt causes.
The above card is also great for weighing against neg arguments that other things turn the case, such as higher unemployment risking econ decline. While I think there’s a good argument to be made that unemployment turns income inequality, it’s also worth both arguing that you outweigh unemployment and just winning the “no unemployment” debate outright.]
Econ decline causes extinction. Harris and Burrows 9 writes[4]
Increased Potential for Global Conflict Of course, the report encompasses more than economics and indeed believes the future is likely to be the result of a number of intersecting and interlocking forces. With so many possible permutations of outcomes, each with ample opportunity for unintended consequences, there is a growing sense of insecurity. Even so, history may be more instructive than ever. While we continue to believe that the Great Depression is not likely to be repeated, the lessons to be drawn from that period include the harmful effects on fledgling democracies and multiethnic societies (think Central Europe in 1920s and 1930s) and on the sustainability of multilateral institutions (think League of Nations in the same period). There is no reason to think that this would not be true in the twenty-first as much as in the twentieth century. For that reason, the ways in which the potential for greater conflict could grow would seem to be even more apt in a constantly volatile economic environment as they would be if change would be steadier. In surveying those risks, the report stressed the likelihood that terrorism and nonproliferation will remain priorities even as resource issues move up on the international agenda. Terrorism’s appeal will decline if economic growth continues in the Middle East and youth unemployment is reduced. For those terrorist groups that remain active in 2025, however, the diffusion of technologies and scientific knowledge will place some of the world’s most dangerous capabilities within their reach. Terrorist groups in 2025 will likely be a combination of descendants of long established groups, inheriting organizational structures, command and control processes, and training procedures necessary to conduct sophisticated attacks, and newly emergent collections of the angry and disenfranchised that become self-radicalized, particularly in the absence of economic outlets that would become narrower in an economic downturn. The most dangerous casualty of any economically-induced drawdown of U.S. military presence would almost certainly be the Middle East. Although Iran’s acquisition of nuclear weapons is not inevitable, worries about a nuclear-armed Iran could lead states in the region to develop new security arrangements with external powers, acquire additional weapons, and consider pursuing their own nuclear ambitions. It is not clear that the type of stable deterrent relationship that existed between the great powers for most of the Cold War would emerge naturally in the Middle East with a nuclear Iran. Episodes of low intensity conflict and terrorism taking place under a nuclear umbrella could lead to an unintended escalation and broader conflict if clear red lines between those states involved are not well established. The close proximity of potential nuclear rivals combined with underdeveloped surveillance capabilities and mobile dual-capable Iranian missile systems also will produce inherent difficulties in achieving reliable indications and warning of an impending nuclear attack. The lack of strategic depth in neighboring states like Israel, short warning and missile flight times, and uncertainty of Iranian intentions may place more focus on preemption rather than defense, potentially leading to escalating crises. Types of conflict that the world continues to experience, such as over resources, could reemerge, particularly if protectionism grows and there is a resort to neo-mercantilist practices.
Perceptions of renewed energy scarcity will drive countries to take actions to assure their future access to energy supplies. In the worst case, this could result in interstate conflicts if government leaders deem assured access to energy resources, for example, to be essential for maintaining domestic stability and the survival of their regime. Even actions short of war, however, will have important geopolitical implications. Maritime security concerns are providing a rationale for naval buildups and modernization efforts, such as China’s and India’s development of blue water naval capabilities. If the fiscal stimulus focus for these countries indeed turns inward, one of the most obvious funding targets may be military. Buildup of regional naval capabilities could lead to increased tensions, rivalries, and counterbalancing moves, but it also will create opportunities for multinational cooperation in protecting critical sea lanes. With water also becoming scarcer in Asia and the Middle East, cooperation to manage changing water resources is likely to be increasingly difficult both within and between states in a more dog-eat-dog world.
[The above evidence lists off multiple scenarios for nuclear war to arise as a result of decline, giving you significant leeway for comparison against impact defense. Although the evidence does not explicitly say “extinction”, I don’t think that’s much to worry about. “Nuclear war causes extinction” is a plausible assumption that most people in debate won’t contest. If they do contest it, read evidence in the 1AR that justifies it.]
UBI can’t solve
The Economist 13 writes[5]
Whatever else they say about a basic income, everyone seems to assume that it would decrease income inequality. But those who support the proposal as an egalitarian salve should think twice. Raising the floor for all by adopting an annual UBI would make no dent in the wealth gap. Everybody from a homeless person to a middle-class teacher to a hedge-fund billionaire would receive the same cheque from the government. While the extra thousands would make the most difference to those on the bottom of the pile, the cash would be in lieu of all existing welfare benefits. And the income would not be sufficient to launch most of the poor into the lower middle class. Even if the income could bring a family of four above the $23,550 poverty line—a figure that would cost trillions—it would still leave many Americans in effective destitution, particularly those living in expensive urban centres like New York City where the average monthly rent is now $3,000. Compounding the problem would be upward pressure on housing prices that a UBI may spur.
[The above evidence was to preempt a counterplan that my Harvard round 1 opponent told me they had a good chance of reading. Preempting UBI in general was useful since it was a popular CP on the Jan-Feb topic.]
Seattle’s model means no job loss
Reich 14 writes[6]
By raising its minimum wage to $15, Seattle is leading a long-overdue movement toward a living wage. Most minimum wage workers aren’t teenagers these days. They’re major breadwinners who need a higher minimum wage in order to keep their families out of poverty. Across America, the ranks of the working poor are growing. While low-paying industries such as retail and food preparation accounted for 22 percent of the jobs lost in the Great Recession, they’ve generated 44 percent of the jobs added since then, according to a recent report from the National Employment Law Project. Last February, the Congressional Budget Office estimated that raising the national minimum wage from $7.25 to $10.10 would lift 900,000 people out of poverty. Seattle estimates almost a fourth of its workers now earn below $15 an hour. That translates into about $31,000 a year for a full-time worker. In a high-cost city like Seattle, that’s barely enough to support a family. The gains from a higher minimum wage extend beyond those who receive it. More money in the pockets of low-wage workers means more sales, especially in the locales they live in – which in turn creates faster growth and more jobs. A major reason the current economic recovery is anemic is that so many Americans lack the purchasing power to get the economy moving again. With a higher minimum wage, moreover, we’d all end up paying less for Medicaid, food stamps and other assistance the working poor now need in order to have a minimally decent standard of living. Some worry about job losses accompanying a higher minimum wage. I wouldn’t advise any place to raise its minimum wage immediately from the current federal minimum of $7.25 an hour to $15. That would be too big a leap all at once. Employers – especially small ones – need time to adapt. But this isn’t what Seattle is doing. It’s raising its minimum from $9.32 (Washington State’s current statewide minimum) to $15 incrementally over several years. Large employers (with over 500 workers) that don’t offer employer-sponsored health insurance have three years to comply; those that offer health insurance have four; smaller employers, up to seven. (That may be too long a phase-in.) My guess is Seattle’s businesses will adapt without any net loss of employment. Seattle’s employers will also have more employees to choose from – as the $15 minimum attracts into the labor force some people who otherwise haven’t been interested. That means they’ll end up with workers who are highly reliable and likely to stay longer, resulting in real savings. Research by Michael Reich (no relation) and Arindrajit Dube confirms these results. They examined employment in several hundred pairs of adjacent counties lying on opposite sides of state borders, each with different minimum wages, and found no statistically significant increase in unemployment in the higher-minimum counties, even after four years. (Other researchers who found contrary results failed to control for counties where unemployment was already growing before the minimum wage was increased.) They also found that employee turnover was lower where the minimum was higher. Not every city or state can meet the bar Seattle has just set. But many can – and should.
[Accessing the above evidence was a significant factor in my choosing to read a $15/hour plan with Seattle as normal means in this debate. I thought it’d be an interesting approach to taking out unemployment turns.]
Cost-benefit analysis is feasible. Ignore any util calc indicts. Hardin 90 writes[7]
One of the cuter charges against utilitarianism is that it is irrational in the following sense. If I take the time to calculate the consequences of various courses of action before me, then I will ipso facto have chosen the course of action to take, namely, to sit and calculate, because while I am calculating the other courses of action will cease to be open to me. It should embarrass philosophers that they have ever taken this objection seriously. Parallel considerations in other realms are dismissed with eminently good sense. Lord Devlin notes, “If the reasonable man ‘worked to rule’ by perusing to the point of comprehension every form he was handed, the commercial and administrative life of the country would creep to a standstill.” James March and Herbert Simon escape the quandary of unending calculation by noting that often we satisfice, we do not maximize: we stop calculating and considering when we find a merely adequate choice of action. When, in principle, one cannot know what is the best choice, one can nevertheless be sure that sitting and calculating is not the best choice. But, one may ask, How do you know that another ten minutes of calculation would not have produced a better choice? And one can only answer, You do not. At some point the quarrel begins to sound adolescent. It is ironic that the point of the quarrel is almost never at issue in practice (as Devlin implies, we are almost all too reasonable in practice to bring the world to a standstill) but only in the principled discussions of academics.
[The purpose of the above evidence is to easily take out 95% of people’s generic AT util arguments about an inability to calculate. This evidence proves that their indicts are irrelevant for purposes of practical decision-making. Making more specific responses to util calc indicts is advised as well, but since the 1AR is short, it’s important to give yourself leeway to group arguments.]
Adopt a parliamentary model to account for moral uncertainty. This entails minimizing existential risks. Bostrom 9 writes[8]
It seems people are overconfident about their moral beliefs. But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues? if you don’t know which moral theory is correct?
It doesn’t seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel; because many moral theories state that you should not always maximize expected utility.
Even if we limit consideration to consequentialist theories, it still is hard to see how to combine them in the standard decision theoretic framework. For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism. Now an action might add 5 utils to total happiness and decrease average happiness by 2 utils. (This could happen, e.g. if you create a new happy person that is less happy than the people who already existed.) Now what do you do, for different values of X?
The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc. We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.
I’m working on a paper on this together with my colleague Toby Ord. We have some arguments against a few possible “solutions” that we think don’t work. On the positive side we have some tricks that work for a few special cases. But beyond that, the best we have managed so far is a kind of metaphor, which we don’t think is literally and exactly correct, and it is a bit under-determined, but it seems to get things roughly right and it might point in the right direction:
The Parliamentary Model. Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability. Now imagine that each of these theories gets to send some number of delegates to The Parliament. The number of delegates each theory gets to send is proportional to the probability of the theory. Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting. What you should do is act according to the decisions of this imaginary Parliament. (Actually, we use an extra trick here: we imagine that the delegates act as if the Parliament’s decision were a stochastic variable such that the probability of the Parliament taking action A is proportional to the fraction of votes for A. This has the effect of eliminating the artificial 50% threshold that otherwise gives a majority bloc absolute power. Yet – unbeknownst to the delegates – the Parliament always takes whatever action got the most votes: this way we avoid paying the cost of the randomization!)
The idea here is that moral theories get more influence the more probable they are; yet even a relatively weak theory can still get its way on some issues that the theory think are extremely important by sacrificing its influence on other issues that other theories deem more important. For example, suppose you assign 10% probability to total utilitarianism and 90% to moral egoism (just to illustrate the principle). Then the Parliament would mostly take actions that maximize egoistic satisfaction; however it would make some concessions to utilitarianism on issues that utilitarianism thinks is especially important. In this example, the person might donate some portion of their income to existential risks research and otherwise live completely selfishly.
I think there might be wisdom in this model. It avoids the dangerous and unstable extremism that would result from letting one’s current favorite moral theory completely dictate action, while still allowing the aggressive pursuit of some non-commonsensical high-leverage strategies so long as they don’t infringe too much on what other major moral theories deem centrally important.
[This evidence gets out of the usual AT Bostrom arguments. The usual responses apply to the usual Bostrom card; the warrant here is different. The evidence also helps you get out of “Bostrom bad” theory; their abuse story is usually about the aff reading util and then saying Bostrom’s on a higher layer, but this evidence actually assumes a weak risk of util, so the util justifications technically come first. For strategic purposes you can collapse to this in the 1AR as if it were the normal Bostrom card, though.]
Moral uncertainty means you should ignore skep and presumption, because non-zero credence in the existence of morality means there’s always a risk of offense in favor of some moral action.
[The above pre-empts neg tricks. Sometimes I read a short presume aff argument in this spike as an even-if.]
The standard is maximizing happiness. Reasons to prefer:
First, revisionary intuitionism
Revisionary intuitionism is true and leads to util.
Yudkowsky 8 writes[9]
I haven’t said much about metaethics – the nature of morality – because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven’t gotten to yet. I used to be very confused about metaethics. After my confusion finally cleared up, I did a postmortem on my previous thoughts. I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless. And this appears to be a general syndrome – people do much better when discussing whether torture is good or bad than when they discuss the meaning of “good” and “bad”. Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can. Occasionally people object to any discussion of morality on the grounds that morality doesn’t exist, and in lieu of jumping over the forward dependency to explain that “exist” is not the right term to use here, I generally say, “But what do you do anyway?” and take the discussion back down to the object level. Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of “utilitarianism”, depend on “intuition”. He says I’ve argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to. Now “intuition” is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word “intuition” as a term of art, bearing in mind that “intuition” in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed. I see the project of morality as a project of renormalizing intuition. We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles. Delete all the intuitions, and you aren’t left with an ideal philosopher of perfect emptiness, you’re left with a rock. Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren’t left with an ideal philosopher of perfect spontaneity and genuineness, you’re left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies. “Intuition”, as a term of art, is not a curse word when it comes to morality – there is nothing else to argue from. Even modus ponens is an “intuition” in this sense – it‘s just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera. So that is “intuition”. However, Gowder did not say what he meant by “utilitarianism”. Does utilitarianism say… That right actions are strictly determined by good consequences? That praiseworthy actions depend on justifiable expectations of good consequences? That probabilities of consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs? That virtuous actions always correspond to maximizing expected utility under some utility function? That two harmful events are worse than one? 
That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one? That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B? If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is… anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy. Now, what are the “intuitions” upon which my “utilitarianism” depends? This is a deepish sort of topic, but I’ll take a quick stab at it. First of all, it’s not just that someone presented me with a list of statements like those above, and I decided which ones sounded “intuitive”. Among other things, if you try to violate “utilitarianism”, you run into paradoxes, contradictions, circular preferences, and other things that aren’t symptoms of moral wrongness so much as moral incoherence. After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out. This does not quite define moral progress, but it is how we experience moral progress. As part of my experienced moral progress, I’ve drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there? (Could that be what Gowder means by saying I’m “utilitarian”?) The question of where a road goes – where it leads – you can answer by traveling the road and finding out. If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner. When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth. You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error). But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place. Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up. After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions. When you’ve read enough research on scope insensitivity – people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives… that sort of thing… Well, the worst case of scope insensitivity I’ve ever heard of was described here by Slovic: Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group. 
There’s other research along similar lines, but I’m just presenting one example, ’cause, y’know, eight examples would probably have less impact. If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious – focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously. So people are willing to pay more to help one child than to help eight. Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child’s good fortune is somehow devalued by the other children’s good fortune. But what about the billions of other children in the world? Why isn’t it a bad idea to help this one child, when that causes the value of all the other children to go down? How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417? Or you could look at that and say: “The intuition is wrong: the brain can’t successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking.” And once you realize that the brain can’t multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don’t get the impression you’re looking at the revelation of a deep moral truth about nonagglomerative utilities. It’s just that the brain doesn’t goddamn multiply. Quantities get thrown out the window. If you have $100 to spend, and you spend $20 each on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives. Likewise if such choices are made by 10 different people, rather than the same person. As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives. (It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways. But the long run is a helpful intuition pump, so I am talking about it anyway.) The aggregative valuation strategy of “shut up and multiply” arises from the simple preference to have more of something – to save as many lives as possible – when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time. Aggregation also arises from claiming that the local choice to save one life doesn’t depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe. Three lives are one and one and one. No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation. And if you add another life you get 4 = 1 + 1 + 1 + 1. That’s aggregation. 
When you’ve read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you’ve seen the “Dutch book” and “money pump” effects that penalize trying to handle uncertain outcomes any other way, then you don’t see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn’t goddamn multiply. The primitive, perceptual intuitions that make a choice “feel good” don’t handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words. When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities. Part of it, clearly, is that primitive intuitions don’t successfully diminish the emotional impact of symbols standing for small quantities – anything you talk about seems like “an amount worth considering”. And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole. So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life”. Though the latter choice, of course, is revealed every time we sneeze without calling a doctor. The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect. On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule. But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority. 
As Peter Norvig once pointed out, if Asimov’s robots had strict priority for the First Law of Robotics (“A robot shall not harm a human being, nor through inaction allow a human being to come to harm”) then no robot’s behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision. Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility. I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize – that the valuation of this one event is more complex than I know. But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided – at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.” Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply. It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that’s why I’m a utilitarian – at least when I am doing something that is overwhelmingly more important than my own feelings about it – which is most of the time, because there are not many utilitarians, and many things left undone.
[Although Yudkowsky is long, this card is pretty effective. Three reasons—]
Second, absolutism fails. It can’t explain empirical uncertainty.
Jackson and Smith 6 writes[10]
A skier is heading in a direction you know for sure will trigger an avalanche that will kill ten people. You know the only way to save the ten people is for you to shoot him. The probability that the skier intends to trigger the avalanche and kill the ten people is 1-p. We can agree that our target absolutist theory says it is right for you to shoot if it is certain that the skier intends to kill the ten, that is, if p = 0, for in that case you would not be killing someone innocent—you would be protecting the ten in the only way possible from an unprovoked attack. We can agree that our target theory says that it would be wrong for you to shoot if you are certain he simply happens to be skiing in that direction, that is, if p = 1, for then you would be intentionally killing someone innocent, and that is never right no matter how many you will be allowing to die by your failure to shoot. The number of lives that would be saved in the example as described is ten, but of course the distinctive position of absolutism is that the number does not matter: it is never right intentionally to kill the innocent no matter how many lives would be saved by doing so. Our question is, What should the theory say for other values of p? III. THE INFINITE DISVALUE APPROACH Perhaps the simplest absolutist answer to our question is to hold that whenever there is any chance that an action violates an absolute prohibition, the action ought not to be performed. This is the answer suggested by the absolutists’ case against early stage abortion summarized above. In our example, the answer would prohibit shooting the skier whenever there is any chance that he is innocent, whenever, that is, p < 1. One way of implementing this answer is to assign infinite disvalue to intentionally killing the innocent and some finite disvalue to allowing people to die. For then the expected disvalue of the shooting—that is, the product of the disvalue of intentionally killing the innocent times the chance that the shooting is an intentional killing of the innocent—will exceed the disvalue of allowing others to die, no matter how many others die and how certain it is that they will die, provided there is some chance that the shooting is indeed an intentional killing of the innocent. It will, on this approach, be impossible to make the action that has some chance of being the intentional killing of someone innocent the right thing to do by making the number allowed to die by refraining from shooting large enough—the numbers allowed to die will be irrelevant, just as absolutists typically say. The trouble with this response is that there is nearly always some greater than zero chance that someone is innocent. All the evidence may be against them, but induction from the past record of overturned verdicts in cases that looked watertight at the time tells us that there is nearly always some chance that someone who looks clearly to be guilty is in fact innocent. We will get the result that it is never, or hardly ever, right to shoot the skier. Indeed, it will be hard to find any cases where it is right intentionally to kill someone as there is always some chance that the someone is innocent, and a small chance times an infinite disvalue equals an infinite disvalue. We will have a quick (too quick) argument from absolutism against intentionally killing the innocent to an extreme kind of personal pacifism.
[The purpose of this card is to preempt deontology. It also provides a practical defense of util that gets you out of the usual “util can’t say things are intrinsically prohibited” objection.]
Third, universalizability justifies util. Singer 93[11]
The universal aspect of ethics, I suggest, does provide a persuasive, although not conclusive, reason for taking a broadly utilitarian position. My reason for suggesting this is as follows. In accepting that ethical judgments must be made from a universal point of view, I am accepting that my own interests cannot, simply because they are my interests, count more than the interests of anyone else. Thus my very natural concern that my own interests be looked after must, when I think ethically, be extended to the interests of others. Now, imagine that I am trying to decide between two possible courses of action – perhaps whether to eat all the fruits I have collected myself, or to share them with others. Imagine, too, that I am deciding in a complete ethical vacuum, that I know nothing of any ethical considerations – I am, we might say, in a pre-ethical stage of thinking. How would I make up my mind? One thing that would be still relevant would be how the possible courses of action will affect my interests. Indeed, if we define ‘interests’ broadly enough, so that we count anything people desire as in their interests (unless it is incompatible with another desire or desires), then it would seem that at this pre-ethical stage, only one’s own interests can be relevant to the decision. Suppose I then begin to think ethically, to the extent of recognizing that my own interests cannot count for more, simply because they are my own, than the interests of others. In place of my own interests, I now have to take into account the interests of all those affected by my decision. This requires me to weigh up all these interests and adopt the course of action most likely to maximize the interests of those affected.
Morality must take the form of a universal rule. Singer 9 writes[12]
When I prescribe something, using moral language, my prescription commits me to a substantive moral judgment about all relevantly similar cases. This includes hypothetical cases in which I am in a different position from my actual one. So to make a moral judgment, I must put myself in the position of the other person affected by my proposed action – or to be more precise, in the position of all those affected by my action. Whether I can accept the judgment – that is, whether I can prescribe it universally – will then depend on whether I could accept it if I had to live the lives of all those affected by the action.
[Singer’s defending Hare’s understanding of universalizability, which is different from Kant’s. These two cards have multiple strategic purposes.
Another thing to note is that these Singer cards defend preference util. If you have a strategy in mind that draws on the nuances of preference util, cutting more Singer cards is worthwhile (granted, he recently made the switch to Sidgwick-style util, but his earlier work is preference util). For all practical purposes, though, it doesn’t matter that the Singer arg is intended for preference util, just as “consequentialism good” is treated as sufficient to get you util offense in a debate even though, technically speaking, it’s not enough.]
Fourth, respect for human worth would justify util. Cummiskey 90[13]
We must not obscure the issue by characterizing this type of case as the sacrifice of individuals for some abstract “social entity.” It is not a question of some persons having to bear the cost for some elusive “overall social good.” Instead, the question is whether some persons must bear the inescapable cost for the sake of other persons. Robert Nozick, for example, argues that “to use a person in this way does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has.” But why is this not equally true of all those whom we do not save through our failure to act? By emphasizing solely the one who must bear the cost if we act, we fail to sufficiently respect and take account of the many other separate persons, each with only one life, who will bear the cost of our inaction. In such a situation, what would a conscientious Kantian agent, an agent motivated by the unconditional value of rational beings, choose? A morally good agent recognizes that the basis of all particular duties is the principle that “rational nature exists as an end in itself”. Rational nature as such is the supreme objective end of all conduct. If one truly believes that all rational beings have an equal value, then the rational solution to such a dilemma involves maximally promoting the lives and liberties of as many rational beings as possible. In order to avoid this conclusion, the non-consequentialist Kantian needs to justify agent-centered constraints. As we saw in chapter 1, however, even most Kantian deontologists recognize that agent-centered constraints require a non- value-based rationale. But we have seen that Kant’s normative theory is based on an unconditionally valuable end. How can a concern for the value of rational beings lead to a refusal to sacrifice rational beings even when this would prevent other more extensive losses of rational beings? If the moral law is based on the value of rational beings and their ends, then what is the rationale for prohibiting a moral agent from maximally promoting these two tiers of value? If I sacrifice some for the sake of others, I do not use them arbitrarily, and I do not deny the unconditional value of rational beings. Persons may have “dignity, that is, an unconditional and incomparable worth” that transcends any market value, but persons also have a fundamental equality that dictates that some must sometimes give way for the sake of others. The concept of the end-in-itself does not support the view that we may never force another to bear some cost in order to benefit others.
[This is the classic AT Deon preempt. There are better versions of the Cummiskey argument in the “Kantian Consequentialism” article, but I’ve been reading this super-short version just because it’s efficient. It’s strategic because you can say “deon devolves to util”, which enables you to say “even if they win their util indicts, the NC framework devolves to util anyway, so those indicts are non-unique”. Being able to go for “‘x’ framework devolves to util” is one of the unique strategic features of the util framework and gives you outs in short rebuttals.]
Fifth, phenomenal introspection justifies util
Widespread moral disagreement means we need to find a more reliable process for belief formation
Sinhababu no date writes[14]
Now I will defend 3. First, I will show how the falsity of others’ beliefs undermines one’s own belief. Then I will clarify the notion of a reliable process. I will consider a modification to 3 that epistemic internalists might favor, and show that the argument accommodates it. I will illustrate 3’s plausibility by considering cases in which it correctly guides our reasoning. Finally, I will show how 3 is grounded in the intuitive response to grave moral error. First, a simple objection: “Why should I care whether other people have false beliefs? That is a fact about other people, and not about me. Even if most people are wrong about some topic, I may be one of the few right ones, even if there is no apparent reason to think that my way of forming beliefs is any more reliable.” While widespread error leaves open the possibility that one has true beliefs, it reduces the probability that my beliefs are true. Consider a parallel case. I have no direct evidence that I have an appendix, but I know that previous investigations have revealed appendixes in people. So induction supports believing that I have an appendix. Similarly, I know on the basis of 1 and 2 that people’s moral beliefs are, in general, rife with error. So even if I have no direct evidence of error in my moral beliefs, induction suggests that they are rife with error as well. 3 invokes the reliability of the processes that produce our beliefs. Assessing processes of belief-formation for reliability is an essential part of our epistemic practices. If someone tells me that my belief is entirely produced by wishful thinking, I cannot simply accept that and maintain the belief. Knowing that wishful thinking is unreliable, I must either deny that my belief is entirely caused by wishful thinking or abandon the belief. But if someone tells me that my belief is entirely the result of visual perception, I will maintain it, assuming that it concerns sizable nearby objects or some other topic on which visual perception is reliable. While it is hard to provide precise criteria for individuating processes of belief-formation, as the literature on the generality problem for reliabilism attests, individuating them somehow is indispensable to our epistemic practices. Following Alvin Goldman’s remark that “It is clear that our ordinary thought about process types slices them broadly” (346), I will treat cognitive process types like wishful thinking and visual perception as appropriately broad. Trusting particular people and texts, meanwhile, are too narrow. Cognitive science may eventually help us better individuate cognitive process types for the purposes of reliability assessments and discover which processes produce which beliefs. Epistemic internalists might reject 3 as stated, claiming that it is not widespread error that would justify giving up our beliefs, but our having reason to believe that there is widespread error. They might also claim that our justification for believing the outputs of some process depends not on its reliability, but on what we have reason to believe about its reliability.
The argument will still go forward if 3 is modified to suit internalist tastes, changing its antecedent to “If we have reason to believe that there is widespread error about a topic” and/or changing its consequent to “we should retain only those beliefs about it that we have reason to believe were formed through reliable processes.” While 3’s antecedent might itself seem unnecessary on the original formulation, it is required for 3 to remain plausible on the internalist modification. Requiring us to have reason to believe that any of our belief-formation processes are reliable before retaining their outputs might lead to skepticism. The antecedent limits the scope of the requirement to cases of widespread error, averting general skeptical conclusions. The argument will still attain its conclusion under these modifications. Successfully defending the premises of the argument and deriving widespread error (5) and unreliability (7) gives those of us who have heard the defense and derivation reason to believe 5 and 7. This allows us to derive 8. (Thus the pronoun ‘we’ in 3, 6, and 8.) 3 describes the right response to widespread error in many actual cases. Someone in the 12th century, especially upon hearing the disagreeing views of many cultures regarding the origins of the universe, would do well to recognize that error on this topic was widespread and retreat to agnosticism about it. Only when modern astrophysics extended reliable empirical methods to cosmology would it be rational to move forward from agnosticism and accept a particular account of how the universe began. Similarly, there is usually widespread disagreement among investors about which stocks will perform better than average, suggesting that one’s beliefs on the matter have a high likelihood of error. It is wise to remain agnostic about the stock market without a way of forming beliefs with above-average reliability – for example, the sort of secret insider information that it is illegal to trade on. 3 permits us to hold fast to our moral beliefs in individual cases of moral disagreement, suggesting skeptical conclusions only when moral disagreement is widespread. When we consider a single culture’s abhorrent moral views, like the Greeks’ acceptance of Telemachus and Odysseus’ murders of the servant women, our immediate thought is not that perhaps the Greeks were right to see nothing wrong and we should reconsider our outrage. Instead, we are horrified by their grave moral error. I think this is the right response. We are similarly horrified by the moral errors of Hindus who burned widows on their husbands’ funeral pyres, American Southerners who supported slavery and segregation, our contemporaries who condemn homosexuality, and countless others. The sheer number of cases like this then requires us to regard moral error as a pervasive feature of the human condition. Humans typically form moral beliefs through unreliable processes and have appendixes. As we are human beings, this should reduce our confidence in our moral judgments.
The prevalence of error in a world full of moral disagreement demonstrates how bad humans are at forming true moral beliefs, undermining our own moral beliefs. Wary of the unreliable processes that so often lead humans to their moral beliefs, we will need our moral beliefs to issue from reliable processes.
Phenomenal introspection is a uniquely reliable process of belief formation and entails that happiness is objectively good
Sinhababu no date writes[15]
Phenomenal introspection, a reliable way of forming true beliefs about our experiences, tells us that pleasure is good and displeasure is bad. Even as our other processes of moral belief formation prove unreliable, it provides reliable access to pleasure’s goodness, justifying the positive claims of hedonism. This section clarifies what phenomenal introspection and pleasure are, and explains how phenomenal introspection provides reliable access to pleasure’s value. Section 2.2 argues that pleasure’s goodness is genuine moral value, rather than value of some other kind. To use phenomenal introspection is to look inward at one’s subjective experience, or phenomenology, and determine what it is like. One can use phenomenal introspection reliably while dreaming or hallucinating, as long as one can determine what the dream or hallucination is like. By itself, phenomenal introspection produces no beliefs about things outside experience, or about relations between our experiences and non-experiential things. It cannot by itself produce judgments about the rightness of actions or the goodness of non-experiential things, as these are located outside of experience. Phenomenal introspection can be wrong, but is generally reliable. As experience is rich in detail, one could get some of the details wrong in one’s belief. Under adverse conditions when one has false expectations about what one’s experiences will be, or when one is in an extreme emotional state, one might make larger errors. Paradigmatically reliable processes like vision share these failings. Vision sometimes produces false beliefs under adverse conditions, or when we are looking at complex things. It is, nevertheless, fairly reliable. The view that phenomenal introspection is unreliable about our phenomenal states is about as radical as skepticism about the reliability of vision. While contemporary psychologists reject introspection into one’s motivations and other causal processes as unreliable, phenomenal introspection fares better. Daniel Kahneman, for example, writes that “experienced utility is best measured by moment-based methods that assess the experience of the present.” Even those most skeptical about the reliability of phenomenal introspection, like Eric Schwitzgebel, concede that we can reliably introspect whether we are in serious pain. Then we should be able to introspectively determine what pain is like. I assume the reliability of phenomenal introspection in what follows. One can form a variety of beliefs using phenomenal introspection. For example, one can believe that one is having sound experiences of particular noises and visual experiences of different shades of color. When looking at a lemon and considering the phenomenal states that are yellow experiences, one can form some beliefs about their intrinsic features – for example, that they are bright experiences. And when considering experiences of pleasure, one can make some judgments about their intrinsic features – for example, that they are good experiences. Just as one can look inward at one’s experience of lemon yellow and appreciate its brightness, one can look inward at one’s experience of pleasure and appreciate its goodness. When I consider a situation of increasing pleasure, I can form the belief that things are better than they were before, in the same way I form the belief that there is more brightness in my visual field as lemon yellow replaces black.
And when I suddenly experience pain, I can form the belief that things are worse in my experience than they were before. “Pleasure” here refers to the hedonic tone of experience. Having pleasure consists in one’s experience having this hedonic tone. Without descending into metaphor, it is hard to give a further account of what pleasure is like than to say that when one has it, one feels good. As Aaron Smuts writes in defending the view of pleasure as hedonic tone, “to ‘feel good’ is about as close to an experiential primitive as we get.” Some philosophers, like Fred Feldman, see pleasure as fundamentally an attitude rather than a hedonic tone. But as long as hedonic tones – good and bad feelings – are real components of experience, phenomenal introspection will reveal pleasure’s goodness. Opponents of the hedonic tone account of pleasure usually concede that hedonic tones exist, as Feldman seems to in discussing “sensory pleasures,” which he thinks his view helps us understand. Even on his view of pleasure, phenomenal introspection can produce the belief that some hedonic tones are good while others are bad. There are many different kinds of pleasant experiences. There are sensory pleasures, like the pleasure of tasting delicious food, receiving a massage, or resting your tired limbs in a soft bed after a hard day. There are the pleasures of seeing that our desires are satisfied, like the pleasure of winning a game, getting a promotion, or seeing a friend succeed. These experiences differ in many ways, just as the experiences we have when looking at lemons and the sky on a sunny day differ. It is easy to see the appeal of Feldman’s view that pleasures “have just about nothing in common phenomenologically” (79). But just as our experiences in looking at lemons and the sky on a sunny day have brightness in common, pleasant experiences all have “a certain common quality – feeling good,” as Roger Crisp argues (109). As the analogy with brightness suggests, hedonic tone is phenomenologically very thin, and usually mixed with a variety of other experiences. Pleasure of any kind feels good, and displeasure of any kind feels bad. These feelings may or may not have bodily location or be combined with other sensory states like warmth or pressure. “Pleasure” and “displeasure” mean these thin phenomenal states of feeling good and feeling bad. As Joseph Mendola writes, “the pleasantness of physical pleasure is a kind of hedonic value, a single homogenous sensory property, differing merely in intensity as well as in extent and duration, which is yet a kind of goodness” (442). What if Feldman is right and hedonic states feel good in fundamentally different kinds of ways? Then phenomenal introspection will suggest a pluralist variety of hedonism. Each fundamental flavor of pleasure will have a fundamentally different kind of goodness, as phenomenal introspection that is more accurate than mine will reveal. This is not my view, but I suggest it to those convinced that hedonic tones are fundamentally heterogenous. If phenomenal introspection reliably informs us that pleasure is good, how can anyone believe that their pleasures are bad? Hedonists can blame other processes of moral belief-formation for these beliefs. For example, someone who feels disgust or guilt about sex may not only regard sex as immoral, but the pleasure it produces as bad.
Even if phenomenal introspection on pleasure disposes one to believe that it is good, stronger negative emotional responses to it may more strongly dispose one to believe that it is bad. This explanation of disagreement about pleasure’s value lets hedonists deny that people believe that pleasure is bad on the basis of phenomenal introspection alone. As long as negative judgments of pleasure come from unreliable processes instead of phenomenal introspection, the argument from disagreement will eliminate them, while the reliable process of phenomenal introspection will univocally support pleasure’s goodness. The parallel between yellow’s brightness and pleasure’s goodness demonstrates the objectivity of the value detected in phenomenal introspection. Just as anyone’s yellow experiences objectively are bright experiences, anyone’s pleasure objectively is a good experience. While one’s phenomenology is often called one’s “subjective experience”, facts about it are still objective. “Subjective” in “subjective experience” means “internal to the mind”, not “ontologically dependent on attitudes towards it.” My yellow-experiences are objectively bright, so that anyone who thought my yellow-experiences were not bright would be mistaken. Pleasure similarly is objectively good – it is true that anyone’s pleasure is good, and anyone who denies this is mistaken. As Mendola writes, “In the phenomenal value of phenomenal experience, we have a plausible candidate for objective value” (712).
This justifies util
Sinhababu no date writes[16]
Even though phenomenal introspection only tells me about my own phenomenal states, I can know that others’ pleasure is good. Of course, I cannot phenomenally introspect their pleasures any more than I can phenomenally introspect pleasures that I will experience next year. But if I consider my experiences of lemon yellow and ask what it would be like if others had the same experiences, I must think that they would be having bright experiences. Similarly, if in a pleasant moment I consider what it is like when others have exactly the experience I am having, I must think that they are having good experiences. If they have exactly the same experiences I am having, their experiences will have exactly the same intrinsic properties as mine. This is also how I know that if I have the same experience in the future, it will have the same intrinsic properties. Even though the only pleasure I can introspect is mine now, I should believe that pleasures experienced by others and myself at other times are good, just as I should believe that yellow experienced by others and myself at other times is bright. My argument thus favors the kind of universal hedonism that supports utilitarianism, not egoistic hedonism.
[This is a more nuanced version of the “happiness is objectively good, means util” argument. I think it’s a really interesting one that you can use to leverage vs NC frameworks that aren’t based in phenomenal introspection; the Sinhababu ev says that phenomenal introspection is key to avoiding error-prone methods of belief formation.]
And sixth, util is best for practical decision-making. It’s key to the very functioning of institutions.
Bowden 9 writes[17]
The most significant reason for advocating utility theory, however, is that it is useful and usable. The institutions in our society – the professional, industry and special interest groups, as well as organisations in business and government, plus the not for profit sector – are faced with many ethical decisions, often complex and difficult, requiring considerable thought, and eventually resolution. The moral issues that arise in these contexts are fundamental to the institutional functioning of our society. Yet very few people have training in moral philosophy. They need a relatively straightforward way of making these decisions – of telling right from wrong. Mill, it will be argued, provides that method. Many who have no training that are faced with these ethical choices will rely on intuition. Perhaps they will use a set of values learned at home, or from their schooling or their church. As we shall see for the more difficult ethical issues, however, intuition is an unreliable guide. If they have training, they may remember virtue ethics, or Kant’s deontology, but as I shall also argue later, these theories do not necessarily give straightforward and acceptable answers. The statement that some ethical issues are difficult to resolve should generate little disagreement. Any teacher of professional ethics can identify issues where the profession disagrees on the ethics of a particular practice. Reverse auctions, for instance, where providers of the product or service bid increasingly lower prices, have generated debate on whether we are sacrificing quality or safety for a lower price. Front end loading, where the work items executed earlier are loaded with a higher percentage of the supplier’s overheads, has generated a similar debate. Whistleblowing is yet another issue where the ethics are debated – whether the person revealing the wrongdoing is ignoring the ethical obligation of loyalty to his or her employer. Or whether the risk of retaliation and losing one’s job outweighs the moral obligation to reveal the truth. We are also all aware of the concept of groupthink, where people in an organisation tend to accept the prevailing opinion, rather than question it. This may have been a cause behind many of the ethical failures seen over recent years in HIH, James Hardie, the Australian Wheat Board and other companies. The fact that no executive spoke out against the unethical behaviours then practised tells us that those who want honesty and transparency were not confident enough of themselves or their judgement to speak out. Alternatively, the failure to speak out may have been due to the tendency to find a justification for an unethical action. Wheat Board people possibly convinced themselves, for instance, that they were acting in the best interests of the Australian farmer, and therefore of the nation. And so the national benefits outweighed the negatives of their action. A relatively straightforward way to cut through such fuzzy thinking would be the prior resolution of many of these issues. Utilitarianism, it will be argued, provides that method. It would give those who wish to live and work within an ethical environment stronger tools with which to decide how they should react.
[This argument isn’t the best framework card I’ve ever read but you can use it in the same way as “policymaking=util” for a “state/institutions first” argument on the framework debate. I recommend cutting Bowden’s comparison between util and Rawls; I think that’s a better articulation of the “util’s better for institutions” argument but one that isn’t functional against all frameworks (because it’s a Rawls-specific comparison).]
Theory pre-empts:
Aff gets RVIs on I meets and counter-interps because
(a) 1AR time skew means I can’t cover theory and still have a fair shot at substance.
(b) no-risk theory would give the neg a free source of no-risk offense, which allows him to moot the AC.
[Watch out for theory about how you can’t say you get the RVI on both I meets AND counter-interps (oh nooooo much abuse!!!!!). That shell isn’t too hard to answer, so it’s all good. I think the warrants for the RVI given here logically justify getting the RVI on the I meet as well as the counter-interp (a short 1AR means you need outs on the theory debate, and the I meet proves it was unjustified for the neg to moot the AC). If you go for 1AR theory a lot, I don’t recommend reading “RVIs good” in the aff unless your RVI justifications are aff-specific; the “moots the AC” warrant isn’t, since it logically justifies an RVI for the neg to avoid having the NC mooted.]
Neg burden is to defend a competitive post-fiat advocacy. Offense-defense is key to fairness and real-world education. This means ignore skepticism, permissibility, and presumption.
Nelson 8 writes[18]
And the truth-statement model of the resolution imposes an absolute burden of proof on the affirmative: if the resolution is a truth-claim, and the affirmative has the burden of proving that claim, in so far as intuitively we tend to disbelieve truth-claims until we are persuaded otherwise, the affirmative has the burden to prove that statement absolutely true. Indeed, one of the most common theory arguments in LD is conditionality, which argues it is inappropriate for the affirmative to claim only proving the truth of part of the resolution is sufficient to earn the ballot. Such a model of the resolution also gives the negative access to a range of strategies that many students, coaches, and judges find ridiculous or even irrelevant to evaluation of the resolution. If the negative need only prevent the affirmative from proving the truth of the resolution, it is logically sufficient to negate to deny our ability to make truth-statements or to prove normative morality does not exist or to deny the reliability of human senses or reason. Yet, even though most coaches appear to endorse the truth-statement model of the resolution, they complain about the use of such negative strategies, even though they are a necessary consequence of that model. And, moreover, such strategies seem fundamentally unfair, as they provide the negative with functionally infinite ground, as there are a nearly infinite variety of such skeptical objections to normative claims, while continuing to bind the affirmative to a much smaller range of options: advocacy of the resolution as a whole. Instead, it seems much more reasonable to treat the resolution as a way to equitably divide ground: the affirmative advocating the desirability of a world in which people adhere to the value judgment implied by the resolution and the negative advocating the desirability of a world in which people adhere to a value judgment mutually exclusive to that implied by the resolution. By making the issue one of desirability of competing world-views rather than of truth, the affirmative gains access to increased flexibility regarding how he or she chooses to defend that world, while the negative retains equal flexibility while being denied access to those skeptical arguments indicted above. Our ability to make normative claims is irrelevant to a discussion of the desirability of making two such claims. Unless there is some significant harm in making such statements, some offensive reason to reject making them that can be avoided by an advocacy mutually exclusive with that of the affirmative, such objections are not a reason the negative world is more desirable, and therefore not a reason to negate. Note this is precisely how things have been done in policy debate for some time: a team that runs a kritik is expected to offer some impact of the mindset they are indicting and some alternative that would solve for that impact. A team that simply argued some universal, unavoidable, problem was bad and therefore a reason to negate would not be very successful. It is about time LD started treating such arguments the same way. Such a model of the resolution has additional benefits as well. First, it forces both debaters to offer offensive reasons to prefer their worldview, thereby further enforcing a parallel burden structure. This means debaters can no longer get away with arguing the resolution is by definition true or false. The “truth” of the particular vocabulary of the resolution is irrelevant to its desirability. Second, it is intuitive.
When people evaluate the truth of ethical claims, they consider their implications in the real world. They ask themselves whether a world in which people live by that ethical rule is better than one in which they don’t. Such debates don’t happen solely in the abstract. We want to know how the various options affect us and the world we live in.
[The purpose of this card defending the offense/defense paradigm is to exclude NCs that don’t defend a world, which are usually things like skep, “states aren’t moral actors”, etc. It could also be used to take out pre-fiat Ks. The idea is that the neg has to defend the status quo or a competitive counterplan. AC offense functions under truth testing, though, so you don’t need to win this argument to win the debate. Paradigm questions like O/D aren’t drop the debater issues, either; they’re just reasons to reframe the debate in a certain way.]
I’m willing to clarify or alter my advocacy in cross-ex.
[This can be leveraged on T to say “I would have clarified/altered the aff if he/she wanted me to, solving the abuse”. This also doesn’t link to “CX checks bad”, since you aren’t forcing the neg to clarify.]
Finally, the neg must defend one unconditional advocacy. Conditionality is bad because it makes the neg a moving target, which kills 1AR strategy. He’ll kick it if I cover it and extend it if I undercover it, meaning I have no strategic options. Also, it’s unreciprocal because I can’t kick the AC.
[This arg responds to conditional counterplans/people reading skep and then going for link turns that logically entail the status quo, etc. In the 1AR, you can decide whether this is drop the debater or drop the arg.]
[1] Andrew Soergel (economy reporter. He studied Business Journalism at Washington and Lee University, was president of Washington and Lee’s Club Soccer team, and also has proficiency with the following computer programs and software: Bloomberg Terminal, Microsoft Office, Avid Media Composer, Final Cut Pro, iNews, WordPress, Adobe InDesign and InCopy, Photoshop, Soundslides, Brightspot CMS, HTML, Hootsuite and various social media outlets). “Study: Local Economies Still Recovering from Recession.” US News and World Report. January 19th, 2015. http://www.usnews.com/news/articles/2015/01/19/local-economies-havent-recovered-from-recession-equally-says-national-association-of-counties
[2] Bernie Sanders (independent U.S. senator from Vermont, former chairman of the Senate Veterans’ Affairs Committee, studied at UChicago, was an organizer for the Student Nonviolent Coordinating Committee during the Civil Rights Movement). “We Need a Pro-Worker, Anti-Austerity Agenda.” Campaign for America’s Future. February 5th, 2015. http://ourfuture.org/20150205/we-need-a-pro-worker-anti-austerity-agenda
[3] Josh Harkinson (staff reporter). “Study: Income Inequality Kills Economic Growth.” Mother Jones. October 4th, 2011. http://www.motherjones.com/mojo/2011/10/study-income-inequality-kills-economic-growth
[4] Mathew Burrows (PhD in European History at Cambridge, counselor in the National Intelligence Council (NIC)) and Jennifer Harris (member of the NIC’s Long Range Analysis Unit). “Revisiting the Future: Geopolitical Effects of the Financial Crisis.” The Washington Quarterly 32:2. April 2009. http://www.ciaonet.org/journals/twq/v32i2/f_0016178_13952.pdf
[5] The Economist. “The cheque is in the mail.” November 19th, 2013. http://www.economist.com/blogs/democracyinamerica/2013/11/government-guaranteed-basic-income
[6] Robert Reich (former labor secretary). “Seattle has set the bar for living wages.” Salon. June 8th, 2014. http://www.salon.com/2014/06/08/robert_reich_seattle_is_leading_a_long_overdue_movement_toward_a_living_wage_partner/
[7] Russell Hardin (Helen Gould Shepard Professor in the Social Sciences @ NYU). Morality within the Limits of Reason. University of Chicago Press. May 1990. p. 4. ISBN 978-0226316208.
[8] Nick Bostrom (Existentialist of a different sort). “Moral uncertainty – toward a solution?” January 1st, 2009. http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html
[9] Eliezer Yudkowsky (research fellow of the Machine Intelligence Research Institute; he also writes Harry Potter fan fiction). “The ‘Intuitions’ Behind ‘Utilitarianism.’” January 28th, 2008. LessWrong. http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/
[10] Frank Jackson (Australian National University) and Michael Smith (Princeton). “Absolutist Moral Theories and Uncertainty.” The Journal of Philosophy, Vol. 103, No. 6 (June 2006), pp. 267-283. http://www.jstor.org/stable/20619943
[11] Peter Singer, “Practical Ethics,” Second Edition, Cambridge University Press, 1993, pp. 13-14
[12] Peter Singer [Ira W. DeCamp Professor of Bioethics, Princeton], “The Groundwork of Utilitarian Morals: Reconsidering Hare’s Argument for Utilitarianism,” draft prepared for the Conference on Issues in Modern Philosophy: “The Foundations of Morality,” NYU Philosophy Department, November 7, 2009, 34.
[13] David Cummiskey (associate professor of philosophy at Bates College). “Kantian Consequentialism.” Ethics 100 (April 1990), University of Chicago Press. http://www.jstor.org/stable/2381810
[14] Neil Sinhababu (National University of Singapore). “The epistemic argument for hedonism.” No date.
[15] Neil Sinhababu (National University of Singapore). “The epistemic argument for hedonism.” No date.
[16] Neil Sinhababu (National University of Singapore). “The epistemic argument for hedonism.” No date.
[17] Peter Bowden (University of Sydney, Australian Association for Professional and Applied Ethics). “In Defense of Utilitarianism.” SSRN. June 1st, 2009. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1534305
[18] Adam F. Nelson, J.D. “Towards a Comprehensive Theory of Lincoln-Douglas Debate.” 2008.
3 Comments
Hey, these are really helpful. Just saying.
This helps me understand case strategy tremendously, but I just have a question. What happens if you lose the framework debate to a deontology/libertarian case? Won’t it eliminate a lot of offense, and if so, how would you respond (if you lost framework)?
Hi Haritha,
Naturally, you want to keep your framework alive and win your framework. There are so many diverse util justifications in here that the neg has a lot to contend with if they want to win their framework.
Let’s say the neg does win their framework. Not all is lost. The purpose of the Bostrom card is to prove why human extinction would be extremely important under all frameworks. So, if you’re losing the framework debate, you can concede to your opponent’s framework and just prove why extinction matters the most under it.
Another option is conceding away your framework and contentions entirely (known as “kicking the aff”) and going for 4 minutes of case turns in the 1AR. That’s how I won the finals of the 2015 Wake Forest Earlybird!
Good luck. I hope this helps!