The topic for today is what to make of arguments like “your interpretation X leads to Y bad thing” and the common response “but you could read theory on Y bad thing.” Here’s an example:[1]
Here’s another:
The 2NR’s response is not side specific.[2] Nor is it specific to Skepticism or PICs.[3]
The first time I thought about these arguments was during JF16, the handgun ban topic, where they proved essential to the T debate about one of the best affs.[4] I heard them again at Bronx this past weekend. I think the “X leads to Y bad thing” argument is good under certain conditions, and the “you can run theory” response is almost always bad. Here’s why.
Let’s disambiguate. When debaters say “X leads to Y,” they could mean “X justifies Y” or “X increases the frequency of Y.” The former is legitimate; the latter on its own is not.
Arguing X justifies Y means the warrants for X are warrants for (allowing) Y. This version is readily applicable to the truth testing scenario in my introduction. The warrants for truth testing often imply or increase the legitimacy of skepticism, NIBs, a prioris, etc. For example, David Branse argues that truth testing is valuable because it allows debaters to “challenge all ethical assumptions we hold.” Turning that implication should be offense for the respondent to truth testing. In other words, if the implication is won, then the truth tester is stuck to it. They don’t get to say “no, I don’t defend Y” because proving X justifies Y is proof they defend Y.
This is straightforward, but there is a limitation: the size of the impact should be determined by the likelihood or frequency of those strategies. If they’re not common in the metagame, then they’re not a big theoretical disadvantage. In 2011, strategies like triggering presumption and/or permissibility, skepticism, expressivism, and error theory were much more popular, so the impact then was bigger than it is today.
It’s harder to see how this gloss on the argument applies in the PICs scenario, but it does. Assume the aff is right that a world without specification is one with more PICs, which are unfair. The theory initiator forwards an interpretation “do not specify.” While the standard-level warrants do not obviously justify PICs, they justify the interpretation, which describes a world that includes more PICs. It’s the same as the truth testing scenario but with an extra step.[5]
Arguing X increases Y means that when X is forwarded, we should expect a greater likelihood of Y. This version on its own is bad. It assumes debaters should be stuck to all the possible effects of their argumentation beyond what they defend and justify.
Suppose I read a run-of-the-mill plan. It’s disclosed, there’s a solvency advocate, there are many topic disads available. But! There is a popular article defending an extremely tiny and unfair PIC against the plan. Or perhaps there’s an article about a squirrely intrinsic and severance perm. Pick whatever unfair strategy you like, and assume it’s a common option when the plan is read. When I read the plan, the likelihood that a debater reads an unfair argument increases, but am I to blame? I think the answer is no, provided that I haven’t defended or justified the unfair argument in any way. It is perfectly coherent to defend and justify the run-of-the-mill plan but not the unfair argument. All the aff needs to say is “Counter-Interpretation: I can read the run-of-the-mill plan so long as I don’t read [the associated unfair argument].” It’s quite common for debaters to read specific counter-interpretations that dodge theoretical disadvantages, and our acceptance of them proves we don’t believe debaters should be stuck defending all the effects of their practices.[6]
Some people have taken the adage “it’s not what you do; it’s what you justify” to mean the effects on “norms” are relevant to a theory debate. That’s wrong. A better phrase for these norm theorists would be “it’s not what you do; it’s what what you do leads to.” But debaters are held to only the effects of their justified practices. They can describe and defend any rule their practice conforms to. Theory really is about what you justify and nothing more.[7]
All this means the second version of the argument isn’t worth anything unless it contains or adds to the first version. An uptick in unfair strategies is relevant only when those strategies are justified by a debater. In many cases, such justification can be avoided.
Debaters, I’m not saying you shouldn’t make this argument, but you should expect smart theory debaters to dodge it, and you’d be better off making the first version of the argument when possible. Judges, the correct interpretation of many variations of “X leads to Y” is probably “X justifies Y,” and we should generally treat it as such.
“You can run theory” is not persuasive in response to “X justifies Y” because the debater making that response must defend Y. Conceding Y’s unfairness and offering theory as a recourse is in tension with their original argument. You can’t kick an implication of an argument you’ve made! There are just three plausible options: (1) deny the implication, (2) turn the impact, or (3) outweigh the impact. For “you can run theory” to be useful, it must function as impact defense so the impact can be outweighed (option 3).
But the availability of theory does nothing to mitigate the unfairness impact. Imagine a different scenario where a debater reads “Interpretation – debaters may not read skepticism” or “Interpretation – debaters may not read PICs.” Of course, “no impact: you can read theory against it” is not a viable response.[8] It begs the question of the whole theory debate. For this argument to work, we would need to be indifferent between a world without the PIC/skepticism and a world where the argument is read and theory is won against it. But we’d much prefer the former to the latter (assuming the unfairness of PICs/skepticism). There is no or very little intrinsic value to ‘checking abuse,’ so the introduction (or justification) of unfair tactics—even if correctly addressed by theory—is almost always worse than never doing so in the first place. In other words, “you can run theory” is very, very poor defense at best.
If “X increases Y” has an impact—which I dispute above—“you can run theory” is a bad response. On the “increases” version, we care about the frequency of the unfair argument. The fact that a debater can run theory does not address frequency. If the idea is that theory mitigates the unfairness, that’s not true (explained above). If the idea is that theory deters the unfair argument, it still doesn’t really answer the warrant. Debaters read unfair arguments regularly, so the burden is on the respondent here to explain why theory is an especially good deterrent in this case. It would be very difficult to prove without empirical evidence.
The best version of “X leads to Y” is one that proves X entails/justifies Y and X increases the likelihood of Y. This version of the argument is least susceptible to counter-interpretations crafted to escape it and most impactful. Proving “X increases Y” is not enough absent a clear link to something the other debater defends or justifies.
In response to either, “you can run theory” and its variations are not compelling. The existence of procedurals to address unfairness does not mean we should not seek to reduce it.
There are a number of tricky issues in this area, some of which I discuss in the end notes. I’ve been thinking about this post seriously for a few months; I’m not totally sure I’ve got it right, so let me know what you think in the comments or on the Facebook post!
[1] I wrote this example several months ago, but I just saw it hashed out in a CX at Bronx. The aff read the Nelson ’08 card that argues truth testing provides the negative with “functionally infinite ground, as there are a nearly infinite variety of such skeptical objections to normative claims.” The neg asked, “why can’t you read theory on those?”
[2] Reverse the 1NC interpretation, and the argument also seems viable for the neg:
This example could be controversial, though, because of its ‘altruistic’ character (i.e., it says my interpretation helps you). The role of theory/topicality is generally understood to be checking abuse, so it’s strange to say ‘you gave me a structural advantage, so you should lose’ or ‘you gave yourself a structural disadvantage, so you should lose.’ I won’t stake out a position here, but such arguments are common. Consider, e.g., the ‘plans good’ justification where affs say “my plan ties me down, which is good for you since I can’t shift in the 1AR.”
[3] I use PICs as an example because the response “run theory on PICs” is common, and most debaters in these debates don’t contest PICs bad in the line-by-line (though I’d enjoy such a tack). If assuming PICs are unfair is beyond the pale, then every time I write “PICs,” assume I’m saying “NIBs” or some other more plausibly unfair thing. It doesn’t matter much what the allegedly unfair practice is.
[4] Harvard-Westlake (and probably others) read a plan that banned handguns for people convicted of intimate partner violence. On its substantive merits, the aff simply dominated, so teams would go for T “ban means all.” I must have judged the aff against topicality (and coached T against it) a million times that year. One of the go-to arguments for their 2AR on T was tantamount to PICs bad: “if ban means for all people, then neg can PIC out of certain groups, which is worse.” By the end of the topic, they had maybe three or four weighing arguments for why similar PICs were worse than plans. I saw a variation of this same debate in finals of Yale 2018 last month.
[5] Put another way: the neg had the option to include “and debaters may not read PICs” in their interpretation and chose not to. This is unusual, but I see no reason it should be disallowed. The neg gains no advantage from PICs being bad since the aff doesn’t violate that part of the interpretation. One could think interpretations must be more closely related to violations, or that violators must be able to meet all parts of the interpretation, but I’m not sure why. In any event, even if this wacky argument is bad, the point still stands that “do not specify” justifies a world with more PICs.
[6] A common example occurs when negs read ‘plans bad.’ The aff can contest the description of their strategy as merely “plan” and instead say “aff debaters may read plans that are disclosed, have solvency advocates…etc.” It’s the same move. We could imagine other models of theory debate, e.g., where the respondent is stuck to the violation’s description, the practice is described by its perceived effects, or only certain types of “planks” are allowed. I don’t subscribe to any of these, but they’re possible models. If one is worried about the practice, consider its functional limits: the more specific the counter-interpretation, the less offense the theory respondent accrues, the more unpredictable it is, and the less likely it is that they truly meet it.
[7] I think it’s plausible that the default for theory respondents is to defend the violation, which means there’s little gap between what they do and what they justify. I could see someone thinking the default is to defend something closer to the negation of the interpretation, but that seems to tilt the balance too far in favor of the theory initiator.
[8] This illustration also undermines the notion that truth testing could be fair because it encourages tactics which often lose to theory, putting the non-truth tester in a better position to win than without it.