We recently enjoyed watching the debate between Destiny and Glenn Greenwald on whether the January 6th riots were, in fact, an insurrection. We briefly touched on their argument in our post about graphing arguments, and you can watch the video at the following link:
A key argument between Destiny and Greenwald was whether January 6th was comparable to other insurrections. In particular, Glenn asserted that the death count of January 6th was too low to qualify as an insurrection. Destiny then replied (at 1:01:20) that in the Battle of Fort Sumter, which started the Civil War, zero people were killed. Nevertheless, Fort Sumter was classified as an insurrection at the time, and was clearly top-of-mind for the legislators in the post-Civil War period who instituted rules against insurrectionists holding public office.
Surprisingly, when Greenwald addressed this comparison, he did not provide reasons why Fort Sumter was distinct from the January 6th riots. Rather, he said that Fort Sumter could only be considered an insurrection in retrospect: a counterfactual Battle of Fort Sumter that did not result in a subsequent Civil War would not be considered an insurrection. Immediately following Glenn’s statement, Destiny remarked: “you’re such a partisan hack” (1:02:06).
Destiny clearly made an Ad Hominem attack on Greenwald. However, we propose that this is, in fact, a charitable statement. Let’s explore why.
Charity and Fallacy
Identifying fallacious reasoning and applying the principle of charity are two actions that deal with related, but distinct, spheres of communication.
Identifying fallacious reasoning means that you have identified some sort of logical flaw in an opponent’s argument. For example, if one’s opponent were to say “If P then Q. Q, therefore P”, this would be a classic case of affirming the consequent (“the ground is wet, therefore it rained” fails the moment a sprinkler is involved). Affirming the consequent is a formal fallacy, but there are informal fallacies as well. For example, if one is arguing that rent control is an ineffective policy solution, one should seek out the strongest cases for rent control. If one instead spends all their time arguing against a 30-follower leftist account with an anime avatar, one is probably committing the strawman fallacy.
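To make the invalidity concrete, here is a throwaway sketch of ours (not anything from the debate) that brute-forces the truth table and finds the one assignment where both premises hold but the conclusion fails:

```python
# Check every truth assignment for "If P then Q. Q, therefore P."
# The argument is invalid if some assignment makes both premises
# true while the conclusion (P) is false.
for P in (True, False):
    for Q in (True, False):
        premises_hold = (not P or Q) and Q  # "P -> Q" and "Q"
        if premises_hold and not P:
            print(f"Counterexample: P={P}, Q={Q}")
# Prints: Counterexample: P=False, Q=True
```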
The principle of charity is different. Simply put, the principle of charity means one is not “attributing irrationality, logical fallacies, or falsehoods to the others' statements, when a coherent, rational interpretation of the statements is available” [Wikipedia]. If someone claims that drinking water won’t kill you, it wouldn’t be very charitable to bring up hypothetical cases of water hyper-concentrated with deuterium (heavy water) having negative health consequences.
In ideal settings, the two go together: you can be charitable to somebody while also identifying logical fallacies in their reasoning. But sometimes these two actions are at odds with each other. Sometimes an Ad Hominem attack is the most charitable thing you can do.
Greenwald’s statement that Fort Sumter was not an insurrection is an incredibly bespoke legal opinion, held by virtually no scholars in the field. Indeed, we’ve heard on good authority that Fort Sumter is often presented as *the* case of insurrection in law school. On its face, Greenwald’s denial of Fort-Sumter-as-Insurrection is 1) irrational, 2) fallacious (special pleading), and 3) probably false. But there is a coherent and rational interpretation of the statement available: Greenwald doesn’t want to admit that his standard for not-an-insurrection (“nobody died!”) fails when applied to a widely acknowledged and uncontroversial case of insurrection.
In short, calling Greenwald a partisan hack was one of the more charitable things to say in response. It would be insulting to call him stupid (he clearly isn’t), and his statement is sufficiently unhinged that it is more likely he is committed to it for partisan reasons than that he seriously endorses such a fallacious line of reasoning.
If the point of the principle of charity is to facilitate productive conversations, then correctly identifying one’s opponent as a “partisan hack” is very productive indeed. If Destiny genuinely believed Greenwald was unfamiliar with the facts of the Fort Sumter case, or unaware of the legal and historical consensus on its status as an insurrection, then the conversation could be approached quite differently. By correctly identifying Greenwald as a partisan hack in this instance, Destiny can instead focus on other substantive issues.
So what?
Most of the content we consume comes from rationalist and rationalist-adjacent spaces. We grew up reading LessWrong and SlateStarCodex, and now enjoy several rat-sphere blogs and podcasts like Tracing Woodgrains, Yassine Meskhout, and the Bayesian Conspiracy.
Something we’ve noticed over time is that these blogs used to be much more naive (charitable translation: much more trusting). The LW community had, by and large, bought into the notion that most disagreements were differences of language or simple failures of reasoning. This culminated, in our estimation, in SSC’s famous “Conflict vs Mistake” post, which made this reasoning more explicit. In short: people treat disagreements either as conflicts (wherein power determines who wins) or as mistakes (wherein people who disagree with us are simply mistaken).
More recent blogs in the space, as well as older blogs that have continued on (like AstralCodexTen), appear to have shifted somewhat in the direction of conflict theory. Not all the way, but they are perhaps a little more cognizant of how bad-faith actors can exploit a blanket commitment to never considering the person making an argument, trapping people in a loop of simultaneously 1) not wanting to be uncharitable, but 2) not knowing how else to explain how someone could make a point so clearly wrong.
Our feelings on the whole issue are pretty well-encapsulated by this rant from Destiny about talking to vaccine skeptics:
“I understand. Fauci and the CDC did get some things wrong. And it’s really bullshit that some of the media has a bias, and I totally get how you could have a mistrust in government when they act in the ways that they do. When they act kinda smug, when the politicians and the media act in the same kinda way. I understand the frustration there. And then you’ve got other figures like Joe Rogan who are willing to platform other voices who are unpopular, platform voices that don’t get platformed as much. When you hear these people talk, you have the inclination to trust them a little bit more, yeah…
I DON’T BELIEVE ANY OF THAT. I hate that, I hate doing it. What I really want to say is ‘oh, you think these are good drugs? let’s look at the studies’. Oh, wait, you’re fucking [redacted]. That’s it. You don’t have RCTs to support Ivermectin or Hydroxychloroquine. You’re a fucking moron. You know what? You’re so triggered by a 92-year-old limp-dick Fauci going up on TV talking that you’re willing to eat any fucking pill that a meathead like Joe Rogan will tell you to eat…. but I can’t say that, I have to talk to you like you’re a fucking triggered 5 year old”.
There’s a clear tension here, and we don’t mean to advocate for a total pivot to conflict theory. Rather, we started out as staunch mistake-theorists and have simply moved closer to the centre on the whole debate. Being nice to people who are clearly wrong (and explaining your reasoning) is of course a good thing, but there also needs to be a point at which you recognize that many of the people expecting niceness from you have all the epistemological nuance of a bull in a china shop.
I definitely admit to having been on this journey, or at least one very similar. I've always been fully aware that people lie and otherwise act in blatantly dishonest ways, but I assumed the optimal approach remained the same as if they were acting honestly. I still think that's *mostly* true, but over time I realized just how much of a wasteful time sink that approach is. It's VERY similar to the evolution vs creationism debates from 20+ years ago: there is limited value to earnestly debating someone who can only pantomime what logic would look like.
That's not to say that conversations should be foreclosed, but right now I'm interested in formulating a much better detection system. Two of the filters I'm contemplating involve asking someone:
1) to identify the weakest part of their argument
2) to articulate what it would take to falsify their position
If they refuse either ask, I'd say that's good evidence that they're an implacable fundamentalist who is immune to reason. It's hard for me to think of a scenario where this filter would net a false positive.
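Purely as an illustration, here is a minimal sketch of that detection rule in code; the names are hypothetical, ours alone, and the substance is just the disjunction of the two refusals:

```python
def likely_implacable(refused_weakest_point: bool, refused_falsifier: bool) -> bool:
    """Heuristic filter: flag someone who refuses to name the weakest part
    of their argument, or refuses to say what would falsify their position.
    Either refusal counts as evidence of immunity to reason."""
    return refused_weakest_point or refused_falsifier

# Someone who ducks even one of the two questions trips the filter.
print(likely_implacable(refused_weakest_point=True, refused_falsifier=False))  # True
```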
<insert quokka picture>