God-Draw Modern: a Thought Experiment

Sometimes, you just get a perfect draw.

Sometimes your opponent plays a Burning Inquiry, discards 2 Vengevines and a dud, then runs out 3 Hollow Ones and leaves you staring down 20 power on the first turn, 8 of which has haste. Every once in a while, you mulligan in Standard and draw 2 Mountains, a Ghitu Lavarunner, a Runaway Steam-Kin, a Light Up the Stage, and a Wizard’s Lightning, and you just know your opponent won’t get a third turn. I was playing a Vintage tournament online once, mulled down to four, and got a hand of a land, Black Lotus, Necropotence, and Duress. I didn’t lose that game.

We love talking about these hands, hearing about them, and most of all drawing them. The problem is, they’re not very common. In fact, those perfect draws are so rare they’re nicknamed “magical christmasland”. Sure, there are hands you could draw that will win the game on the first upkeep, and one time out of a million, you’ll get an incredible story. But the other 999,999 times, all you’ve got is a crappy deck full of mediocre cards and no way to beat that turn-1 Thoughtseize. The sad truth is, the craziest god draws are so inconsistent, you could never build a deck that would get them with any reliability.

But what if you could? What if, when you built your janky glass cannon, not only would you get your perfect draw often enough to enjoy it, you got it every single game?

What if everything always went right?

Today, we will test the limits of what you can do with a perfect draw. We will see broken opening hands clash with broken opening hands. We will explore the diverse strategies you might bring to bear in a tournament with these rules. And we will see the rough outline of a metagame take shape.

1. The Rules

I’m using the Modern cardpool for this experiment. Standard is just too boring, and the eternal formats have a few too many moving parts. Modern is also famous for being full of broken glass cannons, so when it comes to busted draws, it’s a natural fit.

The banned list still applies, of course. Surprisingly, I don’t think that would make much difference, but it’s there. There’s still a 60-card minimum, and the 4-of limit still applies. Every game played in this experiment is theoretically possible in the next Modern tournament you go to.

There is only one change: you control all randomness created by your deck. You decide every card in your opening hand, and what card you draw. Your only limitation is that you can’t go into your deck and change the order in the middle of the game. Cards in your library don’t spontaneously swap places in-game. Rule of thumb: you get to reorder your library however you want every time you would shuffle it, including before you start the game. If you want to reorder your library in response to something your opponent does, you have to get a shuffle effect.

There is one exception to this: in Magic, you decide who goes first before you draw your opening hand. That means you can change your hand based on whether you’re on the play or on the draw. Neither of these rules seems crucial, but they will both be important later on.

Lastly, you’re not the only one with this ability. Your opponent also gets their god draw. They can use their hand either to counter what you’re doing, or to race you with their own broken strategy.

Enough chit-chat. On to the lists!

2. The Broken Stuff

We’ll start by looking at the first strategy I thought of, and probably the cleanest one. Your opening 7 would look something like this:

4× Chancellor of the Dross, 2× Soul Spike

At the beginning of the first upkeep, you reveal four copies of Chancellor of the Dross from your opening hand, and get four triggers. Then, either after they resolve or in response, you cast the two Soul Spikes, pitching two Chancellors to each one. The Chancellor triggers drain your opponent for 12, and the Spikes cover the remaining 8. Your opponent takes 20 before the first draw step. You even have a spare slot for some form of protection like Pact of Negation. Easy peasy.
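
If you want to double-check the arithmetic, here’s a quick sanity check (a throwaway Python sketch, nothing more):

```python
# Dross-Spike math: each Chancellor of the Dross trigger drains 3,
# and each Soul Spike deals 4.
CHANCELLOR_DRAIN = 3
SOUL_SPIKE_DAMAGE = 4

total = 4 * CHANCELLOR_DRAIN + 2 * SOUL_SPIKE_DAMAGE
print(total)  # 20 -- exactly lethal, with nothing to spare
```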

The problem is, this kill is incredibly fragile. Any interaction, whether opposing Soul Spikes, a Commandeer, or a Chancellor of the Annex, will fizzle it. And each of them requires a different tool. Soul Spike and Commandeer need a counterspell, but Chancellor needs either a mana source or a 0-mana spell it can use to absorb the trigger before casting the Spikes. But more importantly, it has no answer to a single Leyline of Sanctity.

It’s also even weaker on the play, for two reasons. The first is Force of Negation. To understand the second, we need to talk about APNAP. APNAP stands for “Active Player, Non-Active Player”, and it’s an obscure rule determining the order triggered abilities go on the stack when multiple abilities from different players trigger simultaneously. The Active Player (usually, the one whose turn it is) goes on first, then the Non-Active Player goes on top of that.

In other words, if both players have abilities triggering on the first upkeep, the triggers belonging to the player going second resolve first.

Imagine you’re playing Dross-Spike, but your opponent also has a Chancellor of the Dross in hand. If you’re on the draw, your Dross triggers resolve before your opponent’s. Then, in response to their trigger, you can cast the Soul Spikes, killing your opponent before they get their upkeep abilities. But if you’re on the play, your abilities go on the bottom. You can still get the Soul Spikes first, but the opponent will survive to resolve their Chancellor trigger.

There is no room for error in the Dross-Spike hand: it deals exactly 20. If the opponent is on the draw and has a single Chancellor of the Dross, they won’t die, you’ll have spent your whole hand bringing them to 3, and they’ll still have 5 cards to kill you and fizzle whatever you topdeck.

APNAP, along with the power of Gemstone Caverns and the force cycle from Modern Horizons, means that most of the time it’s better to go second than to go first in this format. There are very few exceptions. The next hand is one of them.

3× Chancellor of the Dross, Cavern of Souls, Insolent Neonate, Stinkweed Imp

Like the Dross-Spike hand, this version starts with some Chancellor triggers. This time, you only need three, putting your opponent to 11. Then, come the main phase, you play Cavern of Souls and tap it to cast Insolent Neonate. You sacrifice the Neonate, discarding Stinkweed Imp and dredging. You dredge up a card of your choice (probably another Stinkweed Imp, for durability), and four copies of Creeping Chill. They each trigger, draining your opponent for a further 12. They take 21 and die.

While this version is more convoluted and operates at sorcery speed, it’s resilient to a lot of the hate you’d throw at the Dross-Spike combo. Neither Creeping Chill nor the Chancellor triggers target, so this version gets around Leyline of Sanctity. The only spell you have to cast, Insolent Neonate, is uncounterable, so all countermagic is irrelevant. It’s also a creature, so Commandeer won’t do it either. And while it’s too slow to protect you from Dross-Spike on its own, it has an extra slot for a Leyline of Sanctity. When the dust settles, it will still have a Stinkweed Imp in its graveyard, allowing it to set up for a finishing blow with Prized Amalgams or something.

At the same time, it has its own Leyline problem: Leyline of the Void. No graveyard, no dredging. And in this format, you can’t hope they mulligan to oblivion looking for it: if they want it in their opener, it’s there.

Is there any kill that isn’t so easily disrupted? Yes and no. Allow me to introduce you:

Island, Chancellor of the Tangle, Allosaurus Rider, Neoform

There are a few different variants of this hand, of course, depending on whether you’re aiming for Eldritch Evolution or Neoform, but those are minor details. In general, you generate some mana with Chancellor of the Tangle, then pitch it (and a fifth green card) to cast Allosaurus Rider. You then immediately sac it to Neoform and search out a Griselbrand. You pay 7 life and draw seven cards, two of which are Autochthon Wurm and Nourishing Shoal (the rest can be free countermagic, disruption, etc.). You use them to gain 15 life, which lets you draw fourteen more cards. You keep going like this until you draw your entire deck, generate some mana, cast a Laboratory Maniac and some cantrip (or just activate Griselbrand again), and win.
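
To get a feel for how much room the loop has, here’s a rough model (a quick Python sketch; it assumes the stacked library hands you a Shoal-plus-Wurm pair in each of the first four batches of seven, and that your opponent doesn’t interfere):

```python
# Griselbrand loop: pay 7 life to draw 7, and cast Nourishing Shoal
# (pitching a 15-mana-value Autochthon Wurm) to gain 15 life whenever
# a pair is available.
life, library = 20, 53          # 60-card deck minus the 7-card opener
drawn, shoal_wurm_pairs = 0, 4  # 4 Shoals and 4 Wurms in the list

while library > 0 and life > 7:
    life -= 7
    cards = min(7, library)
    library -= cards
    drawn += cards
    if shoal_wurm_pairs:
        life += 15
        shoal_wurm_pairs -= 1

print(life, drawn, library)  # ends with the whole library drawn and life to spare
```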

This kill never targets the opponent. It bypasses the graveyard completely. And where the other two hands had six mandatory cards and only one slot for disruption, this one has two. On the draw, it can play a Chancellor of the Dross, leaving it at 3 life vs Dross-Spike and 2 vs Dredge, and a Pact of Negation, drawing a SECOND Pact on its turn for further protection. On the play, it can run Leyline of Sanctity and Pact of Negation, and just kill the dredge player before they can go off. No matter what card the other strategies use to fill their seventh slot, this deck can beat it. And because it has two slots for protection in general, it’s that much harder to disrupt it through other means.

Harder, not impossible. Two slots are enough to stop both the Dredge and Dross-Spike kills, but they aren’t enough to protect the combo from a full range of disruption. Neoform and Evolution are both tempting targets for Force of Negation (which is always live against you, since you can only combo off at sorcery speed), and while you can run two Pacts, your opponent can run three Forces plus fodder. Chancellor of the Annex requires its own dedicated tool (Ornithopter, which can eat all the triggers and clear the way for more important spells), which means that even on the draw, you can’t beat an opposing hand of Chancellor of the Annex and three Force of Negations. And of course, there’s always Flusterstorm, which your opponent can cast off a Gemstone Caverns if you’re on the play, or off a regular land drop if you’re on the draw.

While there are many, many other viable turn-1 kills, I don’t believe any of them can match the effectiveness of the ones I’ve laid out here, and I’m certain that none of them can guarantee a win. There are simply too many hoops to jump through. Bear in mind that a guaranteed kill:

  • Can’t involve targeting your opponent
  • Can’t use the graveyard
  • Can’t expose any spell to countermagic, particularly noncreature spells.
  • Must either go off on the first upkeep, or have space to disrupt the Dross-Spike combo AND not care if the opponent is on 32 life from Dross triggers.

You can avoid some of these restrictions, but every one costs slots. To target your opponent or use the graveyard, you need the ability to remove multiple enchantments, through opposing countermagic, before starting your combo. You can buy yourself time to play a land, but then you always need room for a Leyline of Sanctity. With a four-card kill, you might be able to win a counter war, but only if you can avoid a Flusterstorm. And if you can manage all that…

You’ll just run into other, more involved answers. It takes a lot of slots, but Leyline of Anticipation, Leyline of Lifeforce, Simian Spirit Guide, and Wild Cantor let your opponent play anything from Angel’s Grace to Trickbind. At the end of the day, Magic is an interactive game. You can find as many angles of attack as you want, but there will always be a countermeasure.

3. Playing Defense

While I couldn’t find a guaranteed combo kill, I did find ones that required very different answers. The Dross-Spike plan is vulnerable to countermagic (though you can’t rely on Force of Negation since you might be on the play) and Leyline of Sanctity. Both of those are useless against the dredge kill, which is only vulnerable to Leyline of the Void (and a few other tools, like Ravenous Trap, which aren’t any better). The Griselbrand combo is susceptible to countermagic, but not to either of the Leylines. You also need at least three counterspells, or Flusterstorm and a way to reliably cast it on both the play and the draw.

That seems like a lot, but I’ve found a configuration that can shield you from all of them. This is where the rules of deck ordering come in: the cards you use change depending on whether you’re going first or second.

If you’re going first, you use:

Leyline of Sanctity, Leyline of the Void, Flusterstorm, Island, Chancellor of the Annex

And if you’re going second, you use:

Leyline of the Void, Gemstone Caverns, Flusterstorm, Chancellor of the Annex

Note that, even though Gemstone Caverns requires you to exile a card, you can reveal Chancellor of the Annex and then exile it, so on the draw you only use four cards. On the play, you have two cards in your opener with which to win. On the draw, you have three, plus the card you draw on the first turn.

If you’re on the play, the two Leylines do most of the work. Sanctity means the Dross-Spike player can’t Spike you, so you wind up at 8 life, your opponent at 32, but you have a five-card hand and they have six useless duds. The Void prevents the dredge player from going off, so you wind up at 11 life, while your opponent is at 29, has a 1/1 with menace and a Cavern of Souls set to Vampire, and is drawing nothing but Creeping Chills for the next several turns. And against the Griselbrand deck, Chancellor means they have to use one of their three (remember, they draw up to eight) free slots to either pay for the Chancellor or play a free spell to absorb the trigger. Either way, when you Flusterstorm their tutor spell, they’ll have only 2 cards against a Flusterstorm with a storm count of at least 3. There’s no combination of cards they can use to push past that.

On the draw, you can’t rely on Flusterstorm alone to save you from the Griselbrand combo. You need a Gemstone Caverns to provide the mana. On the bright side, the Caverns can replace not just your Island, but also Leyline of Sanctity: you can wait for the second Soul Spike, then counter it with Flusterstorm. They have to use their seventh card to get around your Chancellor, so this deterministically beats them. You wind up at 4 life, with one land on the field and four cards in hand, against an opponent who has one card left between their hand and the battlefield.
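
If you want to verify the life totals in those scenarios, here’s a small sketch of the bookkeeping for the control player’s side (same assumed numbers as before: 3 per Chancellor of the Dross trigger, 4 per resolved Soul Spike):

```python
DROSS, SPIKE = 3, 4

print(20 - 4 * DROSS)          # on the play vs Dross-Spike: 8
print(20 - 3 * DROSS)          # on the play vs Dredge: 11
print(20 - 4 * DROSS - SPIKE)  # on the draw vs Dross-Spike, one Spike resolves: 4
```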

Once the mayhem of the first turn passes and both players enter the midgame, card and mana advantage become hugely important. Because both players have such good selection, and because you’re almost guaranteed to dump most of your hand fighting over turn 1, each new card you draw is hugely impactful. Likewise, players with better mana selection later on have a huge advantage, since they don’t have to draw more lands to unlock new spells. When the first turn is over in these matchups, the control deck has only one land and two cards in hand. But I would give it a 99% chance of winning those games, every time.

It is not, however, the perfect strategy. It can still be exploited, just not on the first turn. Imagine, for instance, a start like this:

Cavern of Souls, Noble Hierarch, …

Let’s say our control player is on the play. They’re defending against a strategy that doesn’t abuse the graveyard, doesn’t try to target them, doesn’t rely on instants or sorceries, and can simply ignore the Chancellor trigger. Four of the cards in their opener are dead. After playing their land, they have two live cards in hand, plus whatever they topdeck, to deal with an optimized Humans draw.

Of course, there’s no way for the Humans deck on the draw to protect itself from all the glass cannons. Even if it runs both Leylines in its opener, it still needs a way to deal with the Neoform combo through a Pact of Negation. At a bare minimum, that will take two more cards, at which point our tempo player is rapidly running out of slots to curve out with. In order to exploit the defensive posture of the control strategy, you have to open yourself up to the glass cannons.

And just like that, we have a metagame, with no fewer than four distinct archetypes:

  • The control strategy knocks out the Chancellor of the Dross-based glass cannons and Neoform, but loses to tempo and aggro strategies that exploit its defensive stance.
  • The tempo and aggro strategies have a leg up against control, but struggle against either the Dross combos or Neoform, depending on which one they underprepare for. Chances are this is Neoform, because it’s the hardest one for them to fight.
  • Neoform combo beats the Dross combos and probably the aggro/tempo strategies, but loses to control.
  • The Dross combos are lower-tier, but whenever other players skimp on their Leylines, they suddenly dominate.

Note that I say “strategies”, not “decks”. That’s because there’s nothing stopping you from playing several of these at once! With 60-card decks, you can always run four Chancellor of the Dross and two Soul Spike, and just leave them on the bottom of your deck when you don’t go for that combo. Likewise for Dredge, or even the Neoform strategy. This implies something unique about this format: it’s likely that it both has an unquestioned top deck that dominates all others, AND has a thriving and vibrant metagame with aggro, tempo, control, and combo all represented! You just decide which one you are at the start of each game, rather than at the start of each tournament.

Conclusion

God-Draw Modern does not look like a fun format. Based on this experiment, every game would end on the first turn. The perfect draws transform slightly-favored matchups into curbstomps, and crawling back from even a minor disadvantage is almost impossible.

And yet, even in this crazed experiment, it’s not just a goldfish race. Before the first turn begins, both sides already start interacting and deploying competing answers. There is a metagame with no clear best deck. And even in this world where decks can goldfish on turn 0 with 100% consistency, that metagame is defined by how much the fair decks prepare for the mirror, not by the combo decks they beat up on. You can change fundamental rules of Magic: the Gathering, and it will still have multiple viable strategies.

Of course, I’ve probably missed something. If you have a better line that flips a matchup, or if you think you’ve found a deterministic win, drop a comment and let me know. If somebody finds something that obsoletes this meta, I’ll write an update.

Until then, may your openers be as good as the ones in this post!

 

All About Pete

I expected better of Current Affairs.

Current Affairs is a premier socialist magazine in American politics. Its commentary is supposed to be nuanced, insightful, and honest. It’s supposed to tell the truths the smarmy neoliberals are afraid to confront.

Which is why I am so disappointed in this takedown of Pete Buttigieg that I recently read on their site. It is not nuanced. It is not insightful. But above all, it is not honest. It alters direct quotes to make him seem less genuine. It makes claims its own sources contradict. It uses the same tired tropes conservative politicians use to push sexist attacks on progressive politicians. Sometimes, it just makes shit up.

To clarify: I am not planning on voting for Pete Buttigieg for President. There are legitimate criticisms you can make of him and his campaign. Some of them are in the Current Affairs piece, and I won’t be covering those. Those are fair play.

I’m not writing this to protect my chosen champion. I’m writing this because I don’t like liars.

1. Abusing Ellipses

Current Affairs wants you to think Pete is not a real progressive. They want you to see Pete as a self-obsessed, privileged, centrist establishment candidate who doesn’t care about the real issues.

And when you read a quote from Buttigieg’s book like this one, it’s hard not to think they have a point:

“Striding past the protesters and the politicians addressing them, on my way to a “Pizza and Politics” session with a journalist like Matt Bai or a governor like Howard Dean, I did not guess that the students poised to have the greatest near-term impact were not the social justice warriors at the protests […] but a few mostly apolitical geeks who were quietly at work in Kirkland House [Zuckerberg et al.]”

It seems like he’s blandly dismissing those silly socialist SJWs here. They aren’t accomplishing anything. It’s Zuckerberg and the technocratic establishment who get the job done!

There’s just one problem: that little “[…]” in their quote. I’m not against using brackets like that to make a block quote more readable, if what you’re cutting from the quote is irrelevant to the subject. But it turns out that when you read the actual passage, bracketed-out part and all, the meaning of the sentence is very different. From Buttigieg’s book:

“Striding past the protesters and the politicians addressing them, on my way to a “Pizza and Politics” session with a journalist like Matt Bai or a governor like Howard Dean, I did not guess that the students poised to have the greatest near-term impact were not the social justice warriors at the protests, NOR THE MORE BUTTONED UP TYPES I WOULD FIND AT THE INSTITUTE OF POLITICS, but a few mostly apolitical geeks who were quietly at work in Kirkland House [Zuckerberg et al.]”

Emphasis mine. If Buttigieg is dismissing the Social Justice Warriors here, he’s also dismissing the establishment politics jockeys Current Affairs wants us to believe are Buttigieg’s tribe.

But that phrase doesn’t fit the narrative. They want “Socialist good, Buttigieg bad”. So they cut the line where Buttigieg throws shade at the political establishment on campus, and hope that you don’t notice.

Oh, and you see the link they put on Howard Dean’s name, the one pointing to an article that claims he’s an insurance lobbyist who opposes single payer? I have no idea what they’re talking about.

This isn’t the only time the article is dishonest in reciting direct quotes. Later, they quote an interview he gave to New York Magazine, where he was asked about his consulting work with McKinsey & Company (note: McKinsey has been in the news a lot lately for an incredibly long list of awful misdeeds, most of which seem to have happened years after Buttigieg left). In the article:

“Buttigieg even became a bit defensive, suggesting that consultancies might be singled out arbitrarily for ethical judgments:

You don’t see blanket denunciations of law firms that serve any number of these clients, because the thought is just, client service is what it is. And you serve people and represent their interest. But there seems to be a higher expectation of consultancies.”

Current Affairs is right: it would be a weird point to make that we are hypocritically arbitrary in condemning consulting firms but not law firms. So it’s a good thing that’s not what Buttigieg was saying at all. The full quote:

“But there seems to be a higher expectation of consultancies, and it may be because consultancies take a lot of pride in the work that they do with foundations and other great causes. They don’t want to be as amoral as a law firm.”

That’s not arbitrary. He gave a distinction between consultancies and law firms, that one claims a moral high ground and the other does not. They cut him off mid-sentence.

I should also mention the sentence immediately before what they quoted:

“But, in the end, when you have an apparatus like that that is so woven into the American private sector, it’s going to be as moral or immoral or amoral as the American private sector itself.”

Spoken like a true corporate stooge.

As a general rule, if you find an ellipsis in a block quote in this article, it’s hiding something that directly contradicts the point the author is trying to make. Take this passage on why Buttigieg decided to run for mayor in the first place:

“The reason to run—the ideal reason to seek any job—was clear: the city’s needs matched what I had to offer. The city was fearful of losing its educated youth, and I was a young person who had chosen to come home and could encourage others to do the same. Its politics were mired in the struggle between two factions of the Democratic Party, each with its own candidate in the race; I belonged to no faction, and could arrive without strings attached. … This didn’t just feel like an opportunity; it felt like a calling.”

It sure sounds like he’s a vapid, self-obsessed politician with no interest in what communities actually need. But before the last sentence, there’s that dreaded ellipsis. What secrets is it hiding?

“And as the administration struggled to generate economic growth and maintain confidence in the business community, I had a professional background in economic development and was fluent in the language of business-even while having fought and bled politically for organized labor in the auto industry.”

Ah, so that’s what he was offering the city of South Bend. Pro-worker business acumen. They’ve also cut any mention of his union work, because otherwise their readers might be wondering why this corporate neoliberal spent so much time fighting for labor.

These are not the only examples, but if I went through every single time they misquoted Pete Buttigieg, I’d hit a word limit somewhere. But I shouldn’t have to. One misquote like this is unacceptable in a high-quality publication. Three is borderline journalistic malpractice.

And we’re only getting started.

2. Cooking the Books

The article portrays Pete Buttigieg’s time as mayor as a technocratic failure, one where he invested time and money in WiFi-enabled manhole covers and ignored real problems like homelessness and evictions, and one where he reliably ignored or used community leaders of color.

The sources they themselves cite tell a different story. Take the coverage of Buttigieg’s “1000 homes in 1000 days” initiative. When he first took office, South Bend had a problem with lots of dilapidated, abandoned, and unsafe homes. So he set a goal: they would demolish, or repair, 1000 of those homes within 1000 days. And they did. Buttigieg lauds it as one of his signature accomplishments.

Current Affairs sees it differently:

“But news coverage of the plan makes it sound a little less savory:

By leveling fees and fines, the city leaned on homeowners to make repairs or have their houses demolished. In many cases, Buttigieg said, the homeowners proved impossible to find amid a string of active and inactive investment companies. In other cases, he said, they were unwilling or unable to make repairs.

Make repairs or have your house flattened? Wait, who were these people who were “unable” to make repairs? Were they, by chance, poor? Also, how did these houses become vacant in the first place? Were people evicted or foreclosed on? Look a little deeper into the coverage (NOTE: this links to the same article they cited earlier, so apparently “look a little deeper” means “read the next paragraph”) and you’ll find that this was not simply a matter of “efficient and responsive government,” but a plan to coerce those who possessed dilapidated houses into either spending money or having the houses cleared away for development:

Community advocates in poorer, often African-American or Hispanic neighborhoods began to complain that the city was being too aggressive in fining property owners over code enforcement. The city leveled fines that added up to thousands of dollars, in certain cases, to pressure homeowners to make repairs or have their houses demolished.”

Yikes. That certainly doesn’t sound good. It would be horrifying if Buttigieg had sat back and let his flagship program demolish the houses of black residents.

Which is why it’s a good thing that, according to the same source they just cited, “no one lost the homes in which they were living and the city made every effort to reach and work with homeowners.”

It’s true, there were issues with the initial rollout of the plan. But the city government immediately took steps to address the problems. From the same article: “South Bend began providing more incentives for people in poorer areas to fix up their homes, she said, including a $2 million grant program for home repair and a $2 million program to provide affordable housing.”

The “she” in that quote, by the way, is Regina Williams-Preston. She’s one of the black community organizers who originally disapproved of the program. She had this to say about Buttigieg:

“I think it’s really a mark of a true leader to hear that maybe ‘I’m doing something wrong’ because, quite frankly, there were a lot of mistakes made,” said Williams-Preston, who is running to replace Buttigieg as mayor. “But what happened was he did slow down and he did listen and he did change course. And so many people would dig in their heels and just kind of keep going.”

Current Affairs seems incapable of talking about Buttigieg’s record in South Bend without obviously lying. They claim Buttigieg did nothing to combat the gentrification problem. From the same source they “looked a little deeper into” earlier, another black community leader named Stacey Odom describes a time she approached Buttigieg about a redevelopment program that would push out black tenants in the area, and asked for $300,000 to help local inhabitants keep their homes instead. He cancelled the redevelopment plans and offered her $650,000 instead.

“That’s the kind of person you want in office,” she said. “Someone who is looking at your best interests. And if they’re not (at first), if you go to them and tell them what your interests are, then they will take your concerns and make them their concerns.”

Then there is the charter school issue. Apparently, “A charter school company (“Success Virtual Learning Centers”) is trying to introduce one of those most hellish of things, the “online charter school” where students sit in a bare room all day being taught by a laptop instead of a teacher.” And Buttigieg has said nothing on the subject.

If you read the source, what’s actually happening is, a Michigan-based charter school group (which, to be fair, kinda sounds like a scam) started looking into opening one in South Bend last month. They had a preliminary meeting with the school district superintendent scheduled for late March.

They haven’t opened the school yet. They haven’t gotten any agreement from local officials. Nothing has actually happened. Apparently, “someone unpleasant is thinking about moving to South Bend” is now a ding against Pete Buttigieg.

That’s not to say there’s no legitimate criticism you can make against Pete Buttigieg. But that’s no excuse for padding your hit piece with blatant lies.

3. Character Assassination

It’s not enough for Current Affairs to convince you that Pete Buttigieg is light on policy and heavy on woolly “character” talk. They also malign that character.

It’s true that a lot of Buttigieg’s appeal right now comes from his folksy, midwestern vibe. But in this article, that brand is described as hollow and calculated, a political ploy inspired by focus groups and not the real Pete.

I don’t know Pete Buttigieg. I can’t tell you the deepest truths of his heart. But I can tell you when this article lies about them to make you think this man you’ve never met is an asshole.

The assassination begins with his childhood. Buttigieg describes his early years as a kind of small-town experience, which led to quite the culture shock when he left South Bend for Boston. Current Affairs takes issue with that: “So, even though he grew up on the campus of a top private university 90 minutes from Chicago, the Boston subway amazed him.”

Yes. It would. 90 minutes’ drive is a long way. 90 minutes will get you from downtown Boston to Connecticut. I’m sure he visited a few times, but Pete Buttigieg definitely didn’t spend his weekends riding the L to hang out with his friends. They’re arguing he’s a closet city boy because he lived the width of Belgium away from a major metropolitan area.

From there, they move on to his politics. I mentioned earlier that they misquoted Buttigieg’s paragraph on the Harvard living wage protests, but they also use this passage to claim that to Pete, “Activists are an alien species, one he “strides past” to go to “Pizza & Politics” sessions with governors and New York Times journalists.”

This is hard to reconcile with what they write only three paragraphs later: “Buttigieg takes pride in the fact that at a rally outside Harvard’s Science Center, he argued that the Iraq War did not meet the criteria for a “necessary war,” though he was convinced Saddam Hussein possessed weapons of mass destruction.”

Apparently, you can be giving headline speeches at anti-war protests, and Current Affairs still won’t believe you’re an activist.

At least it’s better than misrepresenting black community members in South Bend to pretend that they somehow had beef with Pete Buttigieg. They cite a passage in his book where he struggled to answer a black union leader when asked how they could know he wasn’t lying to them about what he would do in office. They describe the exchange as Buttigieg having no idea how to respond, as though this union leader “got him”. They leave out the sentence right after their quote revealing that the union leader was actually satisfied by the meeting, and the firemen’s union went on to endorse Buttigieg.

That union leader’s name, by the way, was Kenny Marks. I got that from Pete’s book, which is surprising, since the article told me that in the memoir, “the people of Kabul appear as anonymous pieces of scenery. (In this respect they are like the Black people of South Bend or the homeless people of Harvard Square: nameless nonentities whose opinions Buttigieg has never sought.)” In fact, all I’ve read of this book is the sample sections I can coax out of Google, and I found a pile of minority community activists mentioned by name.

I couldn’t find any in the Current Affairs article. Perhaps because every quote I could find from actual minority community leaders in South Bend contradicted what the article was saying.

As bad as that is, it’s not their lowest point. The lowest point is in the section “Unity through Vacuity.”

“But here’s a fact about Pete Buttigieg: He picks up languages quickly. He already speaks seven of them, and you can find stories online of him dazzling people by dropping some Arabic or Norwegian on them. The lingo of Millennial Leftism will be a cinch for Pete. He will begin to use all the correct phrases, with perfect grammar. The question you should ask is: What language has he been speaking up until now?”

If Pete Buttigieg makes good points, supports progressive positions, says and does things that blatantly contradict everything I’ve said so far, don’t believe him! He’s just pandering by using your rhetoric!

This is gaslighting. They want you to disregard your own perceptions of Buttigieg as you get to know more about him, and trust what they’ve been saying about his record instead. It’s sick.

The writers at Current Affairs should know better. Accusations that a lefty politician will use your rhetoric and speak your language without meaning it, that all he cares about is “advancing Buttigieg himself to the next rung of the political ladder”, aren’t new to politics. They’re the same tactics conservatives have used to attack progressive politicians, particularly women and minorities. Alexandria Ocasio-Cortez uses verbal blackface to pander to the black vote. Elizabeth Warren doesn’t care about you, she just wants to be president.

It’s an effective tactic. That’s why they keep doing it. But I would have hoped we on the left would have more of a conscience.

A Quick Digression On False Modesty

At one point, the writer discusses a time Buttigieg misused the term “false modesty”, or “the insincere performance of modesty by an egotistical person”, conflating it with plain old “modesty”. He uses this as a metaphor for how Buttigieg doesn’t understand false modesty, because he is the embodiment of the term. He can’t tell the difference between it and genuine modesty, because false modesty is all he knows.

He’s wrong. False modesty is when you act modest about something you’re actually good at. There’s nothing inherently egotistical about it. Pete’s usage was correct.

4. Who Watches the Watchers

Sadly, this article is part of a pattern I’ve found at Current Affairs.  When I read one of their pieces on a topic I’m not well-versed in, architecture, say, I’m astonished by how nuanced, well-researched, and in-depth their content is. I come away having learned something, and feeling like I have new insights into the world around me.

But when I read an article on something I already know a lot about, I’m often appalled by how much they misrepresent the truth or blatantly take things out of context.

That is not a good sign. I started combing through the archives, trying to find the source of the problem.

Which brings us to Nathan J. Robinson. He is a PhD student in Sociology at Harvard, and the founder/editor-in-chief of Current Affairs. He’s also written or co-written more than half of the articles the magazine has published since February. One of those articles was “All About Pete”.

That’s unusual for a prestige magazine. Jeffrey Goldberg, the editor-in-chief of The Atlantic, published one article in March, a note detailing his reasoning for running a cover piece by Yoni Appelbaum. Before that, his last piece was in 2018. Vox’s editor-in-chief hasn’t written any opinion or analysis since she got the promotion.

Journals do this to avoid potential conflicts of interest. Before an article goes live, it gets reviewed by an editor. That editor needs maximum leeway to require changes, cut sections that aren’t supported, or pull the story if necessary, without any fear of retribution. That leeway is impossible when the writer they’re critiquing is their boss.

Robinson isn’t just writing cover stories for the magazine he chairs. He’s providing close to 60% of the content. Who is editing that content? Who has the authority to require changes if he accidentally misrepresents a source? And if this callous disregard for the truth proves to be systemic and intentional, who has the power to tell him that his writing is not Current Affairs standard?

There is a word for an online platform where one person shares their views on various topics of the zeitgeist, without meaningful editorial oversight. It’s called a blog.

Conclusion: Don’t be Evil

We’re better than this.

We’re the adults in the room. That’s not much of an honor when our president is a racist who misspelled “tap”, but it’s ours. In a world where racism is resurgent and global warming is gearing up to kill us all, we’re the ones with the ideas for how to fix these problems.

We should be able to manage criticizing a politician without lying about them. We should be able to vet our presidential candidates without using black activists as props for an argument they don’t agree with. Neither of these things is hard to do.

This isn’t about whether Pete Buttigieg is a good candidate. Like I said at the beginning, I’m not voting for him. I support Warren. This isn’t about shutting down legitimate criticism of political leaders. There’s a reason I haven’t slammed the article for claiming Buttigieg is light on policy or too soft on our military. I want us to have a vibrant debate about who our next president should be. I want these guys to be vetted.

This is about honesty. The misinformation in this piece devalues everything else Current Affairs has ever published. If you think I’m being too harsh on the magazine here, bear in mind that journalists get fired for mishandling quotes the way this article did, and when they don’t, it’s a huge scandal.

You can dislike Buttigieg. You can decide not to support him. And you can do it without lying.

We’re the adults in the room. There’s something to be said for acting like it.

When English Teachers try to Science

It’s not often that I get mad at an article.

I can get angry at the content. In this day and age, it’s hard not to. I might even get outraged at the arguments they’re making. But I’m not angry at the article itself. I’m mad at what it’s saying, which is similar but not quite the same.

Today, I want to talk about one that left me furious.

Two weeks ago, I was browsing Quillette magazine, the epicenter of the Intellectual Dark Web. For the uninitiated, the I.D.W. is a haven for free thinkers, academics, and intellectuals who feel drowned out or left behind by the stifling PC discourse of our universities. This leads to some… uneven content. Its best is extraordinary. Its worst is abysmal.

When I came across Myles Weber’s article “When a Question of Science Brooks no Dissent”, I thought it was one of the good ones. Quillette writers have done amazing work critiquing the way we approach science today, and how it’s enabled things like the Replication Crisis. I was excited to read this. I was hyped.

Within two paragraphs, that hype turned to bitter disappointment, as I realized I was actually reading a perfect example of the Intellectual Dark Web at its worst.

Before I go over this piece, I encourage you to read it yourself so you know I’m not misrepresenting it. It looks long when you click it, but most of the page is comments.

Speaking Power to Truth

The thesis of Myles Weber’s piece is that there is an unjustifiable degree of climate alarmism in academia and on the left in general. He argues that we have shirked our duty to question the supposed consensus on the impact of global warming, and that many people who should know better have absolutely no idea how any of the science works. He thinks that his fellow professors “forget what our job is: Not to tell the students what to think, but rather to teach them how to think for themselves.”

That’s a serious accusation, especially since he levels it at earth-science professors and his department’s “self-proclaimed expert on climate matters”. And he describes some astounding anecdotes of the scientific illiteracy of his peers. He argues that their devotion to global warming alarmism shows a lack of intellectual curiosity. He argues that his fellow professors are passing that on to our students.

I can’t deny that educated professors should know the basics of how a greenhouse works, or that Minnesota isn’t in danger of an imminent glacial flood. But it’s worth looking at the examples he uses of the questions they should be asking. What does his idea of intellectual curiosity look like?

It turns out his idea looks nothing like actual scrutiny and a lot like tired conservative talking points. Every semester, he gives his students a series of questions on the climate, to demonstrate how much they need to learn to understand the issue of climate change. One example he gives is this: “Which greenhouse gas accounts for more of the tropospheric greenhouse effect than all the other greenhouse gases combined?”. The correct answer is “water vapor”.

This question seems innocuous, but it’s alluding to a common argument from conservative climate skeptics.  It’s been debunked countless times, and we’ll go over it in more detail later, but it’s not good-natured skepticism. It’s propaganda.

Then there is this passage, where he argues that glacier melt is actually a good thing: “Under such conditions, rivers that swell every spring from snowpack melt would stay swollen into late summer from glacial melt. This is almost always a good thing while it lasts since the extra water helps people downstream irrigate their crops. (Moisture trapped in a mountain glacier is useless when it is not downright destructive.) This is yet one more reason why a warming climate is preferable to a cooling one. “

I’ve never seen the glacier variant before, but the argument that Global Warming is either a good thing or at least not bad is as common as the water-vapor myth, and is held even by the higher-ups in our current government.

Finally, there’s his choice of scientific issue to challenge. 97% of climate scientists agree with the scientific consensus on global warming, roughly as many as believe in evolution, and significantly more than believe vaccines are entirely safe. But he harangues his colleagues over the only one of those topics explicitly mentioned in the Republican party platform.

None of this should be surprising. Weber begins his piece, not by talking about scientific illiteracy, but by slamming Barack Obama for politicizing the Sandy Hook Massacre in 2012. Apparently, only 6 days after the atrocity, he was calling on foreign diplomats to honor the dead children by fighting Global Warming.

This non sequitur betrays his political agenda. A dispassionate skeptic would not spend nearly 500 words attacking a former president for politicizing a tragedy six years ago, in an article about academia. More than that, the vignette sounded off to me. Barack Obama was not a perfect president, but this did not sound like the man who broke down in tears in his response to the massacre.

I found a transcript of the remarks Weber was talking about. I encourage you to read them for yourself. They are eloquent and insightful, a sermon on the fundamental experiences that we all share, on the capacity of tragedy to bring out the best in people, and on how important it is that we remember that unifying force as we face the new, global challenges of the 21st century. I was moved.

He also barely mentioned global warming. It was only one of several examples of what we must unify to face, an afterthought in his remarks. Despite quoting extensively from the speech, nothing Weber said about it was true. He even got the date wrong. The Sandy Hook massacre happened on December 14th, 2012. He said Obama gave the remarks six days later, on the 20th. But according to the presidential archives, he gave that speech on the 19th. I’m not sure how he missed it, but it was probably an honest mistake.

In an interesting coincidence, the conservative magazine “The Washington Examiner” ran an article about the speech that implied Obama used it to make Sandy Hook about global warming, and that article came out on the 20th.

That’s Not How Science Works

The fact this piece is naively parroting Republican talking points doesn’t disprove its core thesis that we shouldn’t “brainlessly push climate-change alarmism.” Having a conservative agenda does not mean you’re wrong.

Being wrong does. Weber’s article, which takes his colleagues to task for their scientific illiteracy, makes grievous errors every time it turns to scientific topics.

Take the water vapor question I mentioned earlier. The implication is that, because water vapor is more important in shaping our climate, we don’t have that much to worry about from a small increase in carbon dioxide.

There are three issues with this line of reasoning. First, if there were no greenhouse effect at all, the earth’s average temperature would be about -18 degrees Celsius, 33 degrees colder than it is today. Carbon dioxide and methane may only account for ten of those 33 degrees, but it turns out you don’t need to make the earth colder than summer on Mars to royally fuck up human civilization. Upping the carbon dioxide is enough on its own.

Second, more carbon dioxide in the atmosphere leads to more water vapor too. As carbon dioxide traps heat in our atmosphere and warms the planet, the oceans warm up, too. As they warm up, more water evaporates and becomes water vapor in the atmosphere. And as Weber presumably learned in the fourth-grade science class he assures us he didn’t sleep through, hot air holds more vapor than cold air.

Third, there is a special, highly technical process in climatology that regulates the amount of water vapor in the atmosphere and prevents it from causing catastrophic problems like carbon dioxide does, at least on its own. It can amplify already-existing warming trends, but it could never cause them, because this obscure process prevents it from building up in the atmosphere for long.

It’s called “rain”.

I wish this were the only time he mangled basic science to prove a point. But he seems incapable of getting anything right once he starts talking details. A few of his other errors include:

He confidently explains how increased glacial melt is a good thing, because it leads to a longer swell season downriver and helps people with their crops. Apparently, he hasn’t given any thought to what happens when there isn’t any more glacier to melt. Once it’s gone, there’s no more swell season downriver because there is no downriver: the water dries up as the glaciers disappear. Forget the flood danger, those farmers will lose their water supply.

In his conclusion, he describes a time he asked a climatologist to describe a foolproof experiment to prove humans cause global warming, if money weren’t an issue. The climatologist can’t think of one. When pressed for an example, Weber says such an experiment could involve the Antarctic ice shelf. He explains that West Antarctica and the Peninsula should be warming more slowly, since they’re surrounded by moderating ocean and have more water vapor, and East Antarctica should be warming the fastest, since it’s far from the oceans and cold enough to have little vapor. Since that’s not what’s happening, global warming seems dubious.

This is the opposite of the truth. We’ve known East Antarctica was more stable than the rest of the continent for decades, because of how the ice sheet interacts with bedrock. The bedrock is higher in the east, which means it’s harder for water to get underneath the ice sheet and accelerate the melting process. No such luck in West Antarctica. He got the expected outcome backwards.

Early on, he mocks his colleague for believing their Minnesota town is threatened by climate-change related floods, due to glacial melt. While he’s correct that glacial melt isn’t the problem, global warming actually does mean greater flood risk in the Midwest. Warmer ocean temperatures lead to larger storms and longer storm seasons, creating both more snowmelt in winter and more water on top of that in spring. This has caused record floods along the Mississippi and its tributaries in 2011, 2014, 2016, and 2018.

It’s also why there’s an active flood warning in his county right now.

[Screenshot, April 5, 2019: the active flood warning for his county]

Weber not only betrays a lack of understanding of science and the impacts of global warming, he also displays little knowledge of the scientific method. It’s telling that, when asked to give an example of a hypothetical experiment to prove or disprove global warming, he gives one data point that is still subject to environmental factors. He seems fuzzy on the difference between “experiment” and “argument”.

Perhaps I should cut Myles Weber some slack. After all, he’s not a climatologist. He’s not a meteorologist. He’s not a scientist at all. He’s an English professor. His scientifically illiterate colleagues are English professors. I doubt any of them have taken a science class since the Reagan administration.

What sets him apart from his colleagues is another philosophical virtue: intellectual modesty. His colleagues know that they don’t know shit about climate science, so they blindly trust the countless researchers who do. Myles Weber, however, believes that he is more qualified to discuss the topic. He believes his skeptic’s mind gives him all the tools he needs to evaluate climate science, despite not referencing any scientific studies in his 3,000 word thinkpiece.

It’s unfortunate, because anyone who’s taught an undergraduate English course ought to know the dangers of confidently arguing something when all you’ve read about it is online summaries.

Play Stupid Games, Win Stupid Prizes

There is nothing remarkable about some dude writing an inane hot take about global warming. What struck me about this piece wasn’t the ignorance or the lack of self-awareness, but how petty it was.

It’s one thing to hypocritically complain about your colleagues in private. It’s another to do it publicly, on a major website at the heart of a political movement. He publicly shames his own coworkers for not remembering how greenhouses work, accuses them of being dumber than 10-year-olds, and does it all with the absolute certainty that his tragic misunderstanding of the sciences is correct.

In the words of one of the 21st century’s great philosophers, “Don’t be clowning a clown, you’ll wind up with a frown.” If Myles Weber wants our political discourse to be more petty, I am happy to oblige.

I’ve already mentioned his strange choice of intro topic: a six-year-old speech by Barack Obama that had nothing to do with global warming. But it’s even worse when you realize he’s an English professor who ought to know better. The introductory paragraph of an essay is supposed to pull the reader in, explain your thesis, and provide a road map for how you’re supporting that thesis. Two paragraphs in, I’m not curious, just confused. I don’t know what he’s arguing, or how he’s gonna support it.

In his second paragraph, he says the punishment for treason is “if I’m not mistaken, death by firing squad.” He’s mistaken. US law doesn’t specify a method of execution for each crime, and most states use lethal injection. Only three people have been executed by firing squad since 1960, all of them in Utah. So you can add the US penal code to the growing list of topics this man knows nothing about but has opinions on anyways.

One more stylistic note: he has a “here’s my point” sentence, which is lazy writing on its own. Even worse, it’s in the last section of his article, only 3 paragraphs from the end. You shouldn’t have to tell your audience what your point is 80% of the way through your essay. If they haven’t figured it out on their own by then, you have bigger problems.

Perhaps the most poignant section of Weber’s piece is a vignette about a time he had dinner with an academic acquaintance. Weber turns the conversation to the scientific illiteracy of his colleagues, and the acquaintance makes a dismissive statement and changes the subject. Weber calls him on the fallacy, and rather than engage in debate, the acquaintance moves on and doesn’t call him again.

You can feel the discomfort of his poor dinner date in that passage. Here is someone who probably just wants to network, stuck in a restaurant with a man who won’t stop ranting about global warming and complaining about how stupid his colleagues are. He tries to change the subject, but the tenured professor won’t let him.

Earlier, Weber refers to the colleague who forgot how greenhouses work as “our department’s self-appointed expert on climate matters”. From what he’s said of how he talks to his students and coworkers, I am 99% certain that the only self-appointed expert in his department is him.

Finally, I want to talk about the stylistic choice that makes Myles Weber’s piece the peak of pretentiousness in academia: the way he spells “academia”.

Or rather, doesn’t: he uses “academe” instead, a word I’d never seen before. At first, I thought, “oh, he’s probably using the pretentious original Latin term to be technically correct”. But I was mistaken. “Academia” isn’t just correct English, it’s correct Latin. So where did “academe” come from?

Well, it IS technically a word, but not a common one. It’s a synonym of “academia”, and doesn’t add any nuance or specificity beyond that. There’s no reason to use it, unless you want to show off how many big words you know.

It’s also synonymous with “Pedant”.

Halfway Around the World

As fun as it is to mock morons for believing stupid things, this article and its many flaws should be sobering to all of us. Mark Twain once said “A lie can travel halfway around the world before the truth can get its boots on”, and never has that been more true than in today’s massively online age. This misinformed piece has nearly 400 comments. It’s been read by thousands, even tens of thousands, of people. It came out, they read it, internalized all the nonsense it spewed, and moved on, all in just a few days.

And it took me two weeks to research and finish this response.

Today, two weeks is an eternity. Two weeks is longer than the lifespan of most memes. Reddit threads and Facebook posts disappear from your news feed in a matter of hours. By the time other thinkers can prepare their critiques, the misinformation has already come and gone. It’s old news, accepted into the general narrative, and attempts to correct it come across as necroing old threads that aren’t relevant anymore.

But they are relevant. Just because hard research moves at a comparatively glacial pace doesn’t mean it’s any less crucial today. Misinformation thrives on our impatience. It’s how, six years later, a tenured professor still believes that Barack Obama politicized Sandy Hook to push climate action, even though he never did. It’s how this article manages to change minds and move the conversation even though none of it is true. It’s how I got away with using that quote about lies traveling halfway around the world, even though Mark Twain didn’t actually say that.

In the end, Myles Weber and I agree. Healthy skepticism is a good thing, and something isn’t true just because a bunch of experts in white coats say it is.  As long as the internet is free, there will be morons out there using it to peddle pseudoscientific dogma, and when fact-checkers can’t keep up, we have an obligation to watch out for them on our own.

As a rule of thumb, they’re usually the people urging you not to believe the experts.

A Non-Fan’s Guide to the Marvel Cinematic Universe before Avengers: Endgame

Avengers: Endgame releases in 28 days. It is going to be a cinematic phenomenon on par with the first Star Wars reboot. But it’s also the 22nd film in an increasingly interconnected series. And if the last one is anything to go by, the Russo Brothers will not be giving us much exposition to help people who aren’t caught up.

Not all of us have seen every movie in the Marvel Cinematic Universe, or remember every detail. And with 20 films out already and another in theaters, it’s not feasible for people to go from 0 to 100 in time for the release date. But not all those movies are necessary to get the later ones. The story of the MCU isn’t a straight line like the Harry Potter series. It’s more like a tree, which branches out over time. And some of those branches are dead ends.

This guide will explain which movies are dead-ends, and which ones are must-watch. It will also cover the “summary” movies, movies which are relevant to the ongoing story, but in small enough ways that you can get by with a quick refresher. Ideally, you should be able to understand everything happening in Avengers: Endgame, just by watching all the “Must-Watch” movies on this list and reading the summaries I’ve written for the others. And because people do have social lives, there will be only 8 must-watches. That’s few enough that, if you start this weekend, you only have to watch one movie every Friday and Saturday evening. Steep, but manageable, and even easier if you’ve already seen one or two of the must-watches.

Bear in mind: this is NOT a list of which Marvel movies are the best, or most important. This is a guide for how to get up to speed in time for Avengers: Endgame. That means cutting out some movies that are otherwise very good, but ultimately skippable within the current continuity. If you go by this list, you will miss four of my top five MCU flicks. So, if you enjoy the ones you do see, consider going back and checking out the ones I’ve only summarized.

With that in mind, let’s dive right in.

Iron Man (2008): Must Watch

You can’t skip the first movie in the MCU. It’s what sets the stage for all the rest. You need to know who Iron Man is, and what makes him tick, because he’s a critical character in all the later movies. But more importantly, you need to get used to the tone of these films, and how they start to fit together. Do not miss this one.

The Incredible Hulk (2008): Skip

Not only does this movie not add any important detail to the continuity, watching it will actively make you more confused. It is the easiest skip in the entire series.

Iron Man 2 (2010): Summary

SOME SPOILERS AHEAD: Iron Man 2 is almost completely skippable, but there are a few details you need from it. The important parts are:

  1. Pepper and Tony Stark are dating now.
  2. His buddy Rhodey now has his own suit, called War Machine, that’s based on Tony’s old tech.
  3. There’s a badass secret agent working for Nick Fury, played by Scarlett Johansson. Her name is Black Widow.

Thor (2011): Must Watch

Thor is a kinda unremarkable movie on its own, but, and I speak from experience here, if you try to watch the Avengers without seeing it first, you will be VERY confused. Thor and Loki don’t get much of an introduction in The Avengers, and you need to know who they are. A summary won’t do it. Just watch this one.

Captain America: The First Avenger (2011): Summary

Captain America, on the other hand, gets a pretty good intro in the next must-watch movie, so you don’t need to see this one as badly. The important details:

Captain America is a super-soldier created during World War II to fight the Germans. Specifically, he got sent after their special science division, Hydra. Hydra had a blue cube called The Tesseract that they used as a power source for all kinds of lasers and shit that you don’t want Nazis to have. The good guys win, but his best friend Bucky gets killed and the Tesseract gets lost in the ocean. You’ll understand how he’s still alive in the present when you watch the next movie.

Marvel’s The Avengers (2012): Must Watch

Chances are, you’ve already seen this one, because everyone’s seen this one. But on the off chance you haven’t, yes, you have to see it before Avengers: Endgame.

Iron Man 3 (2013): Skip

There is precisely one important thing from this movie: Tony Stark has made a LOT of Iron Man suits now, and he’s gotten really good at it. He can also get one on and off with basically no outside help now.

Not only is there nothing else of significance in this movie, other plot points in it will actively confuse you without some added context. It’s a good movie, but skippable.

Thor: The Dark World (2013): Skip

This is probably either the second- or third-worst movie in the MCU, and is famous for not changing the greater story at all. The only thing you need to know is that something called an “Infinity Stone” is now in the hands of a creepy dude called “The Collector”.

Captain America: The Winter Soldier (2014): Try to Watch

If you’ve already seen one of the Must-watch movies and you still have time, this is the one you should replace it with. It’s amazing and important, but not /quite/ as important as some of the others. Key details:

Remember how Captain America’s friend Bucky was killed? Just kidding. He’s alive, he’s got a super-strong metal arm, and he’s been brainwashed into being a villainous assassin. This guy:

[image: Bucky, the Winter Soldier]

You know S.H.I.E.L.D., Nick Fury’s organization with the helicarrier? It’s collapsed. It is no more. And since it was the one keeping Loki’s scepter (from The Avengers) safe, the scepter is now lost.

Black Widow and Steve are good friends now.

And Captain America has a new sidekick now, too. His name is Sam Wilson, AKA the Falcon, and he’s got this fancy wingsuit that lets him fly. He’s also got machine guns. He’s not a /major/ figure in the later movies, but keep an eye out for him. This guy:

[image: Sam Wilson, the Falcon]

And seriously, just watch this movie if there’s any way you can fit it in.

Guardians of the Galaxy (2014): Must Watch

This movie is about an entire team of other heroes, separate from the Avengers, and you aren’t getting any better introduction to who they are. It also covers a pile of details that will be important in later movies. Don’t miss it.

Avengers: Age of Ultron (2015): Must Watch

It’s a close call between this one and Winter Soldier, but ultimately the characters you meet in this movie are more important to the rest of the plot than the ones you meet in Winter Soldier, AND it does more to set up future movies. You will definitely be confused by later movies if you skip Age of Ultron.

 

Ant-Man (2015): Skip

Ant-Man is a dude named Scott Lang who has a suit that can make him shrink. He’s a cool guy.

[image: Ant-Man]

He’s also had some run-ins with the “Quantum Realm”. Basically, if his shrinking tech breaks the right way, he’ll just keep shrinking until quantum physics takes over and he’s in this weird other world with different laws of physics from our own. It’s really trippy, and it might be important later.

Captain America: Civil War (2016): Must Watch

There is so much in this movie that you need to know to understand what’s happening later on. It’s basically Avengers 2.5. Do yourself a favor and watch this one.

Doctor Strange (2016): Summary

Doctor Strange is a really important character in future movies, and you will be confused if you don’t watch this one. But he’s only one character, and other movies are more important. After Winter Soldier, this is probably the one you should see for maximum comprehension. Here are the details:

[image: Doctor Strange]

This is Doctor Strange. That isn’t a codename, he’s a literal doctor named Steven Strange. Because of shenanigans, he becomes a very powerful wizard charged with protecting the earth from magical threats. He’s got a sidekick named Wong who is also a sorcerer, and a living cape that has more personality than half the villains in these movies. Oh, and you know those Infinity Stones everyone keeps talking about? He wears one around his neck like some all-powerful bling. It’s the Time Stone.

With that, a quick recap on Infinity Stones. There are six of them in total. We know where five of them are. The Tesseract (Space Stone) is stashed away in Odin’s vaults on Asgard. The Reality Stone is with the Collector, that dude from Guardians of the Galaxy (mentioned in Thor 2). The Power Stone is kept in a vault by the Nova Corps, also from Guardians of the Galaxy. The Mind Stone is in Vision’s forehead. And the Time Stone is wonderfully accenting Doctor Strange’s aesthetic. The sixth is MIA.

Guardians of the Galaxy, Vol. 2 (2017): Summary

Key details:

Remember Nebula, the blue cyborg lady? She and Gamora have made up. She’s got a ship and she’s trying to hunt Thanos.

The Guardians have a new member now. Her name is Mantis, she has the power to make people sleep by touching them and to read their feelings like telepathy for emotions, and she is the most adorable special snowflake who must be protected. This is her:

[image: Mantis]

Spider-Man: Homecoming (2017): Skip

This movie is amazing and you should definitely go back and watch it. It also has basically nothing important to the greater story. You already know Spider-Man, because he was in Civil War. There aren’t any new characters. But seriously. It’s really fun. And you should see it when you get the chance.

Thor: Ragnarok (2017): Must Watch

If you skip this movie, when you start Infinity War, you will have no idea what is happening. It’s also probably the funniest movie in the entire MCU. You can’t miss it.

Black Panther (2018): Summary

It hurts to say it, because this movie is amazing, but like Spider-Man, you already know this guy from Civil War. You should see it, like, if you haven’t already you’re missing out, but you don’t have to see it before Endgame.

Key details:

Black Panther’s country of Wakanda isn’t actually a poor third-world nation. It’s African Atlantis, a hidden super-advanced civilization where everything is made of Vibranium, like Captain America’s shield. He’s got a little sister named Shuri who makes all the tech; she’s super smart and super snarky. The captain of his personal guard is a bald warrior woman named Okoye, this lady:

[image: Okoye]

Honestly, just watch this movie if you can. It’s so good.

Avengers: Infinity War (2018): Must Watch

I know. Shocker.

Ant-Man and the Wasp (2018): Skip

Remember that Quantum Realm stuff? He has a portal that can take him there and back safely now. Oh, and that thing at the end of Infinity War happened to his sidekicks too, and he’s stuck in the Quantum Realm because of it. That’s all.

Captain Marvel (2019): Summary

This is an important one, but it’s still in theaters, so watching it is a bit more of a hassle. Key details:

[image: Captain Marvel]

That’s Captain Marvel. She went through some shenanigans and came out with all kinds of super powers. She can fly, breathe in space, I’m pretty sure she’s bulletproof, she can shoot giant destructive laser beams from her hands, she’s super strong, super fast, she’s got a Thor-level power set. She’s been away from Earth since the 1990s, but back then she was good friends with Nick Fury. Remember the Kree, bad guys from Guardians of the Galaxy 1? She’s got a history with them. It’s complicated.

Avengers: Endgame (2019): Must Watch

If you want to understand what happens in Avengers: Endgame, watching Avengers: Endgame is probably a good idea.

What to Watch if You Have Extra Time

If you only see the Must-Watch movies in this list and read the summaries for the rest, you will probably understand 95% of what you see in Avengers: Endgame. But understanding is not the same as appreciating, and there is a LOT of extra detail and character development that you’ll miss if you stick to this list. So if you have extra time, or if you’ve seen some of the must-watches, here’s a rough list of which movies you should try to squeeze in.

  1. Captain America: the Winter Soldier. It’s by far the most important one of these movies if you want to understand what happens, and it’s one of the best movies in the MCU. It’s your top priority.
  2. Doctor Strange/Black Panther. Doctor Strange plays a bigger part in the next few movies, so you’ll be less confused if you watch it. On the other hand, Black Panther is a better movie, and it’s kind of a cultural phenomenon on its own. Depending on your priorities, you should pick one or the other.
  3. Captain Marvel/Spider-Man: Homecoming. Captain Marvel is still in theaters, of course, but she’s probably gonna be important in Avengers: Endgame, and she isn’t in any of the other movies. On the other hand, Spider-Man: Homecoming is a better movie overall, and it’s easier to find.
  4. Ant-Man and the Wasp. If you do this one, read up on what happened in the first one in more detail than I gave. There are a lot of characters and abilities that are important in the Ant-Man movies but don’t come up in the crossovers. I only mention it because it’s on Netflix right now.

After those, you can really add any movie you want. Just make sure, if possible, to watch them in the order they were released, because it gets complicated otherwise. Also, add Thor 2, Iron Man 2 and 3, and Hulk last. They add the least to your overall viewing experience, and they’re all worse than the other skip movies.

 

The Marvel Cinematic Universe can be daunting for new viewers. There are so many movies, and they’re so connected, that it’s easy to get lost in the continuity or to give up before watching one of the main events.

But it doesn’t have to be. Even now, a month away from the Endgame, there is still plenty of time to get on this hype train before it leaves. You might not be able to get the full Marvel fan experience, but if you haven’t already seen some of these movies, chances are that’s not something you want anyways. And if you follow this guide, you’ll be fully up to speed by the time Endgame releases, without even having to spend the entire time glued to a screen.

An Elegy to the Weirdest Dude in American Politics

Earlier this month, Lyndon LaRouche died.

You’ve probably never heard of him. But you should have. He was the best presidential candidate in American history. Or rather, he was the best at being a presidential candidate in American history. He ran for the office eight times, a national record. In every election from 1976 to 2004, he was a candidate. His base was small in numbers but big in enthusiasm, and their support kept him relevant despite never winning a race for any public office.

Remember those posters of Obama with a Hitler Mustache that went viral during the Obamacare debates in 2009? That was him. But don’t confuse him with right-wing lunatics like  Steve Bannon or Steve King. Lyndon LaRouche defied any attempt to fit him into a political box.

His platform changed significantly over the course of his 50-year career in the spotlight, but generally speaking, he and his devoted followers supported investment into nuclear power, a return to a commodity-based monetary system (think the gold standard) and fixed interest rates, defending our way of life against an international conspiracy of the global Aristotelian elite that is led by Her Majesty Queen Elizabeth II to control the world through a combination of terrorism and the drug trade, and changing the pitch we use to tune our instruments to be slightly lower.

It is easy to read that and dismiss him as “what you get when you give a scientologist two pot gummies and lock them in a room with a flat-earther”, or “the most batshit insane conspiracy theorist I have ever seen” or “That Guy”. But it would be an injustice to lump him in with the likes of L. Ron Hubbard and Alex Jones. They don’t hold a candle to him.

Today, I will tell you the tale of Lyndon LaRouche. We will explore his life, his finest achievements and his greatest defeats. We will examine his worldview, and where his bizarre ideas and priorities came from. We will uncover the secrets of his enduring appeal. And we will examine the times where he actually got it right.

Which is more often than you might think.

1. “Sing in me, Muse, and through me tell the story of that man skilled in all ways of contending”

Early Life

Lyndon LaRouche was born in Rochester, New Hampshire, in the year 1922. His parents were Quakers, who forbade him from fighting other children, even in self defense. This consigned him to a neverending torment of bullying throughout his early years. He took to wandering alone through the woods, and threw himself into the comforting grip of books, particularly philosophy. From his 1979 autobiography: “I survived socially by making chiefly Descartes, Leibniz, and Kant my principal peers, looking at myself, my thoughts, my commitments to practice in terms of a kind of collectivity of them constructed in my own mind.”

He was, in short, a geek.

After high school, he attended Northeastern University in Boston, but was “disgusted with the methodological incompetence of most of [his] courses”. He left in 1942.

From there, he wandered through the maze of far-left groups that existed among students at the time. He joined the Socialist Workers Party in 1949, but dismissed them as merely “a custodial staff keeping premises warmed and aired out for the arrival of actual revolutionary leaders”. His contemporaries described him as having an extraordinary breadth of knowledge and “a marvelous ability to place any world happening in a larger context, which seemed to give the event additional meaning”. But the same accounts said his analysis was only skin-deep. His ideas were often contradictory and lacking in detail.

He was also a clear egotist. While a Marxist and Trotskyist by name, LaRouche fixated on their discussion of the elite intellectuals who would join the working class’s revolution. He thought they were talking about him. He believed that he was that philosopher-king who could lead the masses to victory in the US as Lenin had in Russia.

These impulses magnified through the 1960s and early 70s. He took to savagely critiquing his fellow leftists for their past disagreements with him, asserting that history consistently proved him right (note: Lyn Marcus was his pen name at the time). He predicted economic depressions and imminent fascist overthrow and communist revolution. This made him a polarizing figure on the left, with a few ever-more-devoted adherents being counterbalanced by total abandonment from the rest of his audience.

The 1970s: a Leader Emerges

In 1973, he formed the first of his many organizations: the National Caucus of Labor Committees. It’s worth reading over its founding principles for yourself, but I’ve picked out the most important:

12. “Therefore, the political existence of the working class depends upon the intervention of an “outside agency,” whose function it is to bring the political (working) class for itself into being. This “outside agency” can only be a social formation which has already attained an advanced approximation of the working-class consciousness which the working class itself lacks. Only a handful of the capitalist intelligentsia is capable of fulfilling this decisive role, by combining an anti-capitalist political and social orientation with the mastery of history, sociology and economics from the standpoint of the dialectical method.”

17. “While the cadre organization must submit to the class interests of the potential political (working) class for itself, that means and demands insulating the vanguard organization from corrupting intrusions of reactionary (bourgeois) ideology dominant among working people generally, oppressed minorities, and radical students, etc., in a capitalist society. Realization of socialist conceptions means that alien political ideas have ipso facto no voting rights over the formulation of policy within the vanguard organization. It means that the less-developed consciousness of socialist principles must be subordinated to the most-advanced consciousness within the organization.”

In other words: LaRouche and his followers didn’t think an organic labor revolution was in the cards. They believed that the workers of the world needed to be united by an outside force of intellectuals. Only a few special minds would be up to the task. In practice, that meant Lyndon LaRouche and those who agreed with everything Lyndon LaRouche thought.

What’s more, the closing-off of dissenting voices was a foundational idea in the LaRouche movement. For the revolution to succeed, it had to be protected, even from the people it was ostensibly for.

The NCLC quickly took on the trappings of a cult. In 1974, the New York Times wrote of its practices:

“Total commitment is required for members. Jobs and families are expected to be secondary to party work; almost all members who are married also have spouses in the movement.”

It also describes a darker side. The early NCLC was obsessed with brainwashing, and LaRouche himself participated in a “deprogramming” incident with a member named Christopher White. He taped the entire affair, and in it one can hear sounds of weeping, vomiting, pleas for mercy, and LaRouche’s voice saying “raise the voltage”.

He probably selected White because he was British. Britain was a boogeyman to LaRouche, and remained so until his death. He believed that Imperial Britain never truly died, and that it continued to secretly fight to sustain capitalism. He believed that it and its allies controlled vast swathes of the world including the US intelligence agencies. And since he believed that he was the only one who could lead the revolution, they obviously were preoccupied with assassinating him.

Note: interestingly, the FBI really WAS monitoring LaRouche, and some agents even proposed taking steps to help the US Communist party eliminate him. The Bureau has long abused its authority to harass and disrupt both violent and non-violent leftist groups, and attacking LaRouche would not be out of character. I am, however, somewhat more skeptical that their tactics included mind-control.

By the mid-70s, LaRouche had abandoned any pretense of alliance with other leftist groups. Shortly after its formation, the NCLC would begin “Operation Mop-Up”: packs of LaRouchies would roam the streets of New York, beating to a pulp any members of rival leftist groups they found. One harrowing account was printed in the Village Voice.

He also began to reach out to a different kind of extremist group: far-right fascists. He abandoned Marx (note: if you value your non-aching head, do not try to read that), became a vicious anti-environmentalist, and made overtures to groups like the KKK. This led many outside the movement to say he became a far-right fascist, though he continued to find allies at the fringes of both the left and right for the rest of his career.

Before we leave this era, I should mention that it saw his first Presidential campaign, in 1976. His platform predicted the apocalypse in less than 2 decades if he did not win. The campaign also featured a paid half-hour address on prime-time TV, which would become a mainstay of his candidacies.

He received just over 40,000 votes nationwide.

The 1980s: Pride and a Fall

LaRouche reached the height of his power during the Reagan administration. His organization moved from New York to a mansion in the sleepy town of Leesburg, Virginia. They turned it into a fortified compound, guarded by camouflaged devotees armed with guns. They harassed the locals, accusing the local garden club of being a Russian PsyOp and forcing one lawyer to abandon the town.

In his greatest electoral achievement at the state and federal level, LaRouche-affiliated candidates managed to win the Democratic primaries for both Lieutenant Governor and Secretary of State in Illinois. The Democratic candidate for Governor declined to run alongside the LaRouchies and switched to a third-party bid. The Republican ultimately won the race.

At the same time, he began to spread the most pernicious lie of his career: that AIDS wasn’t sexually transmitted, but could be caught from just a cough or a touch, and that victims should be quarantined for the good of the public. He put a proposition on the ballot in California to ban AIDS patients from holding jobs and their children from attending schools. While it was defeated, it got over a million votes. It seemed only a matter of time before his movement would win one of these races.

This success would not last. The International Plot had finally found his weakness: the money.

Throughout the 1980s, the FBI had quietly been investigating LaRouche and his organization for a host of alleged financial crimes, including widespread credit-card fraud. As the decade wore on, the list of active investigators expanded to include both the IRS and the Federal Election Commission.

In 1986, the FBI raided LaRouche’s offices in both Virginia and Massachusetts. They found extensive evidence of a host of financial crimes, enough for a Boston grand jury to indict LaRouche and the leadership of his movement. The prosecution, led by a young US Attorney named Robert Mueller, alleged that LaRouche’s organization had committed over 2,000 cases of credit card fraud, and made extensive efforts to obstruct the investigation and destroy evidence. The defense lawyers described (accurately) an extensive campaign of harassment against the organization by the FBI. The trial began in early 1987, and dragged on well into 1988. In the end, the judge was forced to declare a mistrial: the case had gone on for so long, and enough jurors had been excused for having to return to work, that the court could no longer maintain a jury.

The case then moved to Virginia, where it progressed more quickly. It also took on more charges: for some reason, the government felt it was strange that a man who lived in a fortified mansion hadn’t filed a tax return in a decade.

LaRouche was ultimately convicted on fourteen different counts of varying forms of conspiracy and fraud, and sentenced to 15 years in prison.

The Long Goodbye

Incarceration did not stop Lyndon LaRouche. He received daily intelligence reports from his organization while in prison, and even ran for president from a jail cell in 1992. He also shared a cell with the infamous televangelist Jim Bakker, who later remarked that “to say LaRouche was a little paranoid would be like saying that the Titanic had a little leak.”

LaRouche was released early in 1994. But it was never quite the same. His organization had been hollowed out by the investigation, and what was left had atrophied without his peculiar charisma.

At first, he threw his resources behind a campaign to exonerate himself, but it sadly failed. With that failure, he turned towards 9/11 conspiracy theories and other, almost run-of-the-mill fringe beliefs like global warming denial.

He also began to suffer a clear mental decline. His writings had never been the clearest, but they grew more and more bizarre, almost nonsensical. They lurched uneasily from discourse on sense-perception to asides on Truman’s presidency, with little rhyme or reason.

There are three remaining events of import in his story.

The first was his accurate prediction of the 2008 financial crash. It’s a real achievement on his part, albeit watered down by the fact that he had been predicting the crash was imminent for several decades. Better late than never.

Second, he was a pioneer of the most pernicious smears of the Obamacare debates. I mentioned the Hitler-mustache posters earlier, but he was also an early adopter of the death-panels myth, referring to the healthcare bill as “genocide” months before the Republicans caught on. It would later become PolitiFact’s “Lie of the Year”.

Finally, his last political act was to throw his support behind Donald Trump’s campaign. Because of course he did.

LaRouche died on February 12 this year. He was 96. He leaves behind a tight-knit group of several thousand devotees, many of whom have stuck with him since he first founded the NCLC.

For his entire career, LaRouche was an enigma to those of us on the outside. His bizarre mix of issues and theories seemed to have no rhyme or reason whatsoever. But there was a method to the madness, and a reason why so many people stayed so loyal to him for so long.

2. “The Good therefore may be said to be the source not only of the intelligibility of the objects of knowledge, but also of their existence and reality”

In order to grok Lyndon LaRouche, you first have to grok his worldview. And for that, we have to go back all the way to ancient Greece, and the discourse of Plato and Aristotle. I am not going to explain every aspect of the two philosophers’ beliefs, but the most pertinent debate covered the obscure and minor subject known as “the nature of reality”.

Plato believed in an idea of the “forms”: roughly, our ugly, imperfect, and chaotic reality is a reflection of a set of greater Ideas. The chair I’m sitting in is just an imperfect realization of the Idea of “chair”. He described our struggle to understand and contemplate those forms in the famous “Allegory of the Cave” that you probably read in high school.

Aristotle took a more naturalistic approach. He did not believe that there was some perfect representation of “chair” out there for us to understand. He saw these as unproductive thought experiments. He preferred to ground himself in observations of the natural world around him.

Lyndon LaRouche has turned that dispute into the driving force of all of human history since. To him, every intellectual in every discipline has, without realizing it, been a follower of Plato or Aristotle. The pursuers of the Ideal, and the Church of Sense Perception. Beethoven, Shakespeare, and Kepler are Platonists. Kant, Locke, and Hobbes are Aristotelians.

He also takes a side. The Platonists are right. The Aristotelian tradition of pure naturalism has led us astray. He blames the failed leadership of the Aristotelians (who are usually oligarchs) for most, if not all, of the world’s ills. To him, the intellectual descendants of this Greek philosopher are powerful and power-hungry, and we must stop them at any cost.

And Lyndon LaRouche is the only man who can do it.

Many have tried to label LaRouche as either a “Marxist” or a “Fascist”. But they never fully fit. He is at times one, and then the other. His political ideology does not match any other movement in our history, because philosophically, he acts on a different axis. They, like the poor souls in Plato’s cave, see only the shadows his worldview casts on the wall. The light flickers, and the shadow changes from left to right and back again, but the object stays the same.

Likewise, the many accusations of anti-semitism, racism, and homophobia fail to land. While he may have been all three, and he certainly used racist, homophobic, and anti-semitic language and pursued hateful policies against each, such concerns were tangential to him. It’s the reason why he managed to lead a movement with so many Jews it alienated the KKK while authoring pieces titled “My View on the Jewish Question”. He didn’t use pseudo-philosophical conspiracy theories as coded language for antisemitic beliefs. He used antisemitic rhetoric as coded language for his pseudo-philosophical conspiracy theories.

Once you accept the core premise of Plato vs. Aristotle, you begin a descent down a truly bottomless rabbit-hole of paranoia and pseudo-history, pseudo-philosophy, pseudoscience, really all the pseudo’s you can muster. Kepler’s theories of planetary orbits become a real-life manifestation of the platonic forms. They, in turn, are a metaphor for the discrete stages of human progress (an idea originally taken from Marx). The deeper you go, the more his bizarre panoply of policies make sense. Take his opposition to environmental regulation and support of nuclear power. If you see the purpose of humanity as unlocking these higher spheres of progress, like expanding out through Kepler orbits, then anything that obstructs our progress is an evil thing. We can’t afford to waste time caring about our impact on the world around us if it would arrest our forward motion intellectually. And we’ll need a lot of energy to fuel our philosophical explorations as more people devote their time to the arts…

If it doesn’t make perfect sense to you, don’t worry. I’d be concerned if it did.

 

3. “All Men by Nature Desire to Know”

It is easy for those of us on the outside to dismiss LaRouche’s ideas as the worst sort of crypto-fascist nonsense and his followers as misguided morons. We all want to think that there is no way we’d ever fall for such blatant malarkey. But the science of cults tells us that belief is as much a fantasy as anything that came out of Lyndon LaRouche’s mouth. If you don’t believe me, I suggest you check out this Ted Talk from a former member of a different cult on how they lured her in, though I should warn you, it has some awfully disturbing imagery.

At its core, the movement’s appeal comes from our own search for meaning. It begins by targeting people who are already struggling with that question: the young and idealistic, the lonely and isolated, and the grieving. Essentially, people who feel in some way unmoored in our large and uncertain world.

LaRouche and his followers take these vulnerable people and give them an anchor. Where the rest of us offer chaos and confusion, LaRouche offers clarity and confidence. He tells you that the world isn’t all that complicated after all. He tells you that the truth is out there, and he can help you find it. Most importantly, he tells you that you are special. That you have a purpose. You were meant to fight in this titanic struggle between good and evil. Only you can learn how the world really works. And your job is to help spread that knowledge to others. All you have to do is listen to the man on the brochures.

The group’s pitch is both personal and interpersonal. Movements like LaRouche’s offer a strong sense of social comradeship. The other members are your tribe now, and they look out for their own. New recruits are encouraged to abandon their old ties to friends and family, replacing them with bonds to fellow LaRouchies. When they do so, they provide the same social validation to the next wave of recruits.

Then there is the initiation process. The other members purge your mind of the preconceptions that might interfere with the movement’s goals, in order to ensure your loyalty. Often, this gets physical (“Raise the voltage”), but not always. It can be as tame as attending a rally under the right conditions. Outsiders call this process “brainwashing” and “programming”. LaRouche described it as bringing out the recognition that “one’s self as presented to the world is not ‘the real me,’ not the ‘soul.'” I use a different term: hazing. We try to exoticize the process by calling it “brainwashing” to convince ourselves that we wouldn’t fall for it, but the truth is that it is not so different from your average fraternity initiation or elaborate team-building exercise. They operate on the same basic principle: validating your sense of belonging to the group through shared suffering or sacrifice.

Whatever our individual differences, all humans are united by a few fundamental desires. We all want to feel like we belong, like we have some identity to be proud of. We all want a sense of purpose and fulfillment in our daily lives. We all want to feel right, like we are a good person, and like we know what to do next. Ideologies like LaRouche’s are a one-stop shop for all of these basic needs. They give you that identity. They give you that purpose. They give you the certainty that you are right, because LaRouche is right, and you’re with him.

Usually, when non-members talk to true believers, we fixate on the contradictory evidence the true believers rationalize away or ignore. We look on in disbelief as they dismiss overwhelming scientific consensus, the apocalyptic predictions that didn’t happen, even proven criminal activity as “fake news” and “propaganda”. But that behavior is only an exaggerated version of impulses we all share. When we are presented with information that contradicts our worldview or identity, we find a way to disregard it. LaRouche and his adherents are just more audacious in their confirmation bias than we are.

This appeal can work on anyone. Intelligence and education are no shield: the first members of LaRouche’s movement were Columbia students. Rich and poor, black and white, men and women, we all share the same cognitive biases and we all are vulnerable to the undeniable pull of a movement that has all the answers.

 

4. “I much prefer the sharpest criticism of a single intelligent man to the thoughtless approval of the masses”

A funny thing happened while I was researching this post: frequently, far more frequently than I ever expected, I found myself agreeing with Lyndon LaRouche.

Sure, he said some crazy stuff. I have barely scraped the surface of the lunacy he espoused over the course of his career. You could fill a book with all the bizarre, absurd, and incomprehensible theories the man has come up with. Many people have, including LaRouche himself.

But the great tragedy of Lyndon LaRouche, what sets him apart from Alex Jones and Louis Farrakhan and the other also-rans of our political fringe, is just how often he actually got it right.

Take a line like this: “automation not only wipes out jobs, it wipes out the need for old-style, repetitive factory labor. In place of production workers, we will need an equal or greater number of engineers and scientists. Our whole educational system will be hopelessly outdated by these changes in the means of production. Educational changes must be made so that we may have the skills we need.”

Today, that sounds like an almost trite observation of our post-industrial economy. Automation and the collapse of manufacturing jobs across this country helped elect Donald Trump. It is a fundamental force shaping our entire culture. And education and re-education are contributing factors: we have far too many unemployed steel-workers, far too few software engineers, and no good way to convert one into the other. Right now, that passage is uncontroversial.

Lyndon LaRouche wrote that in 1954.

Even his core delusion, the Aristotelian conspiracy, is rooted in a real philosophical dispute. Aristotle and Plato really did disagree. And while I’m unconvinced that their dispute is the driving force of history, when LaRouche starts talking about objectivity and its limitations, he makes an uncomfortable amount of sense:

“In reality, what we call “modern science” is a highly subjective business. People who run around talking about “objective science” really show that they don’t know much about the history of science.”

That statement is true. History has shown us that, while the scientific method may be immune to bias, its practitioners are not. If science and its practitioners really were objective, it wouldn’t advance one funeral at a time.

Note: in the section after that quote, he says a cabal of gay British Aristotelians are concealing the evidence that cold fusion is real and also that rock music is a Satanic plot. I didn’t say the man was perfect.

Every account by outsiders who knew LaRouche describes astonishment at the breadth of his intelligence. He was well-read, sharp-witted, and at times downright insightful. I don’t know whether his paranoia was built into his genes or if some therapy and the right environment of fellow thinkers might have tamed his worst impulses. But I do know that buried within the lunatic was a true visionary.

5. “There is a strange charm in the thoughts of a good legacy”

History will not be kind to Lyndon LaRouche. His lies have done too much damage for him to be more than a figure of scorn, if he is remembered at all. His legacy will be the violence of his followers, his misinformation campaigns about AIDS and Obamacare, and his climate change denialism. That is a good thing. The pain they have caused has already outlived him, and will continue to haunt us for decades to come.

And yet, it feels wrong to celebrate his departure, to reduce him to his greatest crimes. He was more than just another cultist or con-artist hovering at the fringes of our politics. He was a living contradiction, a proof that our understanding of our own culture isn’t as concrete as we’d like to believe. He made allies of black nationalists and the Ku Klux Klan. He was completely insane, but his brainwashed followers are probably better-versed in the classics than you are. He was a politician, a philosopher, a cult leader, a communist, a fascist, a felon, even (horrifyingly) a poet. He was, by a significant margin, the weirdest person in American politics.

Lyndon LaRouche died at the age of 96. For the first time in a century, we are living in a world without him. It’s a different world from the one we lived in before. It may be better off for his absence. It probably is. But whatever else, it is certainly less interesting.

Conservapedia, or What Might Have Been

As you’re reading this, chances are you have consumed some content from Wikipedia today. It powers the search results for Alexa and Siri. It has articles on everything from Snooker to the Vice President on the TV show “The West Wing”. It is the 5th most popular website in the world, and the only one of those top sites run by a nonprofit. It is the largest encyclopedia in human history, one of the largest repositories of knowledge ever compiled, freely available, studiously accurate, and fueled by anonymous online commenters. It’s a modern miracle.

That’s not to say it doesn’t have flaws. Speaking as a longtime editor, it has plenty. And there are a lot of active debates about editorial decisions within it. But through a lot of hard work, we have mostly rendered those shortcomings irrelevant today. When you read a Wikipedia article, you are probably getting unbiased, factual information that would pass the sniff test of an expert in the field.

None of that was guaranteed from the outset. In fact, in 2004 or 2005, the reliability we now consider the baseline was almost unimaginable. Like the primordial Earth, we went through a turbulent period of chaos and flame in our first years. From those vicious forum battles emerged consensus policies on the conduct of our editors and what could go into our articles. Those policies, and our devotion to upholding them, have protected us from the worst impulses of online discussion and made Wikipedia what it is today.

I wasn’t active in the community at the time: I was in the fourth grade. But the debates are still available, if you know where to look. And I can see the impact these policies have had, not only in the stellar quality of Wikipedia today, but also through a case study in its opposite: Conservapedia.

You could be forgiven for never having heard of Conservapedia. Since it was created over a decade ago, it has gotten over 700 million views, slightly more than Wikipedia gets per day. But it is still our Doppelgänger, a reminder of what we could have been.

First, some history. Conservapedia was created in 2006 by a man named Andrew Schlafly. He was the child of Phyllis Schlafly, a conservative activist best known for helping sink the Equal Rights Amendment in the 1970s. His educational pedigree is impressive, going from Princeton to Harvard Law before becoming a conservative activist. He started editing Wikipedia in 2005, primarily focusing on his mother’s article and topics related to the pseudoscientific theory of intelligent design.

Note: a good metric for how contentious a Wikipedia article has been within the editor pool is the length of its talk page. Our article on Intelligent Design has the longest talk page on the whole goddamn site.

These edits had a clear point of view. Andrew Schlafly felt Wikipedia was being unfair to the Intelligent Design movement. As a result, he found very little traction, and in 2006, he split off and founded Conservapedia.

I will be generous to its content and say that it is at times not fully accurate. But it serves as an excellent case study, because it fundamentally diverges from Wikipedia in its interpretations of our three core content policies: No Original Research, Verifiability, and Neutral Point of View.

Before I begin, I want to make two notes. First, wiki articles are prone to change, so linking to them to prove a point is risky business. To avoid that, when I cite a Wikipedia or Conservapedia article, I will be linking to the version that was current as I was writing. That way, there’s no updating the articles after-the-fact.

Second, Conservapedia has a far smaller editor base than we do: as I write this, there are about 150 regular editors on the site. With fewer eyes, it’s harder to maintain the same kind of stylistic professionalism Wikipedia upholds. Out of respect to those editors, who are doing their job within the confines of Conservapedia’s mission, I’m not going to be discussing any spelling, grammar, or formatting errors. This site is meant to be a serious alternative to Wikipedia for people who feel we have failed to properly adhere to the facts. We’re going to treat them seriously.

Somehow.

Original Research

One of the problems Wikipedia faced early on is that our random anonymous editors may not be remotely qualified to talk about the topic they’re discussing. For instance, while I’m a pretty smart guy, I have only a limited education in political science. But I’ve added content to over a hundred articles on the topic. You don’t know who I am. You don’t know what my credentials are. Why should you trust what I write?

For that reason, we ban original research. Let’s say you’re a mathematician, and you discover a proof of some unsolved problem. You can’t then put that proof on the Wikipedia article for that problem, even though it is entirely mathematically correct. It’s original research: stuff you put together and found out yourself.

That changes once it gets published in a journal. We are scrupulous about our sourcing, and want every statement to be cited to an expert on the subject. Unlike most other outfits, we don’t prioritize primary sources. While they’re allowed, relying on them means we’re relying on our editors to interpret them, and find the most pertinent accounts from their own expertise. We prefer to outsource that and rely on secondary sources, analysis of those accounts by experts. That way, you don’t have to trust us.

Conservapedia explicitly does not share this policy. Scan through their list of how their policies differ from ours, and you will find this:

“We allow original, properly labeled works, while Wikipedia does not. This promotes a more intellectual atmosphere on Conservapedia. On Wikipedia, observations based on personal experience and interviews have been dismissed as “original research.” Here, we do not restrict research for articles in that manner.”

It is an understandable decision on their part. If you feel the reliable sources we use (mostly journalists) carry an unrecognized political bias that leads them to not be factual, you might want to do your own interviews. See also, “We do not allow opinions of journalists to be repeated here as though they are facts. Instead, we require authoritative support.”

The result of that decision, however, is that you give the opinions of your editors an undue level of weight. Scroll through their citations, and you start to find blogs, even on their most popular articles. To be fair, these aren’t just personal blogs from the writers. Mostly. Even with that caveat, however, none of these would qualify as reliable sources on Wikipedia. They might be accurate. They might be groundbreaking and innovative arguments that will be uncontroversial in a few short years. But we won’t cite them.

They also are fans of linking directly to YouTube videos. On Wikipedia, we prefer not to simply throw clips at people. Especially when they can be quite long and not searchable, we prefer to provide context. More notably, however, it would be original research. We allow the use of primary sources to cover only the most uncontroversial factual claims. For instance, you could use quotes from a novel to summarize the plot. Any interpretation of those sources, however, must itself be cited to another secondary source. We also do not allow what we call synthesis: basically, if I find an article that says Mike Pence was photographed buying a strap-on at his local sex shop, and an article saying men buy strap ons so their female partners can peg them, I still can’t use those to say Mike Pence secretly loves being pegged, and not just because no one wants to think about that. You can’t connect multiple reliable sources together to say something none of them individually said.

Conservapedia does this constantly, in a series of articles that have no counterpart on Wikipedia. I call them the “arguments” articles, and they are generally a list of points for, or refutations of points against, some topic in science or politics. The worst offender is probably that on Obama’s Religion, but you can also see it in Counterexamples to an Old Earth. I’ve linked you to a particularly clear example. Here, they cite a piece on declining SAT scores and a number of pieces of historical discourse to demonstrate that our intelligence has been going down over the last few centuries. None of their sources individually discuss this. That is the analysis of whoever wrote that bit of the article. Schlafly himself, in this case.

Now, there are drawbacks to Wikipedia’s approach. Our dedication to the secondary sources means that if they get it wrong, we get it wrong. Or, in the words of one of my favorite essays on Wikipedia policy, “If Wikipedia had been available around the sixth century BC, it would have reported the view that the Earth is flat as a fact without qualification. It would have also reported the views of Eratosthenes (who correctly determined the Earth’s circumference in 240 BC) either as controversial or a fringe view.”

But that doesn’t make Conservapedia’s approach the right one. In this case, it’s led them astray. While measured IQs have been declining in Europe and the US, that decline is a fairly recent phenomenon. And if they had consulted a few more experts before citing the Civil War letters and the Lincoln-Douglas debates, they might have learned that the 19th century was just as full of inane babble as today. After a hundred years, the babble gets forgotten, and the gems get preserved. The supposed greater intelligence of the past is an optical illusion. However commendable their interest in independently researching their subject matter, it’s caused them to report as fact claims that are fundamentally wrong.

Verifiability

The second core content policy is pretty self-explanatory. As a rule, you should be able to independently verify any statement you read on Wikipedia, solely by following our footnotes. We do not half-ass this: as I write this, the article on Donald Trump has 808 inline citations. While most articles aren’t that excessive, even fairly obscure articles like that of the Jurassic fish Leedsichthys will have a citation every hundred words.

We invest a significant amount of effort into making sure those references stay valid. When the URL we’re citing ascends to the great firewall in the sky, we use the Internet Archive’s Wayback Machine to resurrect it. We’ve even created bots that automate the process, and when they see sources that are still alive but don’t have an archive backup, they’ll make one. Just in case.
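
To make that workflow concrete, here’s a minimal sketch of the idea in Python. This isn’t the code our actual bots run; it just uses the Internet Archive’s public availability and save endpoints, and the function names and example URL are my own illustration:

    # A toy version of the dead-link workflow described above (not Wikipedia's real bot).
    import requests

    WAYBACK_AVAILABLE = "https://archive.org/wayback/available"
    WAYBACK_SAVE = "https://web.archive.org/save/"

    def find_snapshot(url):
        """Return the closest archived snapshot of url, or None if there isn't one."""
        resp = requests.get(WAYBACK_AVAILABLE, params={"url": url}, timeout=30)
        resp.raise_for_status()
        closest = resp.json().get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    def ensure_backup(url):
        """If a still-living source has no snapshot yet, ask the Wayback Machine to make one."""
        if find_snapshot(url) is None:
            requests.get(WAYBACK_SAVE + url, timeout=60)

    # A dead citation can then be swapped for its archived copy instead of being deleted.
    print(find_snapshot("http://example.com/some-dead-citation"))

A production bot would obviously need rate limiting, error handling, and a way to edit the citation itself, but the basic loop is the same: check whether a snapshot exists, make one if it doesn’t, and swap in the archived copy when the original link dies.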

We also do not, under any circumstances, allow circular citations: Wikipedia articles that cite other Wikipedia articles. Our articles are always subject to change as we learn more, so a citation claiming that we said something has no guarantee to still be a true citation in the future. That seems simple, but remember, there are a lot of outlets in the world that will just copy/paste content from our articles and report them as fact, and a lot of forks of our content to provide more reliability to other people.

XKCD fans will be familiar with the concept of citogenesis. That comic explains it better than I could, so just check it out. This is something we worry about a lot, and it’s happened several times already. Other editors’ experiences may vary, but far more of my time is spent verifying sources or looking for new ones than actually writing copy for any of our articles.

To be clear, we aren’t perfect: particularly in obscure entries that are only rarely read, things can slip through the cracks. And because we’re so beholden to the reliable sources we can find, our content gets biased towards subjects where information is available online. But that doesn’t change our general dedication to the principle, and efforts to apply it as best we can.

Conservapedia agrees with this idea in principle. In fact, verifiability is their first commandment. However, they add a caveat in the fourteenth entry on their list of differences from Wikipedia.

“We do not require contributing editors to have to explain themselves constantly and justify every single edit to prove that it conforms to an exacting set of rules which are designed to suppress original thought, new ideas and penetrating insights.”

I sympathize with this sentiment: the many rules and guidelines for Wikipedia’s content make for good articles but a high barrier to entry, and it’s too easy to bully new editors with a combination of aggressive policy-citing and faux-legalese. But in practice, their solution makes for some pretty sparse references sections. As I write this, their article on the benefits of Capitalism has ZERO citations or footnotes, and has never had one for the entirety of its almost nine-year existence. This article is currently the second entry on their main page’s “popular articles” list.

It’s not just Liberal hyperbole; there’s Liberal dislikes, Godless liberal, and Liberal hedonism, which apparently consists of Harlem Shakes and Miley Cyrus.

To be fair, these appear to be more like dictionary definitions, which don’t usually cite sources. But there is also an abundance of articles with nowhere near the amount of sourcing I would expect on Wikipedia. Their article on Theodicy, an important branch of religious studies, has only 2 citations. The Wikipedia equivalent has 110. Granted, our article is a lot longer and more in-depth (Conservapedia explicitly emphasizes brevity over thoroughness), but it’s still 6 times more citations per word. Their article on God has only 4 citations, equal to the number of citations in the first paragraph on Wikipedia.

They also don’t share our concern with circular citations. In their article on liberals, the reference that liberals support “Hatred” links to another Conservapedia article.

This is not just about comparing bibliography sizes. When you leave the references this sparse and discourage double-checking of uncited additions, things slip through the cracks. Take the Benefits of capitalism article again. That last point, the 10th, is complete balderdash. There are countless examples throughout history of capitalist nations imposing tariffs, restricting trade with rivals, and more. Free trade is a common feature of capitalist nations today, but as their own article on Donald Trump’s economic policy will tell you, free trade and capitalism aren’t necessarily the same thing.

Circular citations, unsupported statements of fact, articles completely lacking in references, these are the side-effects of this relaxation of Verifiability standards. Their decision to allow these gaps may have made their site more welcoming to new editors, but it has resulted in considerably less reliable and consistent articles.

Neutral Point of View

Before we start shamelessly mocking Conservapedia’s political bias, it’s worth taking a moment to think about what a Neutral Point of View means in an encyclopedia. After all, if one person says pi equals 3 and another says it’s an irrational number roughly equal to 3.14, does a neutral point of view mean not saying one of those is right and the other is wrong?

Instinctively, you want to say that neutrality doesn't apply there because the value of pi is a fact and not subject to debate (note: even that is way more complicated than you'd expect). It's an appealing argument. But there are a lot of reasons it doesn't work in practice. Countless issues today, from Global Warming to the accusations against Brett Kavanaugh, stray into questions of what is factual and what is only "likely" or even "speculated". There may be far stronger cases for one side than the other, and there may even be a consensus, but often there are detractors who historically have been right on occasion. How do you weigh those competing points of view?

We use the Principle of Due Weight. Essentially, we look at what the reliable sources say, in general, on the topic, and allocate space in the article to all the views they take, proportional to how widespread that view is. About 97% of Scientists agree that Global Warming is a real and human-caused phenomenon, so about 97% of the article on Global Warming is dedicated to that view. Well, sort of. It’s more complicated in practice but that’s our rule of thumb.

Again, we are not perfect in this regard. Wikipedia’s content still subtly reflects the perspectives of those who edit it, though usually more so in what we decide is notable enough to get an article, not the content of the article itself. We also are yoked to the consensus of reliable sources. Remember the flat-earth quote I had above. When the fringe views are actually right, we can’t reflect that.

Conservapedia sees those drawbacks as unacceptable failings. In their own words: "We do not attempt to be neutral to all points of view. We are neutral to the facts. If a group is a terrorist group, then we use the label 'terrorist' but Wikipedia will use the 'neutral' term 'militant'."

Already, I see real issues with their stance. The line between "terrorist" and "freedom fighter" is famously murky, and there is ample debate even within American political discourse about which groups count and which don't. There is a reason Wikipedia goes with "militant" and then enumerates exactly which organizations label which militants as terrorist groups. Take the Muslim Brotherhood, for instance. As I write this, Conservapedia includes it (along with both the Ku Klux Klan and John Brown's Anti-Klan Committee) as an example of a radical leftist terrorist group. That contradicts the US Government's position on the group under the Trump, Obama, and Bush Administrations. While Russia and several Arab states classify it as such, most of the Western World views it as a legitimate political party, one whose candidate won the Egyptian presidency in 2012. It's an Islamist group, and I disagree with its positions on most issues, but it's hardly a terrorist organization.

Conservapedia doesn't just think otherwise; it states it as a matter of fact. The Muslim Brotherhood are terrorists. Ipso facto.

A list of all the articles in which Conservapedia abandons a neutral point of view to push a conservative agenda would be a list of all articles on Conservapedia. They keep a running list of everything horrible liberals apparently believe in, and feature it so prominently it's above the table of contents. Their article on Fake News explicitly defines it as a liberal-only phenomenon in the first sentence. FDR and the New Deal apparently prolonged the Great Depression, referring to a consensus of historians without citing one. They make no attempt to hide a clear political bias anywhere in their articles.

And to be clear, that’s not inherently wrong. If Wikipedia were to suddenly start erasing articles on the atrocities of Stalinism, I would want a more conservative alternative I could ride my flying pig to when I needed information.

It becomes wrong when that strong editorial position spreads ahistorical bullshit all over the site. Their article on William Shakespeare describes him as "anti-feminist", as evidenced by the content of "The Taming of the Shrew". This is an impressive trick on Shakespeare's part, since feminism wouldn't exist until roughly two centuries after he died. Alexander Hamilton is also a conservative now, which is at least slightly more defensible than Shakespeare. It is still, however, nonsense. He was a federalist with nationalist tendencies, and does not fit neatly onto the left-right political spectrum of 21st century America. Just like every other great thinker who was born, lived, and died hundreds of years before any of us were born.

Nor is their revisionism limited to the distant past. They describe George Soros, a Hungarian Jew who barely survived Nazi occupation, as an anti-semite who worked for the Nazis. This particular conspiracy theory is an Alex Jones special which has been boosted by everyone from Glenn Beck to Hungary's authoritarian Prime Minister, Viktor Orbán. It is virulently antisemitic itself, and is part of far-right efforts to blame The Holocaust on Jewish collaborators.

It is discussed without context in the introduction to his article.

Still more harmful is their firm advocacy of conversion therapy. They describe it as an effective and charitable way to correct a mental illness, citing a 2007 study with crippling methodological flaws over the overwhelming consensus of psychologists, in order to claim that it is something other than a hateful and traumatizing form of pseudoscience.

And then there is Global Warming. It is by far the most important issue facing us today. The future of the human race rests on what we do about it in the next 20 years. Yet Conservapedia would have you believe that the science is inconclusive at best and against it at worst. I won’t pretend that this exercise in Young-Earth-Creationist navel-gazing is the source of the right’s climate change denialism. Yet it is still a small contributing factor to the most damaging aspect of the right’s platform today.

The danger of giving full room to fringe theories because the consensus might be wrong is that you are now subject to your most conspiratorial writers. However much I might sympathize with their desire for a Wikipedia that would recognize Galileo in his own time, it has caused their content to reflect the worst of the far-right’s paranoia and hatred.

Conclusion: Why Any of This Matters

It may seem like nothing is at stake here. After all, Conservapedia is extremely fringe even among the Christian right. Steve Bannon and Richard Spencer may believe some hateful shit, but even they don't think the Earth is 6000 years old. It has barely a hundred regular editors, and shows clear signs of minimal activity over the last several years. Remember that Liberal hedonism article, the one that talked about Miley Cyrus twerking and the Harlem Shake? There's a reason it uses examples from mid-2013: it hasn't been touched by a human being in the last five years.

It's not completely irrelevant today. It got roughly 50 million views in the last year, over 100,000 per day. It's an endless fount of material for people like me to mock, and I've barely scratched the surface there. But I see something deeper in this tragedy of an encyclopedia.

Wikipedia's content and community have survived the worst ravages of the internet, everything from Russian attacks to Gamergate, because our community of editors remains dedicated to a shared set of basic principles. We can disagree on content, style, weight, and more, but we are still working for the same goal. We are not a forum. We are not a newspaper. We do not have an editorial section. We are an encyclopedia. And every participant is on board with that shared vision.

Conservapedia is a case study in how that vision dies. It began when one editor abandoned the site to create a parallel version that would not contradict what he knew to be right, even when the reliable sources said he was wrong. I know what he felt when he did it. I feel the same way when the outcomes those policies produce go against my morals. But I didn't make Shankipedia, nor did any of the countless other editors who have disagreed with on-Wiki consensus, because we remained committed to the first principles of the site.

However virtuous his motivations may have been, the result is an abomination. What was meant to be an alternative to Wikipedia has collapsed under the weight of its own bias. It is unreliable and inaccurate. It promotes hateful conspiracy theories without qualification. What little is left of its community is toxic and dictatorial towards anyone who disagrees with even minor aspects of their ideology. They are, in short, exactly what you’d expect if you trusted the writing of an encyclopedia to an online forum.

None of that was unpredictable. It is simply what happens when the mods prioritize their personal agenda over what is best for the community.

The story of Conservapedia is a cautionary tale. It is a reminder that platitudes like “Neutral Point of View” can carry a lot of weight when we all believe in them. It is proof that these core principles can work miracles, even on the most fraught and contentious topics. And it is a live demonstration of what happens when we abandon those core principles as soon as they prove inconvenient to us.

1 in 5: a VERY deep dive into campus sexual assault statistics

1 in 5 women will be sexually assaulted at some point during their time in college. It’s a shocking number, one that’s led to a lot of agonizing and discourse across the political spectrum and a variety of reforms put in place on campus. As it should. There is no society in which a statistic like that should be acceptable.

It's also led to a lot of scrutiny from people who do not want to believe that sexual assault is such a problem in our universities. These people, mostly conservatives, point to a wide variety of perceived flaws in the original study to discredit its findings. They point to other studies with different methodologies that contradict the number. They accuse the authors of fudging the data to promote a political agenda. Debunking this study is a minor pastime in the right-wing media bubble, like shuffleboard or badminton. But do their critiques hold water? What's the truth buried in the data?

Before we begin, two warnings: I'm not going to be double-checking their regression analyses here, but there's no way to talk about this without covering at least a little math. So if you're one of those people who can't handle numbers, now would be a good time to leave. More importantly though, I'm gonna be touching on some heavy shit here. There won't be any graphic descriptions or stories. This is all numbers. But if that isn't your thing, don't feel bad noping out of this one.

1. The Survey

Generally speaking, when people cite "1 in 5", they're referring to this study by the Department of Justice. There are a lot of others that reach basically the same conclusions, but all the ones I've seen use essentially the same methodology and weighting and find similar results, so I'm gonna focus on this one.

Basically, they took two unnamed large universities, one in the South and one in the Midwest, and emailed every student there the same survey asking about their history with sexual assault. They broke it down into forced (i.e. violent) and incapacitated (i.e. drunk) sexual assault, while excluding suspected but not confirmed accounts in the latter category. So already, there's one way the numbers could be HIGHER than currently reported: not every victim is gonna be sure about what happened. They also looked at trends in attempted vs. completed, and a number of other things.

After some weighting, they found that 19% of women reported experiencing an attempted or completed sexual assault during their time in college: 12.8% for attempted, 13.7% for completed. If you read YouTube comments (and you shouldn't), you'll see people use those numbers to argue that the study is somehow fraudulent: 12.8+13.7=26.5, not 19.0. Because apparently you can't experience both. Incidentally, that overlap points to another way the survey understates the total amount of sexual assault at universities, though it wouldn't change the top-line number: they only ask whether someone has experienced these things, not how often. This is common across most of these surveys.
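For the YouTube commenters, the arithmetic here is just inclusion-exclusion: the 19% figure counts anyone who reported either category, so women who reported both are counted once, not twice. A quick back-of-the-envelope sketch, taking the published rates at face value (the study doesn't report the overlap directly, so this is an inference, not a quoted figure):

# Overlap implied by the published rates
attempted = 12.8  # % reporting an attempted sexual assault
completed = 13.7  # % reporting a completed sexual assault
either = 19.0     # % reporting either: the headline "1 in 5" figure

# Inclusion-exclusion: P(A or C) = P(A) + P(C) - P(A and C)
both = attempted + completed - either
print(round(both, 1))  # 7.5 -> roughly 7.5% of women reported both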

There are other interesting findings in the data, some more surprising than others. It’s not uniformly distributed through time: there’s a distinct “rape season”, roughly corresponding with fall semester. It peaks in September-October. More than half of all sexual assaults are committed on Friday or Saturday, which makes sense since the most common location for it is at a party. All of those are more pronounced for incapacitated sexual assault than forced, by the way.

The highest reported percentage is among seniors. There's a credible argument that you should only be looking at them, because counting freshmen in prevalence rates across the entirety of the college experience seems dumb, but there's a real risk of people forgetting about incidents earlier in their studies, or becoming less willing to count them as the victimization fades. Freshmen and sophomores are the most likely to experience this, so it's important to include them. And before you say "who the fuck forgets being raped", only a QUARTER of incapacitated sexual assault victims classified their experience as such in the survey.

That's roughly what it covers. I'm going to move on to the flaws and tradeoffs in the study in a moment, but first I want to point out something that really bothers me. You might have heard some variation of "1 in 5 women and 1 in 27 men" in one of these articles or consent workshops. That's not what the study finds. They found that 6.1% of men, or roughly 1 in 16, had been a victim of sexual assault. I'm not sure where the 1 in 27 number comes from, but it's exactly what would happen if you used this study as a source, then only counted completed sexual assaults for men and both attempted and completed assaults for women. If anybody knows better, please send me sources because I want to still have faith in humanity.

2. Shortcomings in the Dataset

While this study is good, it's not perfect. There are several real issues with how it handles the numbers, and where it draws them from, that should be concerning to anyone relying on them. That's not to say it's bullshit: these flaws are natural byproducts of well-intentioned decisions on the part of its authors. If they had done things differently, they would have just had other problems.

There is no way to get a perfect survey on a subject like sexual assault. Anyone who claims they have one isn’t arguing in good faith.

First off, let's talk about the dataset. I've already snuck in one issue with it: the choice of universities. The authors only looked at two institutions in the country, and while they were geographically distinct, they were demographically similar. They were both large, with roughly 30,000 and 35,000 students respectively. The results may therefore not be representative of the experience of significantly smaller universities. While there are counterpart studies which HAVE looked at smaller colleges and found similar numbers, with a smaller college comes a smaller sample to draw on, resulting in noisier data. You can mitigate this somewhat by including even more universities, but because of the significant overhead involved, most papers either use a smaller sample or make do with a lower response rate. More on that later.

The other issue is that they excluded all students under the age of 18. They kinda had to: otherwise they'd need to get parental consent for those people to respond. I've heard credible arguments that this exclusion could bias the results towards overestimating AND underestimating the prevalence. It's hard to say. Either way, their absence is significant: between them and other groups excluded from the study, only about half the students enrolled at either university were ever gonna be included in the data. With no information on the other 50% at all, it's hard to say what effect, if any, this might have.

3. Shortcomings in the Procedure

The authors of this study didn’t fly out to these colleges and personally interview over 6,000 students. They sent each participant a survey via email and had them fill it out online. Data collection of that form tends to get a low response rate. After all, how likely are you to respond to a random email asking you to fill out a questionnaire? And indeed, that’s what we see: response rates of about 40% at both universities, higher for women and lower for men.

That would be fine, if who responds to a survey and who doesn’t were random. But we know that isn’t true. Racial and ethnic minorities consistently under-respond to polls of all forms, and online polls in particular tend to include more people for whom the subject matter is relevant. That factor can lead to significant and at times catastrophic overestimates of relatively rare phenomena.

Put another way: if you have been sexually assaulted, you are more likely to be interested in a survey about sexual assault than if you have not been. You’re more likely to read the email, and you’re more likely to fill it out.

There are a lot of conflicting factors here. Victims of sexual assault may be less willing to answer, out of fear that they might be shamed or to avoid having to answer uncomfortable questions. There are any number of ways for the topic to be important to you without being an actual victim. You might know a friend, for instance, or simply be engaged with the topic.

But there are some aspects of the study that suggest there was an effect here. The response rates for men and women were markedly different: 42% for women, and only 33% for men. We also know that men are less likely to be victims of sexual assault. In fact, this is a consistent pattern across the board for studies that found a result somewhere in the 1-in-5 range. They’re mostly online surveys sent to students, and they almost always have a higher response rate among women than men.
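To get a feel for how much this kind of self-selection can move the needle, here's a toy sketch. Every number in it is invented for illustration; the point is the shape of the distortion, not the specific values:

# Toy illustration of nonresponse bias (all numbers invented)
true_prevalence = 0.10     # suppose 10% of all students are victims
response_if_victim = 0.55  # suppose victims respond at 55%
response_if_not = 0.35     # suppose non-victims respond at 35%

responding_victims = true_prevalence * response_if_victim
responding_nonvictims = (1 - true_prevalence) * response_if_not
observed = responding_victims / (responding_victims + responding_nonvictims)

print(round(observed, 3))  # ~0.149: the survey would report ~15%, not the true 10%

Even a modest gap in willingness to respond inflates the observed rate well above the truth, which is why the adjustments described next matter.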

Here’s where it gets complicated. There are ways to account for non-response bias, at least partially. The scientists who put this study together used three of those ways.

First, they compared the demographic information of their survey respondents to that of all the people who did not respond, and to that of the university as a whole. Wherever there was a demographic discrepancy, they gave more weight in the results to people underrepresented in the survey. For instance, nonwhite students were less likely to respond, so they counted the answers from nonwhite students who DID respond more.
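Mechanically, that kind of reweighting looks something like the sketch below. The group shares and rates here are invented, and the study's actual weighting scheme was more involved, but the principle is the same: count each group in proportion to its share of the whole student body rather than its share of the respondents.

# Toy post-stratification weighting (all numbers invented)
population_share = {"white": 0.70, "nonwhite": 0.30}  # share of the student body
respondent_share = {"white": 0.80, "nonwhite": 0.20}  # share of survey respondents
reported_rate = {"white": 0.18, "nonwhite": 0.22}     # rate reported by each group

# Unweighted estimate: average over whoever happened to answer
unweighted = sum(respondent_share[g] * reported_rate[g] for g in reported_rate)

# Weighted estimate: scale each group back up to its true population share
weighted = sum(population_share[g] * reported_rate[g] for g in reported_rate)

print(round(unweighted, 3), round(weighted, 3))  # 0.188 vs 0.192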

They weighted by four factors: which university they were in, their gender, their year of study, and their race/ethnicity. That list is pretty sparse. Most surveys would get a lot more demographic info on each person, and then figure out what to weight from there. The problem is that it’s hard to balance that extra information with guarantees of anonymity. Especially with a topic as fraught as sexual assault, it’s crucially important that participants don’t feel their answers might get connected back to them. Even without the ethical concerns, it can lead to lower response rates among people who HAVE been assaulted. Surveys without the same dedication to anonymity report significantly lower numbers, sometimes below 1%. So this is kind of a damned-if-you-do situation.

Second, they used something called the “continuum of resistance” model. Basically, it says that whether or not someone is willing to answer a survey isn’t a binary thing: the less likely you are to respond to it, the more likely you are to put off doing it. In other words, the demographics of the people who took the longest to fill out the survey probably match those of the people who didn’t fill it out at all, and their responses are probably similar.

This effect doesn’t always show up, but it looks like it did here. Nonwhite students were more likely to not answer the questions, and also (somewhat) more likely to be a late responder. They found no significant difference in answers between late and early responders, which suggests that whatever nonresponse bias existed was fairly small.
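The check itself is straightforward: split respondents by how long they took to answer and compare the rates, on the theory that the slowest responders are the best available stand-in for the people who never answered at all. A minimal sketch with invented counts:

# Toy early-vs-late responder comparison (counts invented)
early_victims, early_total = 190, 1000  # 19.0% among early responders
late_victims, late_total = 185, 1000    # 18.5% among late responders

gap = early_victims / early_total - late_victims / late_total
print(round(gap, 3))  # 0.005: a small gap suggests the nonresponse bias is also small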

The third method they used is less reliable. Essentially, they did a follow-up survey of all people who didn’t respond to the first one (note: they still knew who did and didn’t respond because respondents got a small cash award and they could see who collected it, though not which responses corresponded to which person), and asked them why they didn’t respond. Most nonrespondents said they’d either never received the emails or weren’t sure if they had, and only a very small number said they didn’t respond because they hadn’t experienced sexual assault.

Personally, I wouldn't have even included this section in the study. The response rate for this follow-up was abysmal: barely 10%, compared to nearly 40% for the original survey. It will also exhibit the same kinds of biases the first one did. For instance, people who would be interested in the first study but just didn't see it in their inbox will be more likely to respond to the second one than people who weren't interested at all. I mean, do you want to fill out a questionnaire about why you don't want to answer another questionnaire?

All in all, the authors of this study were meticulous and honest with their findings. They crafted their study to prioritize the privacy and comfort of their respondents, they were forthcoming about potential sources of error, and they made good-faith efforts to adjust for those sources wherever they could. I’ve read crappy studies, and I’ve read fraudulent studies. This one looks nothing like those.

However, there is only so much the authors can do to adjust for these factors. Their selection of methodology inherently comes with certain errors that are nearly impossible to correct. And while there is an argument that sexual assault victims would also be less likely to respond due to discomfort, the fact that there are many more nonvictims than victims means that even if that were true, the numbers would still probably be an overestimate. While the findings here are valuable, they are not gospel, and it’s likely they are inadvertently highballing it.

4. The Other Options Suck Too Though

Online surveys of university students are not the only way to answer this question. Conservatives often cite two other studies, both done by the government. The first is the FBI Uniform Crime Report, which isn't a survey at all. It's a thorough accounting of every crime reported to the police in a given year. They generally find somewhere around 100,000 reported rapes to have occurred each year, total, implying a minuscule percentage on campuses.

If you’ve made it this far into the post, you’ve probably already seen the problem with that sentence. The reporting rate for rape is really, really low. Only about a third of rape victims inform the police. And it gets worse. Until 2013, the UCR used the word “forced” in their definition of rape. If it wasn’t forced, it wasn’t counted. That would exclude many cases of coerced sex and even some cases of violent, forced sex (for instance, the people reporting it to the FBI won’t necessarily count marital rape, because people are awful).

One of my first jobs ever was data prep for the sexual assault division of my local District Attorney’s office. Even within the prosecutorial community, the FBI numbers are seen as comically low. We didn’t use them.

Instead, we relied on the National Crime Victimization Survey, the other source conservatives like to draw on. It accounts for the low reporting rate because it's an actual survey of a randomized sample. It's done through in-person or phone interviews, both of which significantly reduce the interest-bias you find in their online counterparts (you're more likely to answer the questions when there's a person on the other end). And it finds that roughly half a million rapes occur each year. More than the UCR, but it would still imply a rate of less than 1% for women on campus.

It has its own problems, though. The NCVS generally just asks “have you been raped?” or some variant, which we know from countless other studies doesn’t cover all or even most sexual assault victims. It’s likely that the NCVS is significantly lowballing the numbers as a result. They’ve tried to adjust for that in recent years, but most researchers outside the Bureau of Justice Statistics don’t think they’ve done enough, and I’m inclined to agree. Additionally, because the NCVS is explicitly done by a government agency, survivors will be less likely to respond to them for the same reasons they don’t report their assaults to the police. Think of it as the other side of the 1-in-5 studies. They are equally methodical, but where one errs on the side of overestimating when there’s a tradeoff they have to make, the other errs on the side of underestimating.

There are other studies, using some combination of in-person and phone interviews, online results, and other metrics, and different ways of determining whether or not a subject has been assaulted. Their results are all over the map, but tend to fall somewhere in between the NCVS and the 1-in-5 study. They also tend to fall on the high end of that range, so the real number is probably closer to 1-in-5 than to the <1% the NCVS reports. It could be 10%. It could be 15%. We can't be sure.

5. Why We Don't Have a Perfect Study

By now, you might be thinking "okay, so why don't we pull together some academics, do in-person interviews at a few dozen representative universities, and get some unimpeachable numbers?" After all, it's not like any of the issues with these studies are inherent. There's no law that says only the government can use direct sampling, or that you have to do everything online if you're talking to college students.

The real obstacle here is money. Online surveys are prevalent because online surveys are cheap. Email is free, so the main expenses are a few grad students to crunch the numbers, the salary of whoever makes sure the study is ethical, and whatever incentive you give people for participating. That 1-in-5 study probably cost about $75,000.

For in-person or phone interviews, you have to pay people to ask the questions. The more folks in your sample, the more people you have to pay, for longer. Then you have to vet those people to make sure they know what they're doing and won't influence people's responses. And you have to pay for travel to get those people to the various campuses. And you have to figure out how to turn their responses into data for the computer, which means either expensive Scantron machines or paying more people for data entry, and then there are the privacy concerns, because HTTPS doesn't exist in the outside world, so somebody has to oversee the data entry….

You get the idea. All told, a study like that one could easily set you back $15 million. That’s more than the total budget your average Sociology department gets in a year.

There are also ethical concerns. Direct interviews may have a higher response rate, but they can also take an emotional toll on sexual assault victims who will have to discuss their trauma with a complete stranger. Science is not done in a vacuum (except for Astronomy), and you have to be careful not to hurt the very people you are studying in the process of learning from them. Additionally, $15 million is not a small amount of money to throw at a problem. It’s hard to justify spending that much on a fact-finding mission instead of, for instance, paying for every untested rape kit in the state of California. There are better ways to allocate our resources here.

6. Why Is This What You're Fixated On

These numbers get complicated, but at this point it's fairly clear that the 1-in-5 statistic is not as reliable as we assume it is. It's probably too high (note: while it's less likely, it could also be too low), and when accounting for systematic errors it's probably somewhere in the 1-in-10 to 1-in-6 range. Where you think it lands depends a lot on what specific choices your preferred researchers made when handling the technical details of their study. Even the 1-in-5 authors believe in a much more nuanced take on the data.

That’s a good thing. Your average discourse in the media and in our political forums will always be more simplistic than the careful quantitative analyses of peer-reviewed journals. Scientists and scientific studies will disagree with each other based on their particular decisions over their particular methodologies. And while we don’t know for sure what the percentage is, we’ve narrowed it down quite a bit.

Specifically, we’ve narrowed it down to “too damn high”. 1 in 5 is too damn high. 1 in 10 is too damn high. 1 in 20 is too damn high. Even the more conservative studies outside the NCVS give staggeringly high totals of sexual assaults in our universities. We may not know exactly, quantifiably how bad the problem is, but we know that it’s bad, and warrants immediate action.

But the critics of this study seem to think otherwise. They seem to think that if there are flaws in this paper, then there’s no problem at all. They believe that because the studies we cite can’t guarantee us total certainty, there is no value in what they say. It is the worst sort of scientific illiteracy. Even if you allow for significant errors, and if anything I’ve been too harsh on the original paper here, the numbers would STILL be staggeringly high. You could assume that there was not a single sexual assault victim in either of the two universities who didn’t fill out that survey, and you’d STILL find that about 3% of women were assaulted during their time there.
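That last bound is worth spelling out, because it's the strongest version of the argument. The post doesn't show its work for the ~3% figure, so the inputs below are my reconstruction, using the completed-assault rate, the women's response rate, and the roughly-half sampling-frame coverage mentioned earlier. Treat it as illustrative:

# Worst-case lower bound: assume every woman who wasn't surveyed,
# or who didn't respond, was NOT a victim (inputs are my reconstruction)
completed_rate = 0.137  # completed sexual assault among respondents
response_rate = 0.42    # response rate among women
frame_coverage = 0.50   # rough share of enrolled students eligible for the survey

lower_bound = completed_rate * response_rate * frame_coverage
print(round(lower_bound, 3))  # ~0.029: about 3%, even under that assumption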

The science of accounting for sexual assault on campus is tricky and imprecise. There is a lot of room for careful critique of the numbers we have, and many questions for which we don’t yet have answers. But don’t let those uncertainties become a smokescreen for what we do know.

The Wild and Wacky World of Christian Media

When I was in high school, I was browsing through the sci-fi section of a local book fair when I came across two books that sounded right up my alley. I still remember the first sentence of the blurb: “In one cataclysmic moment, millions around the globe disappear.”

The series was called “Left Behind”.

I didn't make it past the first hundred pages. The authors had this weird need to add every character's relation to religion into their introduction, and it bothered me. That was fine. My bookshelves are packed with mediocre speculative fiction that I've opened only to close half an hour later.

It was only when I mentioned them to my parents that I learned that these weren’t just badly written sci-fi. The Left Behind series was the evangelical christian equivalent of Harry Potter, a massively bestselling book series telling a story that was as much prediction as fiction. I gave them a quick google, went “huh, interesting”, and moved on with my life.

I didn’t think about the Left Behind series again until 2014, when YouTube recommended to me that I watch the trailer for… Left Behind, the movie? Starring NICOLAS CAGE???

What followed was a years-long dive down the rabbit hole that is the Christian Entertainment Industry, culminating in the post you’re reading today.

A few minutes after I watched that trailer, I learned that not only was this movie not a fever dream spawned by an unholy combination of final exams, too much Fox News, and an ill-advised drunken viewing of The Wicker Man, it wasn’t even the first Left Behind adaptation on film. There was a trilogy from the early 2000s, covering the first three books of the sixteen-part series. They had been created by a company called Cloud Ten Pictures, which, according to Wikipedia, specialized in Christian end-of-the-world films.

But for a company to specialize in something implies that there are other companies that don't. This, in turn, led me to the many, many corporations over the years dedicated to creating literature, music, and of course, film, exclusively by and for the Evangelical Christian movement.

I want to begin by clarifying that I have not seen most of the movies these studios put out. There are hundreds, perhaps thousands of them, and especially with the oldest ones they can be pretty hard to find. But I have seen most of the ones which are on YouTube, which it turns out is a lot of them. And I see three eras, as distinct in tone and style as silent pictures, the talkies, and the technicolor blockbusters are from one another.

1951-1996: Prehistory

As with most movements, you can pick almost any date you like for when the Christian Media Boom began, but I place its foundations on October 2nd, 1951. That's when evangelical minister Billy Graham's brand new production company, World Wide Pictures, released Mr. Texas, which Wikipedia tells me was the first Christian Western film in history. I wasn't able to find a copy of Mr. Texas, but I did find a copy of the second Christian Western ever made, a flick called Oiltown, USA. It's a classic romp about a greedy atheist oil baron who is guided to The Light by a good-hearted family man named Jim and a cameo by Billy Graham himself. It was made in 1953, and it shows, right down to the black housekeeper whose dialect is two steps removed from "massa Manning". It takes a ten-minute break near the end to show us a Graham sermon in its entirety. And I have to tell you, that on its own makes the whole 71 minutes worth watching. He delivers his soliloquy with what I can only describe as a charismatic monotone: the same furious, indignant schoolmaster who chastises me with "When Jesus Christ was hanging on the cross, I want you to see him, I want to see the nails in his hands, I want you to see the spike through his feet, I want you to see the crown of thorns upon his brow!" also angrily comforts me that "Christ forgives and forgets our past!" It's the funniest thing I've seen in ages, and yet it's so earnest, you almost can't help but be moved.

There's actually a lot of room for thematic discourse on Oiltown, despite it having fallen so deep into obscurity it's got a total of one review on IMDb. Halfway through, there is a scene where Les, the oil baron, pulls a gun on one of his employees in a scuffle, before being disarmed by Jim, the Right Proper Christian of the show. The gun, which since "The Maltese Falcon" has served as a stand-in for the phallus, a symbol of masculine power that won't get you an NC-17 rating, is first brandished by Les. He is disarmed, or metaphorically castrated, by Jim. Jim engages in christian charity and returns Les's manhood to him by placing it on the desk. Our Christ stand-in both gives and takes away power, in equal measure. "Christ forgives and forgets our past", after all.

There’s also a weirdly anti-capitalist message buried in here, though it’s mostly post-textual. While it’s implied that Les is an adulterer and he clearly is capable of murder, the main sin we see in him is greed, manifested by a corporate desire to make money at the expense of a soul. “Here’s my god, two strong hands, a mind to direct them, and a few strong backs to do the dirty work!” he proclaims to Jim when they first meet. It’s unlikely a less christian movie could have gotten away with such progressive messaging in 1953, the same year Joe McCarthy got the chairmanship of the Senate Permanent Subcommittee on Investigations. But I digress.

More importantly, it sets up a few themes we will continue to see in christian film later on. The first, and most glaringly obvious, is the anvilicious message of “christians good, non-christians bad”. The only explicitly non-christian character in the movie, Les, is an adulterous cash-obsessed robber baron who gambles and blackmails and nearly commits murder. Contrast that with Jim and Katherine, the golden couple who can do no wrong, and you  see a deeply manichean worldview: for all Billy Graham’s talk that “there is not enough goodness you can do to save yourself”, in his world, good works are the unique domain of the christian people.

Second, you should note that even back then, it’s not enough for Les to simply be an evil man. He is also actively hostile to the church. He sees it as nonsense that he must protect his daughter from at any cost, and says as much to Jim. In the context of the story, that makes sense. This is a Beauty and the Beast tale, with Les as the Beast and Billy Graham as a strangely-cast Belle. For his arc to be how he becomes a good person through Christ, he has to begin at a point of maximal disbelief. But it leaves no room for the quiet disbelief or genuine faith in other religions embodied by most non-christians and even christian non-evangelicals. Sadly, that’s the beginning of a long trend.

World Wide Pictures continued to pump out movies for decades after those first hits. Most of them retread the ground broken by Oiltown, USA: a man goes through some rough ordeals and is ultimately saved by God and Graham.

 

Billy Graham’s outlet stayed the same, but the world around him changed. His brand of evangelicalism was strictly apolitical, and he had cozy relationships with democratic and republican presidents alike. He happily fought for anti-sodomy laws and a swift response to the AIDS crisis in equal measure. His brand was driven by gospel, not republicanism.

That strain of evangelical christianity died with Roe v. Wade, and the cultural upheavals of the 1960s and 70s. The dominance of traditional christian values had been under threat in America for years, but now, we had legalized what in the eyes of the church could only ever be baby murder. The response was a more political religious tradition, and the birth of modern Christian fundamentalism.

This tradition, while still hailing Graham as a forefather, owes just as much to Barry Goldwater and Ronald Reagan for its existence. It believed, and believes to this day, in young- or old-Earth creationism as a valid scientific theory that should be taught alongside evolution in public schools. It rejects the notion of the separation of church and state, arguing that the universally-christian founding fathers formed this country as a christian nation and discounting the words of Thomas Jefferson and the first amendment alike. It rejects the liberal attitudes toward sex that had become popular in the 60s, and perceives promiscuity, homosexuality, and “sexual deviancy” writ large as threats to the very fabric of society. And that’s just the tip of the iceberg.

As of 2017, 80% of evangelical christians subscribe to a religious theory called dispensationalism. Among its tenets is the belief that the second coming of Jesus Christ is tied to the establishment of the state of Israel, and that jews, all jews, must live there before Christ can return to end the world and sort all remaining souls into heaven or hell. What’s more, they believe that it is their duty to bring about that outcome as quickly as possible: they wish to be vindicated in their beliefs. This manifests in rabid support for Israel, and particularly for the Israeli right-wing and its West Bank settlers. Don’t mistake that for support for the Jewish people, however. More on that later.

It also left many of its members open to the worst impulses of the American right. There has always been a paranoid streak in our society, all the way back to the Alien and Sedition Acts. Even Billy Graham himself railed against the creeping influence of communism. Under the leadership of his son, Franklin, and the former segregationist/Moral Majority founder Jerry Falwell, those impulses were magnified into widespread belief in nationalist, even white nationalist conspiracy theories. Many evangelicals genuinely fear the dawn of a New World Order, seeing the seeds of it in the globalist economy and the formation of the UN. They see the influence of "international bankers", the creation of shared currencies like the euro, even the Catholic church as vectors for the Antichrist's eventual rise to power.

 

The prehistoric era of the christian film industry ended in 1994, through the actions of a man named Patrick Matrisciana. Sixteen years before, he had created a christian production company called “Jeremiah Films”, which is still around today. As I’m writing this, its home page prominently displays a trailer for a “documentary” claiming that the Antichrist may be either an alien or a sentient artificial intelligence.

I will also note that its homepage redirects to its store.

Whatever diminished form it may take today, in 1994 the company, or at least its founder, had a far wider influence. With distribution help from Jerry Falwell, Matrisciana produced the documentary The Clinton Chronicles, ostensibly through a group called “Citizens for Honest Government”.

The film is available in its entirety on YouTube, complete with an opening informing us that "unauthorized duplication is a violation of federal law". Beyond the standard and largely credible accusation of philandering on the part of Bill Clinton, it also claims that he and Hillary engaged in a widespread conspiracy to traffic cocaine through Mena Airport in Arkansas, partook in it themselves, sexually abused young girls, and murdered anyone who threatened to expose them. It relied heavily on the testimonials of a man named Larry Nichols, a disgruntled former Clinton employee who had been fired for malfeasance in 1987, when Bill was still governor of Arkansas. The production also paid over $200,000 to various individuals to support his claims. While Matrisciana denied that Falwell had anything to do with the payments, that only leads me to wonder where the money came from instead.

According to the New York Times, about 300,000 copies of The Clinton Chronicles made it into circulation. The general audience for the conspiracies it introduced to the world numbered in the tens of millions.

At this point, the christian film industry had crossed the event horizon. The Clinton Chronicles was an undeniably secular, completely political film, paid for by church dollars with the intent to take down an opponent of the Moral Majority. And it had gone viral in ways that no prior production from the industry had, all the way back to the days of Oiltown and Mr. Texas. Not only had their message reached a far wider audience by humoring their most paranoid impulses, they had uncovered a way to make large sums of money doing so. And so, at long last, we reach the films that introduced me to this world of mysteries.

1996-2007:  The Left Behind Era

The last years of the 20th century were a perfect breeding ground for paranoid conspiracies. The world was undergoing radical social change at a previously unimaginable pace, but more importantly, it was uniting. This was the age of Fukuyama's "The End of History", when liberal democracy seemed to be the future across the globe. We saw the formal creation of the EU in 1992 and the Euro in 1999, saw phrases like "made in China" go from obscurity to ubiquity, saw the dawn of the internet in the public consciousness. And with the collapse of the Soviet Union, there was no longer an easy villain on which to foist our cultural ills.

With such societal upheavals comes a natural fear of the changes they bring. The same impulses which today fuel the nationalism of Donald Trump and populists across the globe stoked the fears of the Christian right in 1996. The idea of consolidating the whole of humanity under one financial and political roof meant no more America, the Christian nation. While I doubt many in the Christian right thought in those terms, there was very much an undercurrent of worry that one day you might wake up and not recognize your own country anymore.

Combine this with a religious fascination with the end of the world, and the already-established belief that this end, with its all-controlling Antichrist, was nigh, and you have a recipe for not just Clinton bashing but full-fledged devotion to conspiracy.

These conspiracies defined the second great epoch of the Christian Media Industry. And just as the first era centered on the campy sermons of Billy Graham's heyday, this one centered on a single pillar: the Left Behind series.

 

Left Behind started in 1995 as a book series by the minister/writer duo of the late Tim LaHaye and Jerry B. Jenkins. There are twelve books in the main series, plus three prequels, a sequel depicting life after the Second Coming, a spinoff series of children’s books, a Real-Time Strategy game a la Starcraft which gives you the option to play as the Antichrist but not to train women in roles other than nurse and singer, and of course, four movies.

Today, it has sold 65 million copies, more than Little House on the Prairie.

The plot is an exercise in Eschatology, the study of biblical accounts of the end of the world. The rapture happens in the first book, taking all true christians and all children from the earth. The remaining word count is devoted to the titular folks who were left behind as they struggle through the seven years of tribulation before the Second Coming of Christ. The Antichrist is Nicolae Carpathia, originally the Romanian president, who, with the initial backing of two "international bankers", soon becomes the UN Secretary-General and later the leader of a group called the Global Community. The Global Community is a totalitarian regime only one or two steps removed from the most fanatical ideas of the New World Order. It has a monopoly on the media through its news network, GNN, it kills anyone that gets in its way, and most importantly, it establishes a new global religion to supersede all others, including Christianity. At first, this religion is called "Enigma Babylon One World Faith". No, I am not making that name up. Later on, when Carpathia is assassinated only to be resurrected and possessed by Satan himself, this is replaced by "Carpathianism".

Earlier, I compared these novels to the Harry Potter series for the religious right, and while that comparison is apt in terms of magnitude, it is not accurate in terms of what it means to its audience. Harry Potter was nothing more or less than a work of fiction. It was well-written, easily accessible escapism.

To the religious right, Left Behind is more like The Hunger Games: a stylized representation of our own anxieties about the future. I doubt that any evangelicals believe that the antichrist will be named Nicolae Carpathia, or that a pilot named Rayford Steele and a journalist named Buck Williams will be the keys to our salvation. But as of 2013, 77% of evangelicals in the US believed we are living in the end times right now. For much of Left Behind's multimillion-strong audience, this is not theoretical.

I have not read much of the Left Behind series. My knowledge of its text and subtext is largely limited to the extensive and lovingly-written wikipedia summaries of each book. This is enough to tell me such critical details as the type of car Nicolae and the false prophet, Leon Fortunato, use to flee Christ when he finally returns in Glorious Appearing, or the fate of satanist lackey Viv Ivins (Get it? VIV IVIns, VI VI VI, 666?), but not enough for me to feel comfortable discussing their thematic undertones.

I have, however, seen the films. Created in the early 2000s by Cloud Ten Pictures, they weren't even the studio's first foray into christian armageddon. They were preceded by the four-part Apocalypse series, which covered the same ground but without the Left Behind label. Note that the first three Left Behind novels had already been published before Apocalypse: Caught in the Eye of the Storm was released, so these movies likely owe their existence as much to a rights dispute as to the artistic vision of their creators. As for the movies themselves, I cannot do the experience of watching them justice in words. You have to see this shit for yourself.

By 2000, however, Cloud Ten had worked out the kinks and were ready to move forward with their film. It, and both of its sequels, are currently available on YouTube. If you have the time, I encourage you to watch them. I don’t know how it feels to be a born-again christian watching them, but for someone outside the tribe, they are the epitome of “so bad, it’s good”.

That is, until you remember that millions of people out there are dead serious in their appreciation for this story. It departs the realm of the realistic only a few minutes in, when it portrays an immense squadron of what appear to be F-15 Eagles invading Israeli airspace and attempting to carpet-bomb Jerusalem.

Whatever your opinions may be on Israel, the threats to its existence don't take the form of American-made fighter jets flattening cities. Yet the reverent tone in Chaim Rosenzweig's voice when he says "no one has more enemies than Israel" tells us that this sequence is not an over-the-top farce or spoof. This movie genuinely wants us to believe that this is what life is like in Israel.

In three movies and 270 minutes of screen time, there is not one mention of Palestine or the Palestinian people. There are only passing references to what “The Arabs” might allow with regard to the Temple Mount.

Earlier, I cautioned you not to confuse the evangelical movement's support for Israel with support for Jews. By the second movie, you begin to see why. The climax of the film centers around an announcement by rabbi Tsion Ben-Judah in Jerusalem, "the world's most knowledgeable and respected religious scholar". This announcement is ultimately revealed to be that he now recognizes Jesus Christ as the messiah, in light of overwhelming biblical evidence. This image, of a renowned Jewish scholar renouncing his faith to support the ideals of born-again Christianity, is how LaHaye and Jenkins see the Jewish people: Christians in all but name, only a few short steps from embracing the True Light.

To be fair, in that regard the Left Behind series is an equal-opportunity offender. The dichotomy of Christians good, non-Christians bad first seen in Oiltown has reached its logical extreme here. Every person who is remotely reasonable or kind is either a born-again Christian or becomes one by the end of the series. The general dickishness of Les Manning has now escalated into open fealty to the literal Antichrist.

Perhaps the best evidence for this worldview is the depiction of the events immediately after the Rapture: as though freed by the absence of Evangelicals to release our inner beasts, those left behind immediately turn to violence and looting. A main character's car gets stolen off the freeway (what happened to this looter's car? did he just leave it on the interstate?), martial law is declared within 12 hours, and the world falls into chaos to an almost comical degree. Some of this is realistic, in that if hundreds of millions of people disappeared in an instant there would be vast upheavals, but humanity's experience with natural disasters has taught us that there are usually as many good samaritans as there are opportunists.

Not so in Left Behind. Recall what Billy Graham said 47 years before: “there is not enough goodness you can do to save yourself”. The good samaritans were all raptured, because all the good samaritans were Christian.

These films, like The Clinton Chronicles, openly embrace the paranoid impulses of their audience. The Antichrist rises to power through the influence of the global financial elite, manipulating and contorting the UN into a vehicle for his ambitions. This is explicitly tied to the drive to unify the world under one government, one language, one currency, and one culture. When Carpathia reveals his plan for world domination in the final moments of the film, the world map he uses to display it is not the Mercator projection that had been used in public schools for a century. Nor is it the newer Robinson projection, which the National Geographic Society would happily have given them for free.

Instead, viewers are greeted with the obscure and rarely-used Azimuthal equidistant projection, which uses the north pole as the center and distorts the southern hemisphere to the point that Antarctica looks like a JPEG artifact. It has neither fidelity of shape nor area, and to the best of my knowledge has only one common use:

It’s the flag of the United Nations.

[Image: the flag of the United Nations]

With all that said, it's also undeniable that the Left Behind movies did not have the same pull as the books. The first film barely made back its budget, and the sequels didn't even get a theatrical release. In fact, the production quality was so bad that Tim LaHaye sued Cloud Ten Pictures, claiming breach of contract. Left Behind 3: World at War came out in 2005, the last Left Behind novel came out in 2007, and after that the Christian right largely moved on. The Google Trends data for the series shows a slow, petering decline.

 

2007-2014: The Dark Ages

After Left Behind, the Christian Media Industry underwent its own 7-year tribulation. The circumstances of the world around them remained ideal for this conspiratorial mindset. If anything, they were improving. The 2008 recession had taken railing against international banks from the realm of antisemitic claptrap into the mainstream, as Americans watched their tax dollars funneled into Wall Street bailouts. And the Christian right now had a black, Democratic president who had spent time in a Muslim country in his youth. And indeed, the birther hoax, along with the less common but still widespread belief that Obama himself was the Antichrist, emerged at this time.

Even these environmental shifts, however, could not overwhelm the pressure of demographics. My generation is perhaps the least religious in American history, and as the first millennials entered adulthood, evangelical membership began to plummet.

It wasn't just a matter of religion, though: the Christian right had built its rhetoric around certain fundamental issues, most notably opposition to gay marriage. That made sense in 2001, when that view was shared by 57% of the electorate, but that majority was dissolving rapidly and with no end in sight. Their issue profile didn't just fail to appeal to the future of America; it actively drove it away.

In 2011, 87% of evangelical ministers felt their movement was losing ground. It was the highest number from any church leaders in any country.

The Christian right needed to rebrand, and fast. It needed to completely overhaul its message in a way that spoke the language of millennials, with their emphasis on equal treatment of minority groups, without alienating its traditional congregation, which still cared deeply about traditional-values issues like abortion and gay rights.

From that effort came the greatest triumph of the entire industry, a film that catapulted it out of irrelevance and into the national spotlight. It has been the elephant in the room since I began this piece. It’s time to discuss God’s Not Dead.

 

2014-Present: The Second Coming

God's Not Dead exploded onto the national scene. While relatively small by the standards of mainstream Hollywood, its nearly $65 million box office take makes it the most successful film out of the Christian film industry by a factor of two. It earned more in its opening weekend than the gross of all the Left Behind movies made up to that point combined. It even came out seven months before a Left Behind remake, the Nicolas Cage vehicle that was meant to bring more professionalism to the series. That remake's box office take ended up at less than half that of God's Not Dead, despite opening in more than twice as many theaters. This was the new face of Christian Media.

Especially in comparison to the apocalyptic bombast of their predecessors, the God’s Not Dead series is deeply personal. Each movie is, at its heart, a courtroom drama, depicting a case in which God is, metaphorically, on trial. In the first, it’s a university classroom where a professor is forcing a student to sign a pledge that God is dead. In the second, it’s a literal court case, concerning alleged misconduct by a teacher bringing Jesus into the classroom. In the third… We’ll get to that later.

This innovation makes them vastly superior vehicles to their predecessors. I have never been forced to sign a pledge that God is Dead to pass a college course, nor has any student in American history, but we have all encountered a pretentious professor with a political axe to grind. They appeal to certain universal experiences which are inherently more relatable than The Antichrist.

That is not to say that these movies are good. The first one, for instance, tries to go for a Love, Actually-style story featuring several disconnected narratives that are ultimately united by a common theme, but its choice of narratives dooms it from the start. Love, Actually succeeded because each story was roughly balanced in terms of the stakes involved, and there really was no "main plot". Neither is true in these movies, and the result is that you spend a lot of your viewing time wondering why you're watching the travails of these two preachers trying to rent a car to go to Disney World and when you'll get back to the interesting part in the classroom.

Thematically, however, they are fascinating in ways that none of their predecessors are, because these don’t just show a heroic struggle of good vs. evil or serve as vehicles for a sermon: they show atheists making their case, as the Evangelical Christian movement hears it. That deserves a closer look, so we’re going to discuss them individually. Well, the first two at least. The third was a box office flop and isn’t even available on PureFlix’s website anymore.

God’s Not Dead (2014):

 

Just to make sure we’re all on the same page, here’s a brief summary: at an unnamed university, freshman and devout Christian Josh Wheaton signs up for a philosophy class led by the militant atheist, Professor Radisson. Radisson demands all his students begin the class by signing a pledge that God is Dead. Josh refuses, Radisson tells him that if he doesn’t sign he will have to prove God exists to the class, and Josh agrees. There are several other subplots, most of which are there to show how horrible non-Christians are, including:

  • Amy, a liberal gotcha-journalist with an “I <3 Evolution” bumper sticker on her car, who gets cancer and is dumped by her boyfriend when he finds out (“I think I have cancer”, “this couldn’t wait until tomorrow?”), then converts when she realizes she is alone in the world.
  • A Muslim student named Ayisha who has secretly converted to Christianity thanks to sermons by Franklin Graham (son of Christian Film’s progenitor, Billy) on her iPod, and who is then disowned by her father when he finds out.
  • Professor Radisson’s abused Christian girlfriend, Mina, and her struggles with their relationship and the care of her senile mother.
  • Pastor Dave, played depressingly straight by one of Pure Flix’s founders, trying and failing to get to Disney World.

After a long debate, Josh persuades the class that God’s not dead, and the characters from all these plots wind up together at a concert by Christian pop-rock group The Newsboys. Radisson gets hit by a car outside, but as he is dying, Pastor Dave persuades him to convert to Christianity, saving his soul. Before the credits, they scroll through a long list of real-world court cases where Christians have allegedly been persecuted in the US, on which this film is based.

There are many cringeworthy moments, from the Good Christian Girl approaching Ayisha as she’s putting her headscarf back on and telling her “you’re beautiful… I wish you didn’t have to do that”, to Pastor Dave’s missionary friend looking to the sky and saying “what happened here tonight is a cause for celebration” while standing over Professor Radisson’s corpse, but those aren’t good for more than sad laughter.

For all its bigotry, and there is plenty of barely-disguised islamophobia in this film, it succeeds because of its small scale. Instead of having a grand conspiracy from the Arabs to control the world, we have a single abusive father who cannot respect his daughter’s choice. Instead of the literal Antichrist, we have one overbearing professor. The adversaries are on a human scale, and therefore feel more believable.

That believability only makes them more pernicious. The underlying assumption, that all non-Christians are evil and all Christians are good, still holds. It’s just harder to dismiss now that these characters are within our experiences of the world as we know it. The Muslim father is physically abusive, beating Ayisha and throwing her out of his house when he finds out she’s converted. Amy’s boyfriend literally dumps her because he doesn’t think it’s fair to him that she has cancer (“you’re breaking our deal!”, he exclaims). Professor Radisson’s friends and colleagues are snobbish, pretentious, and demeaning to Mina over everything from her wine handling to her inability to understand Ancient Greek, which, to be fair, is an accurate depiction of most English majors including myself.

And then, of course, there is Radisson. Unlike the other characters, the writers at least try to give him some depth and development. He is an atheist because as a child, he prayed to God to save his dying mother and it didn’t work. The experience left him a bitter, angry shell of a philosopher who, deep down, still believes in the Christian deity. He just despises him. In the climax, Josh berates his professor with the repeated question, “why do you hate God?” and Radisson finally answers “Because he took everything from me!” This prompts Josh to lay down the killing blow: “How can you hate someone if they don’t exist?”

Score one for the Christians.

 

For a movie billed as “putting God on trial”, God’s Not Dead spends very little time engaging with the arguments. Radisson only ever makes two points with his screen time: he argues that Stephen Hawking recently said that there is no need for God in the creation of the universe, and he argues that God is immoral because of The Holocaust, Tsunamis, the AIDS crisis, etc. More shocking, however, is how little time gets devoted to Josh’s arguments in favor of God. We see a brief animation about the big bang and some vague discussion of Aristotle and the steady-state cosmological theory, we see the argument that evolution of complex life happened very quickly when compared to how long only single-celled organisms existed, and we see the argument that without moral absolutes provided by God, there can be no morality. All told, it’s only about 20 minutes of screen time.

Each of these arguments is deeply flawed. While Aristotle did believe the universe was eternal, Maimonides strongly disagreed with his conclusions, as did Thomas Aquinas. It’s unfair to characterize science as having been united in support of the theory. Instead, it’s a perfect example of the scientific method at work: a question was asked, it was debated by countless minds until we developed the technology to test it, and we found our answer. Modern-day forms of animal life did emerge only very recently when compared to the history of all life on Earth, but not only did that process still take millions of years, it also reveals a flaw only in the theory of evolution as Darwin put it forward over a century and a half ago. Since his time, we’ve learned that evolution likely functions in what’s called a punctuated equilibrium, where evolution stagnates until systemic environmental changes force the life web to adapt. Hell, even Darwin wrote “Species of different genera and classes have not changed at the same rate, or in the same degree.” And the way Josh dismisses the entire field of moral philosophy with a wave of his hand is borderline offensive. Suffice it to say, there are plenty of ethical frameworks that have no need for a God whatsoever, many of which have existed for thousands of years. I’m partial to Rawls’s Veil of Ignorance, to name one.

Nor is Josh the only one committing egregious crimes against the good name of reason. Radisson’s Hawking non sequitur is a complete appeal to authority: “Hawking says there is no need for a God, so God doesn’t exist”. It’s a piss-poor argument, and Josh is entirely in the right when he shuts it down a few scenes later. And as offensive as Josh’s dismissal of moral philosophy was to me, I must imagine Radisson’s appeal to injustice would feel much the same to religious viewers. The question of human suffering in a world governed by God is the problem of evil, and the attempt to answer it is called theodicy; both are as old as monotheism itself. Every faith has an answer for it, which is why so many of the devout are converted in times of intense suffering. If you’re unfamiliar with this subject, check out The Book of Job.

While both are flawed, Radisson’s are unquestionably worse. Josh’s arguments are the same kind of nonsense you will find on the Christian right from places like PragerU. Josh even cites major evangelical apologists like Lee Strobel in his presentations. His failures reflect real deficiencies in American Evangelical discourse when compared to the breadth of Christian theology, and it’s understandable that a college freshman might not be up to date on Contractualism or the Categorical Imperative.

Radisson’s, by contrast, bear little resemblance to the anti-God arguments you might hear from an atheist professor. He uses some antitheist buzzwords like “Celestial Dictator”, a favorite of the late Christopher Hitchens, but there is no engagement with what Hitchens meant by that phrase. “Celestial Dictatorship” refers to the fact that many traits Christians see in God (that he knows everything, is all-powerful, and judges you not just by your actions on Earth but by the intent in your mind to do good or ill) are traits we would call tyrannical despotism in a human being. It’s a nuanced point that you can debate either way. Radisson uses it as a cheap insult. If this were your only experience with atheists, you would think they were all morons.

Some of this is malice, to be sure. It’s impossible to watch this movie and not see the anger the creators feel towards atheists. But there is also apathy here. Once Josh and Radisson agree on the terms of their debate, the professor moves on to assign the class their reading for the week. Two works: Descartes’s “Discourse on the Method”, and Hume’s “The Problems of Induction”.

Hume never wrote a piece with that title. In fact, while he talked at great length about that topic, as I write this, the first result on Google for “The Problems of Induction” tells us that Hume never even used the word “Induction” in his writings. They could have pulled a list of his works off Wikipedia and gone with any title on it, and no one would have batted an eye. But they couldn’t be bothered. Instead, they made one up, based on some vague recollection of the topics Hume covered. The movie doesn’t understand the arguments against the existence of God. It doesn’t really understand the arguments for Him. And it doesn’t care to try, because that’s not its purpose.

 

Its purpose is spelled out in the list of lawsuits it shows you during the end credits. Most of them have nothing to do with the existence of God; they’re either about the right to discriminate against LGBT people at universities or about misconduct from pro-life groups. I’ve taken screenshots of the list, feel free to peruse them.

You should also note that most of these cases were brought forward by a group called “Alliance Defending Freedom”. The Southern Poverty Law Center classifies them as an anti-LGBT hate group, and their file on the group is a haunting look into the worst of Christian fundamentalism.

The ADF is the legal wing of the far-right Christian movement, and has been instrumental in pushing the narrative that anti-LGBT-discrimination laws impinge on Christians’ First Amendment rights.

That is the underlying message of this film. It has nothing to do with the existence of God. If it did, it would have devoted more time to the debate. It wants you to believe that in this country, Evangelical Christians are a persecuted minority, spat upon by Muslims, atheists, and the academy at large.

And it is their fundamental right, granted to them by God and the First Amendment, to discriminate against The Gay Agenda.

God’s Not Dead 2 (2016)

Cinematically, the sequel is in many ways an improvement over the original. It ditches the first film’s “Christ, Actually” style in favor of a single, unified narrative. While the first movie’s cast of characters (with the notable exception of Josh Wheaton) do show up, their storylines are either much shorter or directly related to the core plot.

They have also changed venues, from the metaphorical trial of a college classroom to the literal trial of a courtroom. A Christian teacher, played by ’90s Nickelodeon sitcom star Melissa Joan Hart, is discussing nonviolent protest and Martin Luther King when a student asks her if that’s similar to the teachings of Jesus Christ, and she answers with a few choice quotes from scripture. This lands her in hot water with the school board, and rather than apologize for her statement, she stands by her beliefs. The school board brings in the ACLU (“they’ve been waiting for a case like this”), and tries to not only fire her, but revoke her teacher’s license as well. The trial soon expands to cover whether Jesus the man was a myth, subpoenas of local preachers’ sermons, even a case of appendicitis. In the end, the judge finds for her, and the Newsboys play us out with a new song.

I want to begin by making something very clear: the Christians are in the right here. While religion has no place in a science classroom, I can’t imagine any way to teach AP US History without mentioning Jesus. Martin Luther King was a reverend, and his biblical inspiration is an established historical truth.

Also, Jesus existed. Even if you throw the entire Bible out, both Roman and Jewish scholars refer to him long before Christianity as a religion took off. That doesn’t necessarily mean that he was the Messiah, or even that most of the stories told about him actually happened. But the near-universal consensus of historians and theologians across the globe is that a Jewish rabbi named Jesus lived in Nazareth and was crucified on the orders of Pontius Pilate.

This film’s problem is not its inaccuracy or ignorance: it is its slander. I actually spoke with a friend of mine who works at the ACLU about a hypothetical case like the one in the movie. He told me they would absolutely weigh in: on the side of the teacher. The American Civil Liberties Union’s official position is that, while proselytizing from teachers is unacceptable, mentions of scripture in context are fair game, as are nearly all religious practices by students, on- or off-campus.

Each God’s Not Dead film ends with a new list of Alliance Defending Freedom cases allegedly depicting infringements on religious freedom. In several of the ones I saw in this film, the ACLU wrote briefs supporting the ADF’s position. You can see them on their website. They’re not just lying about the foremost advocate for Civil Liberties in this country, they’re stabbing their own former allies in the back.

There is a reason why the filmmakers made this particular organization the villain of this story. And it’s not just because the ACLU is the front line of the legal opposition to their pro-discrimination interpretation of the First Amendment. It goes back nearly a hundred years, to another trial over religion in the classroom: The Scopes Trial.

 

In March of 1925, Tennessee passed the Butler Act, a law banning the teaching of Evolution in public schools. The ACLU, then only five years old, offered to defend any teacher arrested for violating the bill. A man named John Scopes took them up on it, and in July of that year was taken to court for teaching Darwin in biology class. The Scopes Monkey Trial, as it came to be known, quickly became a widespread national story, and the town of Dayton (population 1,701) played host to the finest prosecutors, defense lawyers, scientists, religious scholars, and reporters in the world. The proceedings themselves were a madhouse, and to my knowledge are the only time a lawyer for the prosecution has been subpoenaed as an expert witness for the defense. Some day, I’ll write another post on the events of the trial, but until then I recommend you read the play Inherit the Wind or watch the 1960 film. Both are based on the actual events of the trial.

Scopes was found guilty, and while he won the appeal on a technicality, the Tennessee Supreme Court declined to say the law constituted an establishment of religion. Indeed, it wasn’t until 1968 that the Supreme Court of the United States ruled that such bans violated the establishment clause. But the trial still stands as the beginning of the end of Creationism in public classrooms. Some of that is warranted: the testimony of prosecution counsel, legendary orator, religious scholar, and three-time Democratic presidential candidate William Jennings Bryan produced one of the great debates of all time on the nature of skepticism and the historicity of scripture. It’s also one that Bryan lost, badly. The legal victory has done nothing to heal the wounds that defeat inflicted on the creationist cause.

That is the underlying purpose of this story: to relitigate the Scopes trial. It even has a climax featuring an unorthodox move by the defense, when the lawyer calls his own client, the teacher, to the stand as a hostile witness. During that testimony, he harangues her into tearfully admitting that she feels she has a personal relationship with God, then, in a masterful display of reverse psychology, uses that admission to urge the jury to destroy her.

In Inherit the Wind’s version of Bryan’s testimony, the character standing in for him eventually claims that God has told him that Darwin’s book is wrong. The defense lawyer mocks him for this answer, declaring him “the prophet Brady!” Same scene, different day.

Their fixation on this trial doesn’t just hurt them by forcing them to slander the ACLU to keep the same opposition in the courtroom: they also overlook a much more interesting legal plotline. During the trial, the school’s lawyers subpoena documents from local preachers, including all their sermons. This actually happened, and whatever your opinion on LGBT protection laws, it should make you feel a little queasy. The pastors had a legitimate case that subpoena power over the pulpit could be abused to prevent the free exercise of religion, even as their opponents had a case that public figures’ speeches advocating for political ends should be admitted to the public record. It’s a complicated issue which sets the First and Fourteenth Amendments in direct conflict with one another, and could make for a truly thrilling drama. In God’s Not Dead 2, it’s just a footnote. Another way to show the ACLU is evil.

Forget that the ACLU actually opposed the sermon subpoenas.

 

There are many other carry-overs from the original film. It is, if anything, even more hostile to atheists than its predecessor. One of the subplots follows the child who asked the question about Martin Luther King and Jesus in the first place. She has recently lost a brother, which has driven her to Christ in secret. Her parents are proud skeptics, and are offended by what they see as her teacher proselytizing in class. They are also completely apathetic to the death of their other child. At least Radisson showed some small semblance of caring for his fellow Man.

Once again, the only plotline featuring nonwhite characters also features a physically abusive father. In lieu of Ayisha, they turn to Martin, a Chinese student with a minor role in the first film, who converts to Christianity and is disowned by his father for it. I am more sympathetic to this story than the last, since anti-Christian oppression is a real and well-documented problem in China, but the lack of other nonwhite characters (besides Pastor Dave’s friend, who has done nothing in two movies) makes the choice stick out all the same.

And it has no problem giving a truly heinous interpretation of our Constitution. The defense lawyer, Tom Endler, begins his case by (correctly) pointing out that the phrase “separation of church and state” does not appear in our Constitution. Instead, it came from a letter Thomas Jefferson wrote to the Danbury Baptists, assuring them that the government would not interfere with their right to worship. Endler then claims that this has been twisted in today’s times to mean that all religion must be excluded from the public sphere. This reflects an interpretation of the Establishment Clause shared by much of the Christian right called “Accommodationism”.

 

Essentially, the accommodationist perspective argues that the Establishment Clause should be read narrowly, covering only the formal establishment of a state religion, not requiring that the government maintain no religious preferences at all. They argue that the United States was established as a Christian Nation, though not of any one denomination, and that laws which reflect that ought not be struck down. In particular, they see a role for religion in general in the public legal sphere: that it “combines an objective, nonarbitrary basis for public morality with respect for the dignity and autonomy of each individual”. And there are some compelling arguments for the theory. Yet it rests on a tortured reading of the Founding Fathers’ discourse before and after the Constitution was ratified.

They were clear, for instance, that the protections of the First Amendment did not only apply to Christians. George Washington, writing to one of the nation’s oldest Jewish congregations the year before the First Amendment was ratified:

“It is now no more that toleration is spoken of, as if it were by the indulgence of one class of people, that another enjoyed the exercise of their inherent natural rights. For happily the Government of the United States, which gives to bigotry no sanction, to persecution no assistance, requires only that they who live under its protection should demean themselves as good citizens, in giving it on all occasions their effectual support.”

Nor was Washington the only one who extended these protections to Jews. In Thomas Jefferson’s autobiography, he wrote of a proposed addition of “Jesus Christ” to the preamble of his Virginia Statute of Religious Freedom, a predecessor to the First Amendment. It was defeated soundly, and the bill was passed without the reference. Jefferson writes: “they meant to comprehend, within the mantle of it’s protection, the Jew and the Gentile, the Christian and Mahometan, the Hindoo and infidel of every denomination”.

More specifically, on the subject of Jewish schoolchildren being taught the King James Bible, he wrote: “I have thought it a cruel addition to the wrongs which that injured sect have suffered that their youths should be excluded from the instructions in science afforded to all others in our public seminaries by imposing on them a course of theological reading which their consciences do not permit them to pursue”.

Clearly a man who felt that biblical teachings like those mandated in the Scopes Trial were allowed under the Establishment Clause.

As for the argument that Religion provides a nonarbitrary basis for public morality, that same Virginia Statute has an answer: “That our civil rights have no dependence on our religious opinions any more than our opinions in physics or geometry”.

That’s not to say that Accommodationism is a wholly bankrupt ideology. It’s all well and good to say that our Government cannot prefer one religion over another, but there are plenty of cases in which a full commitment to the Establishment Clause would constitute a de-facto violation of Free Exercise. But the version espoused by the creators of God’s Not Dead 2 is not interested in those grey areas. They believe that the United States was established first and foremost as a Christian Nation, and that our laws should reflect that supposed truth.

Or, in their words: “Unfortunately, in this day and age, people seem to forget that the most basic human right of all is the right to know Jesus.”

Where We Stand Today

God’s Not Dead: A Light in Darkness, the third installment of the series, was released in March of last year. It couldn’t even make back its budget, grossing less overall than either of its predecessors did on their opening weekends. Yet the Christian Film Industry lives on, with hundreds of movies available on PureFlix and more coming out every year.

Don’t mistake this for market domination. Even within conservative Christian circles, total interest in these films has never managed to eclipse that of Rush Limbaugh alone. The conservative media sphere is vast, and even God’s Not Dead was dwarfed by players like Fox News.

But that doesn’t mean it is meaningless. Even the third film of the series reached more people than the average readership of the National Review, long seen as the premier intellectual forum of the conservative movement. And unlike Fox News, these movies function largely invisibly: you won’t find them on Netflix, but to the target demographic, they’re sorted by subcategory (“apocalypse” and “patriotic”, to name a few) on its Christian doppelgänger. Their message is being heard loud and clear.

And that message is toxic. The world on display in God’s Not Dead, Left Behind, and all their knock-offs and contemporaries is one unrecognizable to those of us on the outside. It is a world divided into Christians and evildoers, where nonbelievers are at best abusive fathers and at worst servants of the Antichrist. It is a world in which the majority religion of the United States is subject to increasing victimization and oppression on the part of an Atheist elite hellbent on destroying God. It is one where powerful institutions for the public good, the ACLU, the UN, even our high schools and universities have been bent to their will. It is one that tells you that your identity is under attack, and that you must act to defend it.

Whatever they may have been in 1951, today these movies are not sermons on God. They are political propaganda. They are a carefully crafted cocktail of Christian trappings and conspiracy theories, designed to make their viewers see Muslims as hateful child-beaters, Atheists as amoral oppressors, and the fundamental tenets of liberalism and pluralism as attacks on their faith.

It doesn’t have to be this way. The same year that gave us God’s Not Dead gave us Noah, a biblical story that grossed six times as much and got far better reviews. And even in these films, there are moments of real honesty and insight. In the first Left Behind film, there’s a scene in a church where everyone in the congregation has been raptured, except for the Preacher. We get to watch him, alone with the symbols of his God, processing his loss and what that meant about his faith. It’s a little over-the-top, but it also forced me to put my phone down and think about the ethics of what I was watching. You wind up pondering consequentialism and virtue ethics, wondering why his role in his congregation’s salvation doesn’t count, whether it should if his motives were corrupt, all while watching real grief from a man wrestling with his faith.

Maybe I’m just a sucker for climactic scenes where men call out God in the middle of empty churches. But I think there’s more to it than that. There are good moments, lots of them, in these films. If they wanted to, these directors and actors and producers could force their audiences to confront their own assumptions, to strengthen their faith through genuine interrogation. They could give us a catechism for the modern, more uncertain age.

They can. They just choose not to.

Polls are Tricky Business

The odds are good that every hot take on political polls you’ve ever read is wrong.

That’s not to say that the various writers of opinion columns everywhere are lying to you. They just don’t hold advanced degrees in statistics, so they all make the same mistakes. They also usually haven’t spent much time, if any, working at a polling outfit or some equivalent. I understand where they’re coming from.

But the misconceptions they have get carried over into the greater world. That’s what I want to correct.

1. Polling Numbers Aren’t Numbers

This is perhaps the most important, and most frequently forgotten about one: polls don’t give you numbers, they give you ranges.

In any survey, there is an inherent statistical uncertainty because you haven’t given a questionnaire to every voter in the U.S. We handle that with a margin of error: when Gallup reports that Trump has a 41% approval rating, what they really mean is “Trump’s approval rating is about 41%, give or take a bit”. You can calculate that bit from the size of the sample. Most polls aim for at least 1,000 respondents and about a 3% margin of error.
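If you want to see where that “give or take a bit” comes from, here’s a minimal sketch, assuming a simple random sample (which real polls only approximate) and the usual 95% confidence level; the function name and numbers are just for illustration:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95% margin of error for a proportion p in a simple random sample of n people.
        return z * math.sqrt(p * (1 - p) / n)

    # A 1,000-person poll reporting 41% approval:
    print(margin_of_error(1000, p=0.41))  # ~0.030, i.e. "41%, give or take about 3 points"

The exact formula matters less than the habit: the reported number only means anything together with the range around it.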

This is important, because we tend to only care about who wins or loses a race. Say, for instance, I gave you two polls before the 2016 election: one predicted that Hillary would win by about a point, and the other predicted that Trump would win in a landslide. Most people, including most reporters, would instinctively say that the latter was more accurate: it successfully predicted Trump’s win. But any pollster will tell you that instinct is wrong: the former isn’t just more accurate, it’s a lot more accurate. After all, it got the actual result within its margin of error, and the Trump-win poll was way off the deep end.

2. Polling Numbers Aren’t the Same as What the Respondents Said

This one is a bit less well known. Let’s say I have a poll of 1,000 people, and it reports that Trump has a 41% approval rating. That means 41% of the people in the sample supported him, so 410 people said “yes, I approve of Donald Trump’s performance”, right?

Wrong! You see, for a poll to work, you need an accurate sample of the population, which means an accurate sample of the population needs to pick up their phones and answer the questions. But most people don’t answer their phones when a random number calls. As of 2017, the average response rate for a phone survey is less than 10%.

It gets worse. There are two different ways you can run a poll: you can have a computer call random numbers and record responses through number presses, or you can hire an entire phone bank of people to call random numbers all day. One of those is significantly cheaper than the other, so now most polls are automated. But automated call systems are prohibited by law from calling cell phones. If you don’t have a land line, you will never hear from an automated polling outfit, and it’s impossible for you to be in the sample.

The population of people with land lines does not look that much like the general U.S. population. They skew older, whiter, wealthier, more conservative, and more rural. If we just left the sample as is, every poll would be like that Trump-landslide poll from earlier.

Pollsters correct for that by weighting the sample: some people’s responses count for more than others, depending on how much of the electorate their particular demographic blend is likely to be and how many people like them are in the sample. This also introduces the potential for human error: you don’t know how many millennials are going to be part of the electorate next year, and neither do Pew and Gallup. They take an educated guess, and if their guess is wrong, their poll will be too.
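Here’s a toy version of that weighting; every group and percentage below is invented for illustration, and real pollsters weight on many dimensions at once, but the arithmetic is the same:

    # Post-stratification in miniature: weight = (expected share of electorate) / (share of sample).
    electorate_share = {"18-29": 0.16, "30-64": 0.58, "65+": 0.26}  # the pollster's educated guess
    sample_share     = {"18-29": 0.06, "30-64": 0.44, "65+": 0.50}  # who actually picked up the phone

    weights = {group: electorate_share[group] / sample_share[group] for group in electorate_share}
    print(weights)  # roughly {'18-29': 2.67, '30-64': 1.32, '65+': 0.52}

A young respondent here counts for almost three times as much as average, and if the educated guess about the electorate is off, every weighted number is off with it.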

The good news is, they’ve gotten very good at guessing. The bad news is, the combination of low response rates and spotty coverage, particularly of blacks, latinos, and the young, means that the people you find in those categories can count for so much that random noise throws off the sample set. In 2016, one 19-year-old black man from Illinois who was all-in on Trump was so heavily weighted he threw off the LA Times tracking poll to the point that they predicted Trump would win by 5 points.

And if you’re thinking that they called it right and the other polls were wrong because Trump DID win, remember that Hillary won the popular vote by 2 points. The tracking poll was trying to score the national popular vote, not the electoral college margin, and it was off by more than all the polls that “got it wrong”.

3. Never, EVER trust the crosstabs

If you’re reading an op-ed, and they quote a poll as saying that “Hillary’s support among blacks is comparatively lower than Obama’s”, or something similar, feel free to stop reading.

When polling outfits release their results, they don’t just give you the top line numbers. They also break down the results by race, gender, education, political party, etc. These are called “crosstabs”, and they generate them exactly how you’d expect: they slice up the sample to include only the subset that shares that trait, and look at the results for just that group.

The problem is that these crosstabs are a lot smaller than the actual sample. Even the largest one, men vs. women, will only be about half the size, and that alone makes the margin of error about 40% bigger. That shit ain’t linear: it grows with one over the square root of the group size. When you’re talking about even smaller groups like Republicans, men without a college degree, or black women ages 18-29, it gets much worse.
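To put rough numbers on that, under the same simple-random-sampling assumption as before:

    import math

    # The 95% margin of error scales with 1/sqrt(n): shrink the subgroup and the uncertainty balloons.
    for n in (1000, 500, 100, 50):
        moe = 1.96 * math.sqrt(0.25 / n)  # worst case, p = 0.5
        print(n, round(moe * 100, 1))
    # 1000 -> 3.1, 500 -> 4.4, 100 -> 9.8, 50 -> 13.9 (percentage points)

A crosstab of fifty respondents has a margin of error north of ten points, before any of the weighting issues above even come into play.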

Combine that with the hefty overweighting of certain populations because they’re less likely to be reached by the pollsters, and as much as 15% of that crosstab analysis could come from one person’s opinion.

The only exception to this, and I mean the only one, is if the same crosstab result holds across large numbers of polls for a long time. We know, for instance, that Trump’s approval rating is still very high among Republicans, because even with as much as a 15-20% margin of error per poll, enough polls have shown it that we can be sure of it. But small fluctuations in the crosstabs, between different polls, in a short amount of time? It’s meaningless. It gives the pundits something to agonize over between Trump tweets.

4. National Polls are National

Despite what the entire news media would have you believe, the polls did moderately well in 2016. According to FiveThirtyEight, on average they predicted Hillary would win by about 3.5 points. In fact, she won by about 2.1. An error of 1.4 is high, and when it’s across a polling average it’s indicative of a systemic problem, but it’s well within the acceptable margins. You wouldn’t, for instance, be angry if she won by 6 points and the polls predicted she’d win by 7.4.

The problem is that what the polls were measuring (the national popular vote) and what we all really cared about (the vote margins in the key swing states to put each candidate over the edge) weren’t the same thing. They were correlated, in the sense that in most presidential elections, the candidate who wins the popular vote also wins the electoral college, but they aren’t identical. They can go in different directions.

That, of course, is what happened in 2016. Clinton won a close but decisive victory in the popular vote by racking up huge margins in blue states and high turnout in deep-red states like Texas, while Trump won the exact right mix in the right places to eke out a win in the electoral college. The polls were largely right. But we were wrong in our interpretations of them.

In other words, be very careful to understand what question each poll is actually answering. It may not be the same as the one you’re asking.

Fin

As a general rule, you probably put too much faith in the exact numbers a poll reports to you. Polls don’t lie (usually), but the people interpreting them for you can, and you have to always be aware of the inherent uncertainty in any survey. Pollsters stake their livelihoods on the quality of their predictions. They are very, very good at what they do. But we can make their job easier by learning how to read what they produce.

Megarachne

Forty years ago, scientists revealed a fossil of the largest spider that had ever lived. While never reaching the fame of a T-Rex, the foot-long arachnid became a mainstay of natural history museums around the world, unparalleled in its ability to creep out and disturb children and parents alike.

Until the moment it wasn’t.

Decades later, we were told that everything we thought we knew about this animal was incorrect. For twenty-five years, the museums, the documentaries, and the general paleontological community all got it wrong. How did they mistake the creature the first time, and why didn’t anyone notice until 2005?

It all begins in 1980. An Argentinian paleontologist named Mario Hünicken quietly announced an extraordinary discovery. He had found a fossil in the sediment near Bajo de Véliz which dated back to the late Carboniferous, 300 million years ago. It was a foot long, and appeared to show most of the animal’s body and three of its legs.

It looked like this.

[image: the Megarachne fossil]

He named it “Megarachne”, or “Giant Spider”. It did the term justice. Based on this fossil, Megarachne was an ancient ancestor of modern-day tarantulas. With a length of 13 inches and a legspan of over 20, this specimen dwarfed even the largest spiders today, the Giant Huntsman and the Goliath Bird-Eater.

Using a technique called X-ray microtomography, which essentially uses X-rays to build a 3-D model of the fossil and reveal otherwise invisible details, Hünicken began to learn more about Megarachne’s biology. There were two visible eye sockets, but also a central protrusion that could be space for more. It had an extensive set of chelicerae (essentially, a spider’s jaws) at the front, which we see as a bulbous protrusion. They were uncommonly wide and developed for a spider, and may have been large enough to have substantial pushing power on their own, giving the spider more ability to maneuver even large prey to its mouth.

While unimaginable today, a tarantula of this size would not have been that out-of-place in the Carboniferous Earth. The atmosphere was far more oxygen-rich, which meant that arthropods and insects could grow far larger than they can today. This was an era with dragonflies the size of pigeons and seven-foot millipedes. Megarachne would not have wanted for prey.

It’s unlikely this creature spun webs like the Golden Orb-Weaver. Instead, it would have built funnel-like nests from its webbing and waited for a lizard or amphibian or large insect to pass by. Lunging out, it would have used those scoop-like chelicerae to grab the animal, then inject it with paralyzing venom. Or, perhaps even more horrifying, it might have wandered the floors of the lush rainforests which covered the world at the time, stalking its prey like a tiger. If it did, it would have to be careful: even this far into antiquity, the largest predators of the time could be as large as your average bear.

Unfortunately, it was hard to tell too much of this for sure. Megarachne was only known from this one fossil, and while several casts had been made for further study, the original, complete with the hidden details Hünicken uncovered, was essentially lost: sold to an anonymous collector and locked in a vault somewhere. It wouldn’t resurface for another 25 years.

As a result, a great deal of what we knew about this animal was conjecture, based on tiny details only Hünicken and a few others had seen. He wasn’t lying, to be clear: casts of fossils can only show the imprint of the object, which obscures subtle details within the structure.

Hünicken’s microtomography had revealed what appeared to be signature spider traits within the partial fossil: cheliceral fangs which could potentially deliver venom, even a sternum (in arachnid anatomy, the plate on the underside of the cephalothorax). They weren’t as well preserved as the rest, so you had to extrapolate a bit, but it looked like they were there. And besides, the shape of the animal was clearly that of a proto-tarantula.

And yet, there were some doubters. Dedicated arachnologists pointed out a number of inconsistencies between Megarachne and other spiders, most notably the suture between the abdomen (back end) and cephalothorax (front end). Sutures are basically immobile joints, and in humans only exist in our skull. It meant that this spider wouldn’t be able to bend its body the way tarantulas today do, like this:

[image: a modern tarantula bending its body]

You can even see the suture for yourself in the cast at the top of this post: it’s directly above the curved line around the abdomen. There’s a white spot in the middle. It’s more visible in the original fossil, but you can make it out if you look closely.

Even Hünicken himself acknowledged the discrepancy, along with a few others. But they were easily explained away. 300 million years is a long time, after all.

Meanwhile, Megarachne was busy going as viral as a Carboniferous-era arthropod could go. We’ve always had a weakness for giant spiders, and here was a genuine monster. This thing had more than its fair share of museum displays, an unusual trait for its era. Before the dinosaurs, but after the absolute freak show of the earliest animals, the Carboniferous had one thing going for it in terms of public appeal: the giant dragonfly. In fact, much of the time before the dinosaurs gets skipped over when discussing the extraordinary variety of life on Earth, but I digress. The point is, by the 1990s the museums were in the pocket of Big Megarachne.

 

Let’s recap. The creature we’re talking about would have looked something like this:

[image: reconstruction of Megarachne as a giant spider]

That image is to scale, of course. It doesn’t look like a modern-day tarantula. The body is fused into one big chunk, the suture at work. You can also see the spatulate chelicerae: they’re the two giant growths below its eyes. The smaller limbs between them and the legs are called pedipalps. They’re used to help maneuver prey while eating, and also as tongues and noses. And also penises sometimes. Spiders are weird.

And for about 25 years, that was how it was. Megarachne was a bizarre ancestral spider, made gigantic by an oxygen-rich atmosphere and sporting a set of fangs the size of lightbulbs. A few spider-specialists in the community grumbled that it might have been a Ricinuleid or Solifuge, but for the most part, it was accepted.

And then everything changed.

In 2004, another fossil was found in the same rock formation. It was unquestionably Megarachne: one telltale feature was the identical, and unusual, eye formation. It also looked a lot less like a spider. It’s the middle one in this picture:

[image: the 2004 fossil, center]

By February of the next year, a new team of Paleontologists, advised by Hünicken himself, published a new paper: Megarachne was not a spider. It was a eurypterid.

The Eurypterids have long since vanished from the Earth, so we don’t have any experience with them like we do for spiders. For that, we should be eternally grateful. Also called “sea scorpions”, Eurypterids dominated the seas from about 450 to 300 million years ago, and lasted for a long time after that. They were filter feeders and apex predators, and ranged from a few inches long to the size of an American Alligator. They would have looked something like this:

[image: a eurypterid (Pterygotus)]

In a single paper, Megarachne lost all its mojo. Not only was it no longer a record-holding behemoth, it was fairly small for its order. Nor was it a fearsome ambush predator: it fed itself by swimming through riverbeds, using its many arms to capture the tiny invertebrates that lived in the mud and silt.

The scientific community didn’t question the results. The evidence was blatantly obvious, and Hünicken himself had co-authored the paper. Indeed, the speed with which the consensus on the animal changed is an example of science’s greatest quality: the ability to recognize when it is wrong and self-correct.

Yet the legacy of their mistake lives on to this day. Species names are hard to change once they’ve been assigned, so Megarachne retained its name. There is now a bottom-feeding eurypterid, not unlike a lobster, whose name directly translates to “giant spider”. But this paper also came at an inopportune moment for a much larger entity: the British Broadcasting Corporation. Some years before, the BBC had aired the hugely successful “Walking With Dinosaurs”, a high-budget docuseries narrated by Kenneth Branagh that won three Emmys. It in turn spawned a sequel, “Walking with Beasts”, that catalogued the time between the Cretaceous extinction and today, and a prequel, “Walking With Monsters”, which would air later that year.

“Walking With Monsters” is a masterpiece of the genre, and I encourage anyone interested to watch it. Like its predecessors, it’s rivaled only by big-budget action movies in the quality of its special effects, but with a degree of accuracy unparalleled by any cinema. The producers consulted 600 paleontologists, paleobotanists, geologists, and even astronomers to ensure that its depiction of a billion-year story was scrupulously accurate to the scientific consensus. They devoted segments to every era of life’s history before the dinosaurs, and with each one showed not only the path of evolution in our earliest ancestors but also the signature creatures of each epoch. And when it came time to pick the signature animal for the Carboniferous, there was only one natural choice: the largest spider that ever lived.

Selden, Corronca, and Hünicken published their paper several months before “Walking With Monsters” aired, but it was still too late to change it. Megarachne was the star of the entire Carboniferous segment; there was no way to easily cut it out. Nor could it be replaced. They could find a new animal, but that would mean consulting all the experts again, even more money for the special effects budget, a new script for an entirely different animal, hauling Kenneth Branagh back into the recording booth, and countless other barriers. It was either cut a hundred million years from the story of life on Earth, or bite the bullet and air the episode.

The BBC opted for the latter. It is, to my knowledge, the only time it has ever knowingly and intentionally aired fake news.

And so, despite being left behind by science, Megarachne lives on, not in the literature or the museum exhibitions, but in the minds of a generation of impressionable science nerds who saw it fight for survival on the television.

I am not ashamed to say that I am one of them.