When English Teachers Try to Science

It’s not often that I get mad at an article.

I can get angry at the content. In this day and age, it’s hard not to. I might even get outraged at the arguments it’s making. But I’m not angry at the article itself. I’m mad at what it’s saying, which is similar but not quite the same.

Today, I want to talk about one that left me furious.

Two weeks ago, I was browsing Quillette magazine, the epicenter of the Intellectual Dark Web. For the uninitiated, the I.D.W. is a haven for free thinkers, academics, and intellectuals who feel drowned out or left behind by the stifling PC discourse of our universities. This leads to some… uneven content. Its best is extraordinary. Its worst is abysmal.

When I came across Myles Weber’s article “When a Question of Science Brooks No Dissent”, I thought it was one of the good ones. Quillette writers have done amazing work critiquing the way we approach science today, and how it’s enabled things like the Replication Crisis. I was excited to read this. I was hyped.

Within two paragraphs, that hype turned to bitter disappointment, as I realized I was actually reading a perfect example of the Intellectual Dark Web at its worst.

Before I go over this piece, I encourage you to read it yourself so you know I’m not misrepresenting it. It looks long when you click it, but most of the page is comments.

Speaking Power to Truth

The thesis of Myles Weber’s piece is that there is an unjustifiable degree of climate alarmism in academia and on the left in general. He argues that we have shirked our duty to question the supposed consensus on the impact of global warming, and that many people who should know better have absolutely no idea how any of the science works. He thinks that his fellow professors “forget what our job is: Not to tell the students what to think, but rather to teach them how to think for themselves.”

That’s a serious accusation, especially since he levels it at earth-science professors and his department’s “self-appointed expert on climate matters”. And he describes some astounding anecdotes of the scientific illiteracy of his peers. He argues that their devotion to global-warming alarmism shows a lack of intellectual curiosity, and that his fellow professors are passing it on to their students.

I can’t deny that educated professors should know the basics of how a greenhouse works, or that Minnesota isn’t in danger of an imminent glacial flood. But it’s worth looking at the examples he gives of the questions they should be asking. What does his idea of intellectual curiosity look like?

It turns out his idea looks nothing like actual scrutiny and a lot like tired conservative talking points. Every semester, he gives his students a series of questions on the climate, to demonstrate how much they need to learn before they can understand the issue of climate change. One example: “Which greenhouse gas accounts for more of the tropospheric greenhouse effect than all the other greenhouse gases combined?” The correct answer is “water vapor”.

This question seems innocuous, but it alludes to a common argument from conservative climate skeptics. That argument has been debunked countless times, and we’ll go over it in more detail later, but it’s not good-natured skepticism. It’s propaganda.

Then there is this passage, where he argues that glacier melt is actually a good thing: “Under such conditions, rivers that swell every spring from snowpack melt would stay swollen into late summer from glacial melt. This is almost always a good thing while it lasts since the extra water helps people downstream irrigate their crops. (Moisture trapped in a mountain glacier is useless when it is not downright destructive.) This is yet one more reason why a warming climate is preferable to a cooling one.”

I’ve never seen the glacier variant before, but the argument that global warming is either a good thing or at least not that bad is as common as the water-vapor myth, and is held even by the higher-ups in our current government.

Finally, there’s his choice of scientific issue to challenge. 97% of climate scientists agree that humans are causing global warming, roughly as many as accept evolution, and significantly more than consider vaccines entirely safe. But he harangues his colleagues over the only one of those topics explicitly mentioned in the Republican Party platform.

None of this should be surprising. Weber begins his piece, not by talking about scientific illiteracy, but by slamming Barack Obama for politicizing the Sandy Hook massacre in 2012. Apparently, only six days after the atrocity, he was calling on foreign diplomats to honor the dead children by fighting global warming.

This non sequitur betrays his political agenda. A dispassionate skeptic would not spend nearly 500 words attacking a former president for politicizing a tragedy six years ago, in an article about academia. More than that, the vignette sounded off to me. Barack Obama was not a perfect president, but this did not sound like the man who broke down in tears in his response to the massacre.

I found a transcript of the remarks Weber was talking about. I encourage you to read them for yourself. They are eloquent and insightful: a sermon on the fundamental experiences that we all share, on the capacity of tragedy to bring out the best in people, and on how important it is that we remember that unifying force as we face the new, global challenges of the 21st century. I was moved.

Obama also barely mentioned global warming. It was only one of several examples of the challenges we must unify to face, an afterthought in his remarks. Despite quoting extensively from the speech, nothing Weber said about it was true. He even got the date wrong. The Sandy Hook massacre happened on December 14th, 2012. He said Obama gave the remarks six days later, on the 20th. But according to the presidential archives, he gave that speech on the 19th. I’m not sure how he missed it, but it was probably an honest mistake.

In an interesting coincidence, the conservative outlet The Washington Examiner ran an article about the speech that implied Obama used it to make Sandy Hook about global warming. That article came out on the 20th.

That’s Not How Science Works

The fact that this piece naively parrots Republican talking points doesn’t disprove its core thesis that we shouldn’t “brainlessly push climate-change alarmism.” Having a conservative agenda does not mean you’re wrong.

Being wrong does. And Weber’s article, which takes his colleagues to task for their scientific illiteracy, makes grievous errors every time it turns to scientific topics.

Take the water vapor question I mentioned earlier. The implication is that, because water vapor is more important in shaping our climate, we don’t have that much to worry about from a small increase in carbon dioxide.

There are three issues with this line of reasoning. First, if there were no greenhouse effect at all, the earth’s average temperature would be about -18 degrees Celsius, 33 degrees colder than it is today. Carbon dioxide and methane may only account for ten of those 33 degrees, but it turns out you don’t need to make the earth colder than summer on Mars to royally fuck up human civilization. Upping the carbon dioxide is enough on its own.
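If you want to sanity-check that -18 figure, it falls out of the standard back-of-the-envelope energy balance (the simplified textbook version, not the full radiative physics): set the sunlight the earth absorbs equal to the heat it radiates as a blackbody, using the usual round numbers for the solar constant (S ≈ 1361 W/m²), the albedo (A ≈ 0.3), and the Stefan-Boltzmann constant (σ = 5.67 × 10⁻⁸ W/m²K⁴):

T_{\text{eff}} = \left( \frac{S(1-A)}{4\sigma} \right)^{1/4} = \left( \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right)^{1/4} \approx 255\ \text{K} \approx -18\,^{\circ}\text{C}

Everything above that temperature is the greenhouse effect at work.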

Second, more carbon dioxide in the atmosphere leads to more water vapor, too. As carbon dioxide traps heat and warms the planet, the oceans warm with it. As they do, more water evaporates and enters the atmosphere as vapor. And as Weber presumably learned in the fourth-grade science class he assures us he didn’t sleep through, hot air holds more vapor than cold air.

Third, there is a special, highly technical process in climatology that regulates the amount of water vapor in the atmosphere and prevents it from causing the kind of catastrophic problems carbon dioxide does. Water vapor can amplify already-existing warming trends, but it could never cause them, because this obscure process keeps it from building up in the atmosphere for long.

It’s called “rain”.

I wish this were the only time he mangled basic science to prove a point. But he seems incapable of getting anything right once he starts talking details. A few of his other errors include:

He confidently explains how increased glacial melt is a good thing, because it leads to a longer swell season downriver and helps people with their crops. Apparently, he hasn’t given any thought to what happens when there isn’t any more glacier to melt. Once it’s gone, there’s no more swell season, because there’s no more glacial melt: the water dries up as the glaciers disappear. Forget the flood danger: those farmers will lose their water supply.

In his conclusion, he describes a time he asked a climatologist to design a foolproof experiment to prove humans cause global warming, if money weren’t an issue. The climatologist can’t think of one. When pressed for an example, Weber says such an experiment could involve the Antarctic ice shelf. He explains that West Antarctica and the Peninsula should be warming more slowly, since they’re surrounded by moderating ocean and have more water vapor, and that East Antarctica should be warming the fastest, since it’s far from the oceans and cold enough to have little vapor. Since that’s not what’s happening, global warming seems dubious.

This is the opposite of the truth. We’ve known East Antarctica was more stable than the rest of the continent for decades, because of how the ice sheet interacts with bedrock. The bedrock is higher in the east, which means it’s harder for water to get underneath the ice sheet and accelerate the melting process. No such luck in West Antarctica. He got the expected outcome backwards.

Early on, he mocks his colleague for believing their Minnesota town is threatened by climate-change-related floods due to glacial melt. While he’s correct that glacial melt isn’t the problem, global warming actually does mean greater flood risk in the Midwest. Warmer ocean temperatures lead to larger storms and longer storm seasons, creating both more snowmelt in winter and more water on top of that in spring. This has caused record floods along the Mississippi and its tributaries in 2011, 2014, 2016, and 2018.

It’s also why there’s an active flood warning in his county right now.

[Screenshot: an active flood warning for Weber’s county, dated April 5, 2019]

Weber not only betrays a lack of understanding of science and the impacts of global warming, he also displays little knowledge of the scientific method. It’s telling that, when asked to give an example of a hypothetical experiment to prove or disprove global warming, he gives one data point that is still subject to environmental factors. He seems fuzzy on the difference between “experiment” and “argument”.

Perhaps I should cut Myles Weber some slack. After all, he’s not a climatologist. He’s not a meteorologist. He’s not a scientist at all. He’s an English professor. His scientifically illiterate colleagues are English professors. I doubt any of them have taken a science class since the Reagan administration.

What sets him apart from his colleagues is another philosophical virtue: intellectual modesty. They have it; he doesn’t. His colleagues know that they don’t know shit about climate science, so they blindly trust the countless researchers who do. Myles Weber, however, believes that he is more qualified to discuss the topic. He believes his skeptic’s mind gives him all the tools he needs to evaluate climate science, despite not referencing a single scientific study in his 3,000-word thinkpiece.

It’s unfortunate, because anyone who’s taught an undergraduate English course ought to know the dangers of confidently arguing something when all you’ve read about it is online summaries.

Play Stupid Games, Win Stupid Prizes

There is nothing remarkable about some dude writing an inane hot take about global warming. What struck me about this piece wasn’t the ignorance or the lack of self-awareness, but the pettiness.

It’s one thing to hypocritically complain about your colleagues in private. It’s another to do it publicly, on a major website at the heart of a political movement. He publicly shames his own coworkers for not remembering how greenhouses work, accuses them of being dumber than 10-year-olds, and does it all with the absolute certainty that his tragic misunderstanding of the sciences is correct.

In the words of one of the 21st century’s great philosophers, “Don’t be clowning a clown, you’ll wind up with a frown.” If Myles Weber wants our political discourse to be more petty, I am happy to oblige.

I’ve already mentioned his strange choice of intro topic: a six-year-old speech by Barack Obama that had almost nothing to do with global warming. But it’s even worse when you realize he’s an English professor who ought to know better. The introductory paragraph of an essay is supposed to pull the reader in, explain your thesis, and provide a road map for how you’re supporting that thesis. Two paragraphs in, I’m not curious, just confused. I don’t know what he’s arguing, or how he’s gonna support it.

In his second paragraph, he says the punishment for treason is “if I’m not mistaken, death by firing squad.” He’s mistaken. US law doesn’t specify a method of execution for each crime, and most states use lethal injection. Only three people have been executed by firing squad since 1976, all of them in Utah. So you can add the US penal code to the growing list of topics this man knows nothing about but has opinions on anyways.

One more stylistic note: he has a “here’s my point” sentence, which is lazy writing on its own. Even worse, it’s in the last section of his article, only three paragraphs from the end. You shouldn’t have to tell your audience what your point is 80% of the way through your essay. If they haven’t figured it out on their own by then, you have bigger problems.

Perhaps the most poignant section of Weber’s piece is a vignette about a time he had dinner with an academic acquaintance. Weber turns the conversation to the scientific illiteracy of his colleagues, and the acquaintance makes a dismissive statement and changes the subject. Weber calls him on the fallacy, and rather than engage in debate, the acquaintance moves on and doesn’t call him again.

You can feel the discomfort of his poor dinner date in that passage. Here is someone who probably just wants to network, stuck in a restaurant with a man who won’t stop ranting about global warming and complaining about how stupid his colleagues are. He tries to change the subject, but the tenured professor won’t let him.

Earlier, Weber refers to the colleague who forgot how greenhouses work as “our department’s self-appointed expert on climate matters”. From what he’s said of how he talks to his students and coworkers, I am 99% certain that the only self-appointed expert in his department is him.

Finally, I want to talk about the stylistic choice that makes Myles Weber’s piece the peak of pretentiousness in academia: the way he spells “academia”.

Or rather, doesn’t: he uses “academe” instead, a word I’d never seen before. At first, I thought, “oh, he’s probably using the pretentious original Latin term to be technically correct”. But I was mistaken. “Academia” isn’t just correct English, it’s correct Latin. So where did “academe” come from?

Well, it is technically a word, but not a common one. It’s a synonym of “academia”, and doesn’t add any nuance or specificity beyond that. There’s no reason to use it, unless you want to show off how many big words you know.

It’s also synonymous with “Pedant”.

Halfway Around the World

As fun as it is to mock morons for believing stupid things, this article and its many flaws should be sobering to all of us. Mark Twain once said “A lie can travel halfway around the world before the truth can get its boots on”, and never has that been more true than in today’s massively online age. This misinformed piece has nearly 400 comments. It’s been read by thousands, even tens of thousands, of people. It came out, they read it, internalized all the nonsense it spewed, and moved on, all in just a few days.

And it took me two weeks to research and finish this response.

Today, two weeks is an eternity. Two weeks is longer than the lifespan of most memes. Reddit threads and Facebook posts disappear from your news feed in a matter of hours. By the time other thinkers can prepare their critiques, the misinformation has already come and gone. It’s old news, accepted into the general narrative, and attempts to correct it come across as necroing old threads that aren’t relevant anymore.

But they are relevant. Just because hard research moves at a comparatively glacial pace doesn’t mean it’s any less crucial today. Misinformation thrives on our impatience. It’s how, six years later, a tenured professor still believes that Barack Obama politicized Sandy Hook to push climate action, even though he never did. It’s how this article manages to change minds and move the conversation even though none of it is true. It’s how I got away with using that quote about lies traveling halfway around the world, even though Mark Twain didn’t actually say that.

In the end, Myles Weber and I agree. Healthy skepticism is a good thing, and something isn’t true just because a bunch of experts in white coats say it is. As long as the internet is free, there will be morons out there using it to peddle pseudoscientific dogma, and when fact-checkers can’t keep up, we have an obligation to watch out for them on our own.

As a rule of thumb, they’re usually the people urging you not to believe the experts.

1 in 5: a VERY deep dive into campus sexual assault statistics

1 in 5 women will be sexually assaulted at some point during their time in college. It’s a shocking number, one that’s led to a lot of agonizing and discourse across the political spectrum and a variety of reforms put in place on campus. As it should. There is no society in which a statistic like that should be acceptable.

It’s also led to a lot of scrutiny from people who do not want to believe that sexual assault is such a problem in our universities. These people, mostly conservatives, point to a wide variety of perceived flaws in the original study to discredit its findings. They point to other studies with different methodologies that contradict the number. They accuse the authors of fudging the data to promote a political agenda. Debunking this study is a minor pastime in the right-wing media bubble, like shuffleboard or badminton. But do their critiques hold water? What’s the truth buried in the data?

Before we begin, two warnings. First: I’m not going to be double-checking their regression analyses here, but there’s no way to talk about this without covering at least a little math. So if you’re one of those people who can’t handle numbers, now would be a good time to leave. More importantly, though: I’m gonna be touching on some heavy shit here. There won’t be any graphic descriptions or stories. This is all numbers. But if that isn’t your thing, don’t feel bad noping out of this one.

1. The Survey

Generally speaking, when people cite “1 in 5”, they’re referring to this study by the Department of Justice. There are a lot of others, but all the ones I’ve seen use essentially the same methodology and weighting and find similar results, so I’m gonna focus on this one.

Basically, they took two unnamed large universities, one in the South and one in the Midwest, and emailed every student there the same survey asking about their history with sexual assault. They broke it down between forced (i.e. violent) and incapacitated (i.e. drunk) sexual assault, while excluding suspected but not confirmed accounts in the latter category. So already, there’s one way the numbers could be HIGHER than currently reported: not every victim is gonna be sure about what happened. They also looked at trends in attempted vs. completed assaults, and a number of other things.

After some weighting, they found that 19% of women reported experiencing an attempted or completed sexual assault during their time in college: 12.8% for attempted, 13.7% for completed. If you read YouTube comments (and you shouldn’t), you’ll see people use those numbers to argue that the study is somehow fraudulent: 12.8 + 13.7 = 26.5, not 19.0. Because apparently you can’t experience both. This points to another way the survey understates the total amount of sexual assault at universities, though it wouldn’t change the top-line number: they only ask whether someone has experienced these things, not how often. This is common across most of these surveys.
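For the commenters keeping score at home: the two groups overlap, and basic inclusion-exclusion recovers the overlap from the study’s own numbers, assuming the 19% figure is the share who experienced either (which is how the report presents it):

P(\text{both}) = P(\text{attempted}) + P(\text{completed}) - P(\text{either}) = 12.8\% + 13.7\% - 19.0\% = 7.5\%

In other words, about 7.5% of women reported experiencing both an attempted and a completed assault, which is all the “fraudulent” arithmetic actually reveals.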

There are other interesting findings in the data, some more surprising than others. It’s not uniformly distributed through time: there’s a distinct “rape season”, roughly corresponding with the fall semester and peaking in September and October. More than half of all sexual assaults are committed on a Friday or Saturday, which makes sense, since the most common setting is a party. All of those patterns are more pronounced for incapacitated sexual assault than forced, by the way.

The highest reported percentage is among seniors. There’s a credible argument that you should only be looking at them, because counting freshmen in prevalence rates across the entirety of the college experience seems dumb, but there’s a real risk of people forgetting about incidents earlier in their studies, or becoming less willing to count it as the victimization fades. Freshmen and sophomores are the most likely to experience this, so it’s important to include them. And before you say “who the fuck forgets being raped”, only a QUARTER of incapacitated sexual assault victims classified their experience as such in the survey.

That’s roughly what it covers. I’m going to move on to the flaws and tradeoffs in the study in a moment, but first I want to point out something that really bothers me. You might have heard some variation of “1 in 5 women and 1 in 27 men” in one of these articles or consent workshops. That’s not what the study finds. They found that 6.1% of men, roughly 1 in 16, had been a victim of sexual assault. I’m not sure where the 1 in 27 number comes from, but it’s exactly what would happen if you used this study as a source, then only counted completed sexual assaults for men and both attempted and completed assaults for women. If anybody knows better, please send me sources because I want to still have faith in humanity.
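For reference, since these 1-in-N figures get mangled so often, the conversion is just a reciprocal. The study’s 6.1% rounds to 1 in 16, and 1 in 27 corresponds to about 3.7%, which is the sort of figure a completed-only count would produce (that last step is my inference, not a number from the study):

\frac{1}{0.061} \approx 16.4 \qquad \frac{1}{27} \approx 3.7\%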

2. Shortcomings in the Dataset

While this study is good, it’s not perfect. There are several real issues with how it handles the numbers, and where it draws them from, that should concern anyone relying on them. That’s not to say it’s bullshit: these flaws are natural byproducts of well-intentioned decisions on the part of its authors. If they had done things differently, they would just have had other problems.

There is no way to get a perfect survey on a subject like sexual assault. Anyone who claims they have one isn’t arguing in good faith.

First off, let’s talk about the dataset. I’ve already snuck in one issue with it: the choice of universities. The authors only looked at two institutions in the country, and while they were geographically distinct, they were demographically similar. They were both large, with 30,000 and 35,000 students respectively. The results may therefore not be representative of the experience at significantly smaller universities. There are studies which HAVE looked at smaller colleges and found similar numbers, but with a smaller college comes a smaller sample to draw on, resulting in noisier data. You can mitigate this somewhat by including even more universities, but because of the significant overhead involved, most papers either use a smaller sample or make do with a lower response rate. More on that later.

The other issue is that they excluded all students under the age of 18. They kinda had to: otherwise they’d need parental consent for those people to respond. I’ve heard credible arguments that this exclusion could bias the results towards overestimating AND towards underestimating the prevalence. It’s hard to say. Either way, their absence is significant: between them and other groups excluded from the study, only half the students enrolled at either university were ever gonna be included in the data. With no information on the other 50% at all, it’s hard to say what effect, if any, this might have.

3. Shortcomings in the Procedure

The authors of this study didn’t fly out to these colleges and personally interview over 6,000 students. They sent each participant a survey via email and had them fill it out online. Data collection of that form tends to get a low response rate. After all, how likely are you to respond to a random email asking you to fill out a questionnaire? And indeed, that’s what we see: response rates of about 40% at both universities, higher for women and lower for men.

That would be fine, if who responds to a survey and who doesn’t were random. But we know that isn’t true. Racial and ethnic minorities consistently under-respond to polls of all forms, and online polls in particular tend to include more people for whom the subject matter is relevant. That factor can lead to significant and at times catastrophic overestimates of relatively rare phenomena.

Put another way: if you have been sexually assaulted, you are more likely to be interested in a survey about sexual assault than if you have not been. You’re more likely to read the email, and you’re more likely to fill it out.
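Here’s a minimal sketch of that effect, with made-up numbers rather than anything from the study. If victims open and complete the survey at a higher rate than non-victims, the measured prevalence lands above the true one:

# A toy model of nonresponse bias. All numbers are invented for illustration.

def observed_prevalence(true_prev, victim_rate, nonvictim_rate):
    """Prevalence you'd measure if victims respond at victim_rate
    and non-victims respond at nonvictim_rate."""
    victims_responding = true_prev * victim_rate
    nonvictims_responding = (1 - true_prev) * nonvictim_rate
    return victims_responding / (victims_responding + nonvictims_responding)

# Suppose the true rate is 12%, victims respond half the time,
# and non-victims respond a third of the time:
print(observed_prevalence(0.12, 0.50, 0.33))  # ~0.171

With those toy rates, a true prevalence of 12% gets measured as about 17%. The size of the inflation depends entirely on the gap between the two response rates, which is exactly the thing the researchers can’t observe.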

There are a lot of conflicting factors here. Victims of sexual assault may be less willing to answer, out of fear that they might be shamed or to avoid uncomfortable questions. And there are any number of ways for the topic to be important to you without being a victim yourself. You might have a friend who was assaulted, for instance, or simply be engaged with the issue.

But there are some aspects of the study that suggest there was an effect here. The response rates for men and women were markedly different: 42% for women, and only 33% for men. We also know that men are less likely to be victims of sexual assault. In fact, this is a consistent pattern across the board for studies that found a result somewhere in the 1-in-5 range. They’re mostly online surveys sent to students, and they almost always have a higher response rate among women than men.

Here’s where it gets complicated. There are ways to account for nonresponse bias, at least partially. The scientists who put this study together used three of them.

First, they compared the demographic information of their survey respondents to that of all the people who did not respond, and to that of the university as a whole. Wherever there was a demographic discrepancy, they gave more weight to the people underrepresented in the survey. For instance, nonwhite students were less likely to respond, so the answers from nonwhite students who DID respond counted for more.

They weighted by four factors: which university the respondent attended, their gender, their year of study, and their race/ethnicity. That list is pretty sparse. Most surveys would collect a lot more demographic info on each person, and then figure out what to weight from there. The problem is that it’s hard to balance that extra information with guarantees of anonymity. Especially with a topic as fraught as sexual assault, it’s crucially important that participants don’t feel their answers might get connected back to them. Even setting aside the ethical concerns, it can lead to lower response rates among people who HAVE been assaulted. Surveys without the same dedication to anonymity report significantly lower numbers, sometimes below 1%. So this is kind of a damned-if-you-do situation.
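In case the mechanics are unclear, here’s a toy version of that kind of demographic weighting. The groups, shares, and responses are hypothetical, not the study’s actual data:

# A toy version of post-stratification weighting (hypothetical numbers).

population_share = {"white": 0.70, "nonwhite": 0.30}  # full student body
sample_share = {"white": 0.80, "nonwhite": 0.20}      # survey respondents

# Each respondent in group g is weighted by population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
# {'white': 0.875, 'nonwhite': 1.5} -- nonwhite answers count for more

# Toy responses: (group, 1 if they reported an assault, else 0)
respondents = [("white", 0), ("white", 0), ("white", 1), ("nonwhite", 1)]

weighted = sum(weights[g] * r for g, r in respondents)
total = sum(weights[g] for g, _ in respondents)
print(weighted / total)  # ~0.576, versus 0.5 unweighted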

Second, they used something called the “continuum of resistance” model. Basically, it says that willingness to answer a survey isn’t a binary thing: the less likely you are to respond at all, the longer you’ll put off doing it. In other words, the demographics of the people who took the longest to fill out the survey probably match those of the people who didn’t fill it out at all, and their answers are probably similar too.

This effect doesn’t always show up, but it looks like it did here. Nonwhite students were more likely not to answer the questions, and also (somewhat) more likely to be late responders. And they found no significant difference in answers between late and early responders, which suggests that whatever nonresponse bias existed was fairly small.
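The check itself is simple enough to sketch with invented data: split respondents by how long they took to answer, and compare their rates. If the late wave looks like the early wave, the people who never answered probably would have looked similar too:

# A toy continuum-of-resistance check (data invented for illustration).

early = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # first-wave responses
late = [0, 0, 1, 0, 0, 0, 0, 0, 0, 1]   # responses after reminders

def rate(responses):
    return sum(responses) / len(responses)

print(rate(early), rate(late))  # 0.2 vs 0.2 -- similar, which is reassuring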

The third method they used is less reliable. Essentially, they did a follow-up survey of all people who didn’t respond to the first one (note: they still knew who did and didn’t respond because respondents got a small cash award and they could see who collected it, though not which responses corresponded to which person), and asked them why they didn’t respond. Most nonrespondents said they’d either never received the emails or weren’t sure if they had, and only a very small number said they didn’t respond because they hadn’t experienced sexual assault.

Personally, I wouldn’t have even included this section in the study. The response rate for this follow-up was abysmal: barely 10%, compared to nearly 40% for the original survey. It will also exhibit the same kinds of biases the first one did. For instance, people who would have been interested in the first study but just didn’t see it in their inbox will be more likely to respond to the second one than people who weren’t interested at all. I mean, do you want to fill out a questionnaire about why you don’t want to answer another questionnaire?

All in all, the authors of this study were meticulous and honest with their findings. They crafted their study to prioritize the privacy and comfort of their respondents, they were forthcoming about potential sources of error, and they made good-faith efforts to adjust for those sources wherever they could. I’ve read crappy studies, and I’ve read fraudulent studies. This one looks nothing like those.

However, there is only so much the authors can do to adjust for these factors. Their selection of methodology inherently comes with certain errors that are nearly impossible to correct. And while there is an argument that sexual assault victims would also be less likely to respond due to discomfort, the fact that there are many more nonvictims than victims means that even if that were true, the numbers would still probably be an overestimate. While the findings here are valuable, they are not gospel, and it’s likely they are inadvertently highballing it.

4. The Other Options Suck Too Though

Online surveys of university students are not the only way to answer this question. Conservatives often cite two other sources, both from the government. The first is the FBI Uniform Crime Report, which isn’t a survey at all. It’s a thorough accounting of every crime reported to the police in a given year. It generally finds somewhere around 100,000 reported rapes to have occurred each year, total, implying an almost minuscule percentage on campuses.

If you’ve made it this far into the post, you’ve probably already seen the problem with that sentence. The reporting rate for rape is really, really low. Only about a third of rape victims inform the police. And it gets worse. Until 2013, the UCR used the word “forcible” in its definition of rape. If it wasn’t forcible, it wasn’t counted. That would exclude many cases of coerced sex and even some cases of violent, forced sex (for instance, the people reporting it to the FBI won’t necessarily count marital rape, because people are awful).

One of my first jobs ever was data prep for the sexual assault division of my local District Attorney’s office. Even within the prosecutorial community, the FBI numbers are seen as comically low. We didn’t use them.

Instead, we relied on the National Crime Victimization Survey, the other source conservatives like to draw on. It accounts for the low reporting rate because it’s an actual survey of a randomized sample. It’s done through in-person or phone interviews, both of which significantly reduce the interest bias you find in their online counterparts (you’re more likely to answer the questions when there’s a person on the other end). And it finds that roughly half a million rapes occur each year. That’s more than the UCR reports, but it would still imply less than 1% for women on campus.

It has its own problems, though. The NCVS generally just asks “have you been raped?” or some variant, which we know from countless other studies doesn’t cover all or even most sexual assault victims. It’s likely that the NCVS is significantly lowballing the numbers as a result. They’ve tried to adjust for that in recent years, but most researchers outside the Bureau of Justice Statistics don’t think they’ve done enough, and I’m inclined to agree. Additionally, because the NCVS is explicitly done by a government agency, survivors will be less likely to respond to it for the same reasons they don’t report their assaults to the police. Think of it as the other side of the 1-in-5 studies. They are equally methodical, but where one errs on the side of overestimating when there’s a tradeoff to make, the other errs on the side of underestimating.

There are other studies, using some combination of in-person and phone interviews, online results, and other metrics, and different ways of determining whether or not a subject has been assaulted. Their results are all over the map, but tend to fall somewhere in between the NCVS and the 1-in-5 study. They also tend to fall on the high end of that range, so the real number is probably closer to 1-in-5 than to the <1% the NCVS reports. It could be 10 percent. It could be 15. We can’t be sure.

5. Why We Don’t Have a Perfect Study

By now, you might be thinking “okay, so why don’t we pull together some academics, do in-person interviews at a few dozen representative universities, and get some unimpeachable numbers?” After all, it’s not like any of the issues with these studies are inherent. There’s no law that says only the government can use direct sampling, or that you have to do everything online if you’re talking to college students.

The real obstacle here is money. Online surveys are prevalent because online surveys are cheap. Email is free, so the main expenses are a few grad students to crunch the numbers, the salary of whoever makes sure the study is ethical, and whatever incentive you give people for participating. That 1-in-5 study probably cost about $75,000.

For in-person or phone interviews, you have to pay people to ask the questions. The more folks in your sample, the more people you have to pay, for longer. Then you have to vet those people to make sure they know what they’re doing and won’t influence people’s responses. And you have to pay for travel to make sure those people get to the various campuses. And you have to figure out how to turn their responses into data for the computer, which means either expensive Scantron machines or paying more people for data entry, and then there’s the privacy concerns, because HTTPS doesn’t exist in the outside world, so somebody has to oversee the data entry….

You get the idea. All told, a study like that could easily set you back $15 million. That’s more than the total budget your average sociology department gets in a year.

There are also ethical concerns. Direct interviews may have a higher response rate, but they can also take an emotional toll on sexual assault victims, who will have to discuss their trauma with a complete stranger. Science is not done in a vacuum (except for astronomy), and you have to be careful not to hurt the very people you are studying in the process of learning from them. Additionally, $15 million is not a small amount of money to throw at a problem. It’s hard to justify spending that much on a fact-finding mission instead of, for instance, processing every untested rape kit in the state of California. There are better ways to allocate our resources here.

6. Why Is This What You’re Fixated On

These numbers get complicated, but at this point it’s fairly clear that the 1-in-5 statistic is not as reliable as we assume it is. It’s probably too high (note: while it’s less likely, it could also be too low), and when accounting for systemic errors it’s probably somewhere in the 1-in-10 to 1-in-6 range. Where you think it lands depends a lot on what specific choices your preferred researchers made when handling the technical details of their study. Even the 1-in-5 authors believe in a much more nuanced take on the data.

That’s a good thing. Your average discourse in the media and in our political forums will always be more simplistic than the careful quantitative analyses of peer-reviewed journals. Scientists and scientific studies will disagree with each other based on their particular decisions over their particular methodologies. And while we don’t know for sure what the percentage is, we’ve narrowed it down quite a bit.

Specifically, we’ve narrowed it down to “too damn high”. 1 in 5 is too damn high. 1 in 10 is too damn high. 1 in 20 is too damn high. Even the more conservative studies outside the NCVS give staggeringly high totals of sexual assaults in our universities. We may not know exactly, quantifiably how bad the problem is, but we know that it’s bad, and warrants immediate action.

But the critics of this study seem to think otherwise. They seem to think that if there are flaws in this paper, then there’s no problem at all. They believe that because the studies we cite can’t guarantee us total certainty, there is no value in what they say. It is the worst sort of scientific illiteracy. Even if you allow for significant errors, and if anything I’ve been too harsh on the original paper here, the numbers would STILL be staggeringly high. You could assume that there was not a single sexual assault victim in either of the two universities who didn’t fill out that survey, and you’d STILL find that about 3% of women were assaulted during their time there.

The science of accounting for sexual assault on campus is tricky and imprecise. There is a lot of room for careful critique of the numbers we have, and many questions for which we don’t yet have answers. But don’t let those uncertainties become a smokescreen for what we do know.

Megarachne

Forty years ago, scientists revealed a fossil of the largest spider that had ever lived. While never reaching the fame of a T-Rex, the foot-long arachnid became a mainstay of natural history museums around the world, unparalleled in its ability to creep out and disturb children and parents alike.

Until the moment it wasn’t.

Decades later, we were told that everything we thought we knew about this animal was incorrect. For twenty-five years, the museums, the documentaries, and the general paleontological community all got it wrong. How did they misidentify the creature in the first place, and why didn’t anyone notice until 2005?

It all begins in 1980. An Argentinian paleontologist named Mario Hünicken quietly announced an extraordinary discovery. He had found a fossil in the sediment of the Bajo de Véliz formation which dated back to the late Carboniferous, 300 million years ago. It was a foot long, and appeared to show most of the animal’s body and three of its legs.

It looked like this.

[Image: cast of the Megarachne fossil]

He named it “Megarachne”, or “Giant Spider”. It did the term justice. Based on this fossil, Megarachne was an ancient ancestor of modern-day tarantulas. With a length of 13 inches and a legspan of over 20, this specimen dwarfed even the largest spiders today, the Giant Huntsman and the Goliath Bird-Eater.

Using a technique called X-ray microtomography, essentially using X-rays to build a 3-D model of the fossil and reveal otherwise invisible details, Hünicken began to learn more about Megarachne’s biology. There were two visible eye sockets, but also a central protrusion that could be space for more. It had an extensive set of chelicerae (essentially, spider’s jaws) at the front, which we see as a bulbous protrusion. They were uncommonly wide and developed for a spider, and may have been large enough to have substantial pushing power on their own, giving the spider more ability to maneuver even large prey to its mouth.

While unimaginable today, a tarantula of this size would not have been that out of place on Carboniferous Earth. The atmosphere was far more oxygen-rich, which meant that arthropods could grow far larger than they can today. This was an era with dragonflies the size of pigeons and seven-foot millipedes. Megarachne would not have wanted for prey.

It’s unlikely this creature spun webs like the Golden Orb-Weaver. Instead, it would have built funnel-like nests from its webbing and waited for a lizard or amphibian or large insect to pass by. Lunging out, it would have used those scoop-like chelicerae to grab the animal, then inject it with paralyzing venom. Or, perhaps even more horrifying, it might have wandered the floors of the lush rainforests which covered the world at the time, stalking its prey like a tiger. If it did, it would have had to be careful: even this far into antiquity, the largest predators of the time could be as large as your average bear.

Unfortunately, it was hard to tell too much of this for sure. Megarachne was only known from this one fossil, and while several casts had been made for further study, the original, complete with the hidden details Hünicken uncovered, was essentially lost: sold to an anonymous collector and locked in a vault somewhere. It wouldn’t resurface for another 25 years.

As a result, a great deal of what we knew about this animal was conjecture, based on tiny details only Hünicken and a few others had seen. He wasn’t lying, to be clear: casts of fossils can only show the imprint of the object, which obscures subtle details within the structure.

Hünicken’s microtomography had revealed what appeared to be signature spider traits within the partial fossil: cheliceral fangs which could potentially deliver venom, even a sternum (the plate on the underside of a spider’s cephalothorax). They weren’t as well preserved as the rest, so you had to extrapolate a bit, but it looked like they were there. And besides, the shape of the animal was clearly that of a proto-tarantula.

And yet, there were some doubters. Dedicated arachnologists pointed out a number of inconsistencies between Megarachne and other spiders, most notably the suture between the abdomen (back end) and cephalothorax (front end). Sutures are basically immobile joints, and in humans only exist in our skull. It meant that this spider wouldn’t be able to bend its body the way tarantulas today do, like this:

[GIF: a tarantula flexing at the joint between cephalothorax and abdomen]

You can even see the suture for yourself in the cast at the top of this post: it’s directly above the curved line around the abdomen. There’s a white spot in the middle. It’s more visible in the original fossil, but you can make it out if you look closely.

Even Hünicken himself acknowledged the discrepancy, along with a few others. But they were easily explained away. 300 million years is a long time, after all.

Meanwhile, Megarachne was busy going as viral as a Carboniferous-era arthropod could go. We’ve always had a weakness for giant spiders, and here was a genuine monster. This thing had more than its fair share of museum displays, an unusual trait for its era. Before the dinosaurs, but after the absolute freak show of the earliest animals, the Carboniferous had one thing going for it in terms of public appeal: the giant dragonfly. In fact, much of the time before the dinosaurs gets skipped over when discussing the extraordinary variety of life on Earth, but I digress. The point is, by the 1990s the museums were in the pocket of Big Megarachne.


Let’s recap. The creature we’re talking about would have looked something like this:

[Image: scale reconstruction of Megarachne as it was originally interpreted]

That image is to scale, of course. It doesn’t look like a modern-day tarantula. The body is fused into one big chunk, the suture at work. You can also see the spatulate chelicerae: they’re the two giant growths below its eyes. The smaller limbs between them and the legs are called pedipalps. They’re used to help maneuver prey while eating, and also as tongues and noses. And also penises sometimes. Spiders are weird.

And for about 25 years, that was how it was. Megarachne was a bizarre ancestral spider, made gigantic by an oxygen-rich atmosphere and sporting a set of fangs the size of lightbulbs. A few spider-specialists in the community grumbled that it might have been a Ricinuleid or Solifuge, but for the most part, it was accepted.

And then everything changed.

In 2004, another fossil was found in the same rock formation. It was unquestionably Megarachne: one telltale feature was the identical, and unusual, eye formation. It also looked a lot less like a spider. It’s the middle one in this picture:

[Figure: fossil comparison from the 2005 paper; Megarachne is the middle specimen]

By February of the next year, a new team of paleontologists, with Hünicken himself among the authors, published a new paper: Megarachne was not a spider. It was a eurypterid.

The eurypterids have long since vanished from the Earth, so we don’t have any experience with them like we do with spiders. For that, we should be eternally grateful. Also called “sea scorpions”, eurypterids dominated the seas from about 450 to 300 million years ago, and lasted for a long time after that. They were filter feeders and apex predators, and ranged from a few inches long to the size of an American alligator. They would have looked something like this:

[Image: reconstruction of the eurypterid Pterygotus]

In a single paper, Megarachne lost all its mojo. Not only was it no longer a record-holding behemoth, it was fairly small for its order. Nor was it a sophisticated ambush predator: it fed itself by swimming along riverbeds, using its many limbs to capture the tiny invertebrates that lived in the mud and silt.

The scientific community didn’t question the results. The evidence was blatantly obvious, and Hünicken himself had co-authored the paper. Indeed, the speed with which the consensus on the animal changed is an example of science’s greatest quality: the ability to recognize when it is wrong and self-correct.

Yet the legacy of the mistake lives on to this day. Species names are hard to change once they’ve been assigned, so Megarachne retained its name. There is now a bottom-feeding eurypterid, not unlike a lobster, whose name directly translates to “giant spider”. But this paper also came at an inopportune moment for a much larger entity: the British Broadcasting Corporation. Some years before, the BBC had aired the hugely successful “Walking With Dinosaurs”, a high-budget docuseries narrated by Kenneth Branagh that won three Emmys. It in turn spawned a sequel, “Walking with Beasts”, that catalogued the time between the Cretaceous extinction and today, and a prequel, “Walking With Monsters”, which would air later that year.

“Walking With Monsters” is a masterpiece of the genre, and I encourage anyone interested to watch it. Like its predecessors, it’s rivaled only by big-budget action movies in the quality of its special effects, but with a degree of accuracy unparalleled by any cinema. The producers consulted 600 paleontologists, paleobotanists, geologists, and even astronomers to ensure that its depiction of a billion-year story was scrupulously accurate to the scientific consensus. They devoted segments to every era of life’s history before the dinosaurs, and with each one showed not only the path of evolution in our earliest ancestors but also the signature creatures of each epoch. And when it came time to pick the signature animal for the Carboniferous, there was only one natural choice: the largest spider that ever lived.

Selden, Corronca, and Hünicken published their paper several months before “Walking With Monsters” aired, but it was still too late to change it. Megarachne was the star of the entire Carboniferous segment; there was no way to easily cut it out. Nor could it be replaced. They could find a new animal, but that would mean consulting all the experts again, even more money for the special effects budget, a new script for an entirely different animal, hauling Kenneth Branagh back into the soundstage, and countless other barriers. It was either cut a hundred million years from the story of life on Earth, or bite the bullet and air the episode.

The BBC opted for the latter. It is, to my knowledge, the only time the network has ever knowingly and intentionally aired fake news.

And so, despite being left behind by science, Megarachne lives on, not in the literature or the museum exhibitions, but in the minds of a generation of impressionable science nerds who saw it fight for survival on the television.

I am not ashamed to say that I am one of them.