Thanks for posting this, Zachariah McNaughton. This reply has two parts specific to your comment:

I trust that David Lisak is an expert in his field. But I also suspect he manipulates data (something he has been publicly criticized for) to shape a particular narrative, one that earns him college speaking events at places like my alma mater and paid expert work in sexual assault court cases.

David Lisak's Data: He Minimizes the True "False" Rate of Rape Claims

Lisak wrote that 5.9% of 136 cases were "false" accusations, but Cathy Young, quoted below, shows how that figure may be misleading:

A similar pattern can be found in a recent study often cited as evidence of the rarity of false accusations: a 2010 paper by psychologist David Lisak, which examined all 136 sexual assault reports made on a northeastern university campus over a 10-year period. For 19 of these cases, the files did not contain enough information to evaluate the outcome. Of the 117 cases that could be classified, eight — or 6.8 percent — were determined to be false complaints; that conclusion was reached when there was substantial evidence refuting the complainant’s account. But does it mean that 93 percent of the reports that could be evaluated were shown to be truthful?

More than 40 percent of the reports evaluated in Lisak’s study (excluding the ones for which there was not enough information to classify them) did result in disciplinary or criminal charges. However, 52 percent were investigated and closed. Lisak told me that the vast majority of these complaints did not proceed due to insufficient evidence, often because the complainant had stopped cooperating with investigators. His paper also mentions another type of complaint that did not proceed: cases in which “the incident did not meet the legal elements of the crime of sexual assault.” Lisak was unable to provide any specifics on these incidents. But, in other known cases, such allegations stem from conflicting definitions of what constitutes rape and consent — particularly in sexual encounters that involve alcohol.

I found this section illuminating. It inspired me to take a slightly different approach to Lisak's sexual assault data: separating "KNOWN" outcomes from "UNKNOWN" outcomes, the latter being cases where no determination was made.

Psychologist David Lisak's study found that about 6.8% of 117 cases were conclusively false. But his data also show that 52% were closed for insufficient evidence. Lisak's error, I believe, is dividing the KNOWN FALSE outcomes by ALL cases. I argue he should run his numbers within the KNOWN set exclusively. So let's tabulate those numbers quickly.

Here is exactly how I broke down the data, using Lisak's numbers and categorizations with my "KNOWN" or "UNKNOWN" flag added (the counts follow from his published percentages of the 117 classifiable cases):

- Resulted in disciplinary or criminal charges (KNOWN): 48 cases (~41%)
- Determined to be false reports (KNOWN): 8 cases (6.8%)
- Investigated and closed for insufficient evidence (UNKNOWN): 61 cases (52%)

Totaling the "known" outcomes gives 56. Divide the 8 "false" reports by that 56 and you get roughly 14%. I reckon this is closer to the true "false" rate for this fairly large sample. It's not perfect, but I argue it's a fairer analysis than the one Lisak performed.
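The recalculation above can be checked with a short script. Note that the per-category counts (48 charged, 61 closed for insufficient evidence) are my reconstruction from Lisak's published percentages of the 117 classifiable cases, not figures he lists directly:

```python
# Recompute the "false report" rate over KNOWN outcomes only.
# Figures from Lisak (2010): 117 classifiable cases, 8 confirmed false (6.8%),
# and ~52% (61 cases) closed for insufficient evidence (outcome UNKNOWN).
total_classifiable = 117
false_reports = 8
insufficient_evidence = 61                     # outcome never determined

known_outcomes = total_classifiable - insufficient_evidence   # 56 cases

rate_all_cases = false_reports / total_classifiable    # Lisak's denominator
rate_known_only = false_reports / known_outcomes       # known-set denominator

print(f"False rate over all classifiable cases: {rate_all_cases:.1%}")   # 6.8%
print(f"False rate over KNOWN outcomes only:    {rate_known_only:.1%}")  # 14.3%
```

The only change from Lisak's arithmetic is the denominator: cases with no determined outcome are excluded rather than counted as "not false."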

I posit that Lisak himself, and others who use his data, misrepresent the "false claim" percentage by putting it as low as 5.9%. I'm tempted to say this is done for political reasons, and few want to push back on such statistical manipulation.

Put another way: would you say 6% of 100 women have red hair if you could only "see" behind the curtain for half (50) of them and counted 6 redheads? No. Basic common sense tells you there are a few redheads in the other half, too, especially if the groups were formed randomly. But what Lisak and others effectively say is, "No, only 6% of these 100 women are redheads. End of story." I leave it to others to say whether this is dishonesty or merely an oversight.

Again, this is according to a man called into rape trials to debunk the "women are lying" myth and to state how rare false reports are. Lisak's name came up quite often in Jon Krakauer's acclaimed book about the college rape crisis, Missoula, which I read four years ago. More than half of the cases (52% of the 117) did not move forward due to insufficient evidence, according to Lisak. Thus, it's possible the percentage is even higher than 14%.

Inarguably, more work needs to be done on this; but unsurprisingly, given how taboo this topic is, few new conclusions have been rendered.

Even while writing this comment (sorry it's long), I found this critique of David Lisak in Reason, where Ms. Young herself often writes. I admit that four years after reading the book above, I'm a tad more skeptical of its narrative. Certainly, police and our institutions can do better at not stigmatizing victims of sexual assault and at providing justice. But we shouldn't be pushing false data to fit a preordained narrative.

Q: Why isn't this more well known? I dug down and found solid evidence that 1 in 4 rape accusations could be false. From your source:

https://www.ncjrs.gov/txtfiles/dnaevid.txt

Some already have used the cases discussed in this report to argue that hundreds more innocent defendants are in prison. They contend that the current “exclusion” rate for forensic DNA labs — close to 25 percent — suggests that a similar percentage of innocent defendants were wrongly convicted before the availability of forensic DNA typing. Unfortunately, too many variables are contained in the “exclusion” rate to draw any meaningful conclusions from it. Furthermore, nothing about the cases reviewed here necessarily supports such a conclusion.

The only clear conclusion that can be drawn is that this new technology can be used within the existing legal framework to undo past injustices. In other words, both the science and the legal system worked in these cases! This report provides additional insights into how such cases can be identified in the future.

AND

Every year since 1989, in about 25 percent of the sexual assault cases referred to the FBI where results could be obtained (primarily by State and local law enforcement), the primary suspect has been excluded by forensic DNA testing. Specifically, FBI officials report that out of roughly 10,000 sexual assault cases since 1989, about 2,000 tests have been inconclusive (usually insufficient high molecular weight DNA to do testing), about 2,000 tests have excluded the primary suspect, and about 6,000 have “matched” or included the primary suspect. The fact that these percentages have remained constant for 7 years, and that the National Institute of Justice’s informal survey of private laboratories reveals a strikingly similar 26-percent exclusion rate, strongly suggests that postarrest and postconviction DNA exonerations are tied to some strong, underlying systemic problems that generate erroneous accusations and convictions.
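The 25 percent figure in the excerpt above can be reproduced from the raw counts the report gives (a quick sanity check on the quoted arithmetic, not part of the report itself):

```python
# FBI sexual assault DNA testing since 1989 (counts quoted from the NCJRS report).
total_cases = 10_000
inconclusive = 2_000   # insufficient high-molecular-weight DNA for testing
excluded = 2_000       # primary suspect excluded by DNA
matched = 6_000        # primary suspect "matched" or included

conclusive = total_cases - inconclusive        # 8,000 tests with usable results
exclusion_rate = excluded / conclusive         # 2,000 / 8,000

print(f"Exclusion rate among conclusive tests: {exclusion_rate:.0%}")  # prints 25%
```

As with the Lisak numbers above, the denominator matters: the 25 percent rate is computed over conclusive tests only, with the 2,000 inconclusive results set aside.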


Writer. Researcher. Designer. Human seeking better outcomes. Also searching for relevant facts and logical arguments above expedient or political narratives.

David Shuey
