David Shuey
6 min read · Apr 12, 2020


Thanks for posting this, Zachariah McNaughton. There are two parts to this reply specific to your comment:

  1. I confirm the David Lisak data, but offer my own re-evaluation of it (which I believe I sent to Cathy Young before, or posted as a comment on one of her articles).
  2. I pull the key quote from that National Criminal Justice Reference Service (NCJRS) source you mentioned. It’s so shocking it leads me to simply ask, “Why isn’t this better known?” I’ve done some research on this topic of false allegations of sexual assault (and, often, the poor treatment of women in the process, too), but I’ve never come across this source. It appears to indicate that 25% of rape kits tested for DNA may very well exonerate the accused. If you or anyone else has feedback on where you’ve seen it, or other general takeaways, feel free to pass along.
David Lisak, I trust, is an expert in his field. But I also suspect he manipulates data — which he’s been criticized for publicly — to formulate a particular narrative that gets him college speaking events at places like my alma mater and paid work as an expert witness in sexual assault court cases.

David Lisak Data — He Minimizes the True “False” Rate of Rape Claims

Lisak wrote that 5.9% of 136 cases were “false” accusations, but Cathy Young, quoted below, shows how that may be misleading:

A similar pattern can be found in a recent study often cited as evidence of the rarity of false accusations: a 2010 paper by psychologist David Lisak, which examined all 136 sexual assault reports made on a northeastern university campus over a 10-year period. For 19 of these cases, the files did not contain enough information to evaluate the outcome. Of the 117 cases that could be classified, eight — or 6.8 percent — were determined to be false complaints; that conclusion was reached when there was substantial evidence refuting the complainant’s account. But does it mean that 93 percent of the reports that could be evaluated were shown to be truthful?

More than 40 percent of the reports evaluated in Lisak’s study (excluding the ones for which there was not enough information to classify them) did result in disciplinary or criminal charges. However, 52 percent were investigated and closed. Lisak told me that the vast majority of these complaints did not proceed due to insufficient evidence, often because the complainant had stopped cooperating with investigators. His paper also mentions another type of complaint that did not proceed: cases in which “the incident did not meet the legal elements of the crime of sexual assault.” Lisak was unable to provide any specifics on these incidents. But, in other known cases, such allegations stem from conflicting definitions of what constitutes rape and consent — particularly in sexual encounters that involve alcohol.

I found this section illuminating. It inspired me to take a slightly different approach in running Lisak’s sexual assault data: separating “KNOWN” outcomes from “UNKNOWN” outcomes, the latter being cases where no determination was made.

Psychologist David Lisak’s study found that about 6.8% of 117 cases were conclusively false. But his data also shows that 52% were closed for insufficient evidence. Lisak’s error, I believe, is dividing the KNOWN FALSE outcomes by ALL cases. I argue he should run his numbers within the KNOWN set exclusively. So let’s tabulate those numbers quickly:

  • 6.8% ÷ 48% ≈ 14% of KNOWN outcomes are decidedly false.

Now I’ll show exactly how I broke down the data. Here are Lisak’s numbers and categories (I flag each as “KNOWN” or “UNKNOWN”):

  • 8 False reports (KNOWN)
  • 48 Cases proceeded to prosecution or disciplinary action (KNOWN)
  • 61 Cases did not proceed to prosecution or disciplinary action (UNKNOWN)
  • 19 Insufficient information to be coded (UNKNOWN)

So looking above, you can total the “known” outcomes to get 56. Divide the 8 “false” reports by that and you get 14%. I reckon this is close to the true “false” rate for this fairly large sample. It’s not perfect, but I argue it’s a fairer analysis than the one Lisak performed.
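To make the arithmetic explicit, here is a minimal Python sketch of both calculations using Lisak’s published counts (the variable names and the “KNOWN”/“UNKNOWN” labels are my own shorthand, not Lisak’s):

```python
# Lisak (2010): 136 campus sexual assault reports over a 10-year period
cases = {
    "false_reports": 8,       # KNOWN: substantial evidence refuted the account
    "proceeded": 48,          # KNOWN: prosecution or disciplinary action
    "did_not_proceed": 61,    # UNKNOWN: investigated and closed
    "insufficient_info": 19,  # UNKNOWN: files too thin to classify at all
}

total = sum(cases.values())                          # 136
classifiable = total - cases["insufficient_info"]    # 117
known = cases["false_reports"] + cases["proceeded"]  # 56

lisak_rate = cases["false_reports"] / classifiable   # Lisak's framing: 8 / 117
known_rate = cases["false_reports"] / known          # my framing: 8 / 56

print(f"False rate among classifiable cases: {lisak_rate:.1%}")  # 6.8%
print(f"False rate among KNOWN outcomes:     {known_rate:.1%}")  # 14.3%
```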

I posit that Lisak himself, and others who use his data, misrepresent the “false claim” percentage by putting it as low as 5.9%. I’m tempted to say this is done for political reasons, and few want to push back on such statistical manipulation.

Put another way: Would you say 6% of 100 women have red hair if you could only “see” behind the curtain for half (50) of them and saw 6 women with red hair? No. Basic common sense tells you there are a few redheads in the other half, especially if the groups were formed randomly. But what Lisak and others do is say, “No, only 6% of these 100 women are redheads. End of story.” I leave it to others to say whether this is dishonesty or merely an oversight.

Again, this is according to a man called into rape trials to debunk the “women are lying” myth and to state how rare false reports are. Lisak’s name came up quite often in Jon Krakauer’s acclaimed book about the college rape crisis, Missoula, which I read four years ago. More than half of the cases (52% of the 117) did not move forward due to insufficient evidence, according to Lisak. Thus, it’s possible the percentage could be even higher than 14%.
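One way to see the range of possibilities is a small sensitivity check. The assumed false rates for the 80 unknown-outcome cases below are entirely my own hypotheticals, not anything Lisak reports: his 5.9% figure corresponds to assuming none of the unknowns were false, while any nonzero rate among them pushes the overall figure up.

```python
# Sensitivity sketch (hypothetical assumptions, not Lisak's data):
# apply an assumed false rate to the 80 unknown-outcome cases
# and see what the overall rate across all 136 cases would be.
known_false, known_total, unknown_total = 8, 56, 80

for assumed_unknown_rate in (0.00, 0.143, 0.25):
    total_false = known_false + assumed_unknown_rate * unknown_total
    overall = total_false / (known_total + unknown_total)
    print(f"unknowns {assumed_unknown_rate:.1%} false -> overall {overall:.1%}")
# 0% false among unknowns   -> 5.9% overall (Lisak's implicit assumption)
# 14.3% (same as knowns)    -> 14.3% overall
# 25% (skewed toward false) -> 20.6% overall
```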

Inarguably, more work on this needs to be done; but unsurprisingly, given how taboo this topic is, few new conclusions have been rendered.

Even while writing this comment (sorry it’s long), I found this critique of David Lisak in Reason, where Ms. Young often writes herself. I admit that, four years after reading the book above, I’m a tad more skeptical of its narrative. Certainly, police and our institutions can do better at not stigmatizing victims of sexual assault and at providing justice. But we shouldn’t push false data to fit a preordained narrative.

Q: Why isn’t this better known? I dug down and found this solid evidence that 1 in 4 rape accusations could be false. From your source:

https://www.ncjrs.gov/txtfiles/dnaevid.txt

Some already have used the cases discussed in this report to argue that hundreds more innocent defendants are in prison. They contend that the current “exclusion” rate for forensic DNA labs — close to 25 percent — suggests that a similar percentage of innocent defendants were wrongly convicted before the availability of forensic DNA typing. Unfortunately, too many variables are contained in the “exclusion” rate to draw any meaningful conclusions from it. Furthermore, nothing about the cases reviewed here necessarily supports such a conclusion.

The only clear conclusion that can be drawn is that this new technology can be used within the existing legal framework to undo past injustices. In other words, both the science and the legal system worked in these cases! This report provides additional insights into how such cases can be identified in the future.

AND

Every year since 1989, in about 25 percent of the sexual assault cases referred to the FBI where results could be obtained (primarily by State and local law enforcement), the primary suspect has been excluded by forensic DNA testing. Specifically, FBI officials report that out of roughly 10,000 sexual assault cases since 1989, about 2,000 tests have been inconclusive (usually insufficient high molecular weight DNA to do testing), about 2,000 tests have excluded the primary suspect, and about 6,000 have “matched” or included the primary suspect. The fact that these percentages have remained constant for 7 years, and that the National Institute of Justice’s informal survey of private laboratories reveals a strikingly similar 26-percent exclusion rate, strongly suggests that postarrest and postconviction DNA exonerations are tied to some strong, underlying systemic problems that generate erroneous accusations and convictions.
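To be clear about where that 25 percent comes from: the rate is computed over the tests “where results could be obtained”, so the inconclusive tests drop out of the denominator. A quick back-of-the-envelope check (my own arithmetic on the report’s round numbers, not a calculation NCJRS shows):

```python
# NCJRS-reported FBI figures: roughly 10,000 sexual assault cases since 1989
inconclusive = 2_000  # usually insufficient high molecular weight DNA
excluded = 2_000      # forensic DNA testing excluded the primary suspect
matched = 6_000       # DNA "matched" or included the primary suspect

conclusive = excluded + matched          # 8,000 tests with a usable result
exclusion_rate = excluded / conclusive   # 2,000 / 8,000 = 25%
print(f"Exclusion rate where results could be obtained: {exclusion_rate:.0%}")
```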
