How a Paper Proving ESP is Real Upended Social Science Research

I recently re-read a lengthy but excellent 2017 article from Slate by Daniel Engber that details the story of social psychologist Daryl Bem, author of the controversial 2011 paper in the Journal of Personality and Social Psychology, “Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect.” It’s better known as the paper that presented strong experimental evidence, from nine studies involving over 1,000 subjects, that so-called “psi” phenomena like extra-sensory perception (ESP) are actually, provably real.

Photo by pawel szvmanski on Unsplash

If you’ve never heard of this paper or the storm of activity that followed in its wake, you should really read the Slate piece. It’s not only an excellent and engaging read, but it also provides a well-rounded look at the many problems social science research has faced over the last decade, particularly in psychology. Bem’s study was something of a puzzle when it was released because it was methodologically sound yet proved something that ought to be impossible. Some took it at face value and suggested that perhaps human beings really do experience psi phenomena like precognition and ESP. Others took it as a sign that something was deeply flawed in how social scientists were collecting and analyzing research data, and this sparked an enormous skeptical movement in the social sciences that questioned many established, highly-cited studies sharing one major problem: they couldn’t be reproduced.

The article notes that the wildfire spread to many other fields, including medicine, neuroscience and chemistry, and even the Wikipedia article on the replication crisis cites problems in additional fields like economics, sports science, water resource management, pharmaceuticals and, yep, marketing.

The social sciences are still reeling from this crisis. Initiatives like the Many Labs 2 project have attempted to replicate many major studies with varying results (here’s a good summary of their progress and challenges), but even when studies can’t be replicated, there are often counter-arguments about why the replication failed.

The biggest problem is that the tools of research and experimental design can be easily – and sometimes unintentionally! – gamed: with QRPs (questionable research practices), with statistical oversights, and with artifacts of imperfect research design that compound when studies are repeated under slightly tweaked conditions. This is not because social scientists aren’t trying to be rigorous. Rather, they are often trying to generalize behaviors in a population with a high level of variability, and any experimental design is bound to overlook, ignore or assume away some factor that may have a major impact on the findings.
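
To make that concrete, here’s a minimal simulation of one common QRP, optional stopping: peeking at the data and stopping as soon as a test comes back significant. This sketch is my own illustration rather than anything from Bem’s paper or the Slate article, and it assumes Python with NumPy and SciPy; the sample sizes and peek schedule are arbitrary. Even when there is truly no effect at all, the practice pushes the false-positive rate well above the nominal 5%.

```python
# Illustrative sketch (not from the article): how "optional stopping" inflates
# false positives. Both groups are drawn from the SAME distribution, so any
# "significant" result is a false alarm; peeking as data accumulate makes such
# false alarms far more common than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_study(max_n=100, peek_every=10, alpha=0.05):
    """One simulated two-group study with no true effect,
    running a t-test after every `peek_every` subjects per group."""
    a = rng.normal(0, 1, max_n)
    b = rng.normal(0, 1, max_n)  # same distribution: the null hypothesis is true
    for n in range(peek_every, max_n + 1, peek_every):
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < alpha:
            return True  # researcher stops and reports a "significant" finding
    return False

n_sims = 5000
false_positives = sum(one_study() for _ in range(n_sims))
print(f"False-positive rate with peeking: {false_positives / n_sims:.1%}")
# Typically lands in the 15-20% range, versus ~5% for a single pre-planned test.
```

Notice that no single step in that loop looks dishonest, which is exactly why this sort of practice is so easy to stumble into unintentionally.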

Photo by Rosalind Chang on Unsplash

Another problem is the very nature of academia, which stresses publishing results in reputable journals that are themselves prone to bias and human error. Journals tend to favor articles that are novel and challenge the conventional understanding over those that describe what’s already known, and they also tend to publish studies with positive results rather than negative ones. This creates huge incentives for academic researchers either to design contrived scenarios with a higher chance of success or to mask their actual research process, making it seem as if every choice was the result of careful planning rather than an alteration of their data collection or analysis midway through.

Fortunately, one reproducibility study found that scientists who work in a field are generally pretty good at predicting which studies will replicate and which won’t. People who spend a lot of time conducting and reviewing research tend to know a weak or questionable study when they see it. The problem is that knowing a study is flawed and showing that it’s flawed are two entirely different things, particularly in a world where unusual scientific findings are all too often trumpeted in a press release and reported all over the sensational edges of the media before they’ve faced the scrutiny of the broader scientific community.

And for those of us who are applied researchers rather than academics, it’s a sobering reminder to stay vigilant about treating research as a tool that provides information, not an oracle that delivers unquestionable answers. It’s easy to forget that even well-designed research can produce Type I or Type II errors that can only be uncovered through repetition, and in our zeal to get to insights, we may overlook the many, many opportunities for confirmation bias and a personal desire for success to lead us to bad assumptions that ultimately render our research useless.
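
As a rough, hypothetical illustration of those two error types (again assuming Python with NumPy and SciPy; the effect size, sample size and number of simulations are arbitrary choices of mine), the sketch below shows how often a small study misses a real but modest effect, how often it “finds” an effect that isn’t there, and how sharply requiring one independent replication shrinks that false-positive rate.

```python
# Illustrative sketch: Type I errors (false alarms), Type II errors (misses),
# and why an independent replication is such a strong filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_per_group, n_sims = 0.05, 30, 5000

def significant(effect_size):
    """Simulate one two-group study and report whether p < alpha."""
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(effect_size, 1, n_per_group)
    return stats.ttest_ind(a, b)[1] < alpha

type_1 = np.mean([significant(0.0) for _ in range(n_sims)])      # false alarm rate
type_2 = np.mean([not significant(0.3) for _ in range(n_sims)])  # missed a real, modest effect
fluke_that_replicates = np.mean(
    [significant(0.0) and significant(0.0) for _ in range(n_sims)]
)  # a false positive that also survives an independent replication

print(f"Type I error rate (no real effect):         {type_1:.1%}")   # around 5%
print(f"Type II error rate (d = 0.3, n = 30/group): {type_2:.1%}")   # often 70%+
print(f"False positive surviving a replication:     {fluke_that_replicates:.2%}")  # roughly 0.25%
```

It’s a toy model, of course, but it captures why replication efforts like Many Labs 2 matter so much more than any single headline finding.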


We hope this article has been helpful to you, and we want you to know that we’re here to be a resource for anything you’d like to know about marketing research!

Please feel free to check out our other articles, watch our YouTube channel, connect with us on LinkedIn or Facebook, or contact us. We’d love to hear from you!