Extrasensory perception - proven by peer review?

Author: Emily Johnson

Do you believe in ESP (extrasensory perception)? The Journal of Personality and Social Psychology apparently does. It has published a paper on ESP; its author, Daryl J. Bem of Cornell University, claims to have proved that ESP exists.

The current scientific framework gives us no plausible explanation for how ESP could work. If it does exist, it would force us to question our entire scientific outlook. It would force a revolution, in science and in human thought, as great as any that has gone before. But the paper appears in a proper peer-reviewed journal, and a respected one, so the claim must be true. Mustn’t it?

Zener cards used in the early twentieth century for experimental research into ESP.


Well, not necessarily. The paper’s ‘proof’ of ESP is that its results are shown to be significant at the 5% level. This is indeed the level that is normally accepted for a scientific paper to have reasonably demonstrated its claimed result. However, I put the word ‘proof’ in inverted commas because 5% significance is not remotely the same thing as proof. 5% significance means that just five times out of 100 - one time in 20 - we would expect to see this result purely by chance, even if no real effect exists. In other words, if a journal publishes 20 scientific papers, all claiming to have ‘proved’ some relationship, one of those ‘proofs’ will be accidental, not real, and caused merely by coincidence. Do the experiment again, and it is unlikely to give that same answer.
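The one-in-20 arithmetic is easy to check with a short simulation - an illustrative Python sketch, not part of the original article. We repeatedly run batches of 20 experiments in which no real effect exists at all, and count how many clear the 5% hurdle by luck alone:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Simulate many batches of 20 "experiments" in which the effect being
# tested is entirely absent.  Each experiment still reaches 5%
# significance with probability 0.05, purely by chance.
batches = 10_000
false_positives = sum(
    sum(1 for _ in range(20) if random.random() < 0.05)
    for _ in range(batches)
)

# On average, about one experiment per batch of 20 is a false positive.
print(false_positives / batches)
```

The printed average hovers around 1.0: out of every 20 null experiments tested at the 5% level, roughly one comes up ‘significant’ by coincidence.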

As a guide to that one-in-20 level, that particular issue of the journal (Vol 100(3), March 2011; Bem’s paper occupies pages 407-425, doi: 10.1037/a0021524) contained 12 scientific papers. So we would expect, on average, 12 × 0.05 = 0.6 of them - roughly one every other issue - to be wrong. The journal is monthly, so it publishes about 144 papers a year. If all claim significance at the 5% level, we would expect about seven of those papers (144 × 0.05 ≈ 7) to be wrong – wrong in the sense that their results are untrue or unproven. The situation is exactly the same for other peer-reviewed journals. So for a paper to be published in a peer-reviewed journal does not mean that its result has been proven.
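The expected count of false positives is just the number of papers multiplied by the significance level - the mean of a binomial distribution. A quick check of the figures above (a sketch, assuming every paper tests at exactly the 5% level):

```python
# Expected false positives among n independent tests at level alpha
# is n * alpha (the mean of a Binomial(n, alpha) distribution).
alpha = 0.05

per_issue = 12 * alpha    # the 12 papers in the March 2011 issue
per_year = 144 * alpha    # roughly 144 papers a year

print(round(per_issue, 2))  # 0.6  -- about one wrong paper every two issues
print(round(per_year, 2))   # 7.2  -- about seven wrong papers a year
```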

The journal itself of course is well aware of this, and does not seem convinced of the reality of ESP, even though it published the paper; in the same issue, and immediately following, it published a rebuttal of the claim. The whole thing has caused a furore over whether or not the article should have been published in the first place.

Numerous other articles have been published recently which have also caused a stir about whether or not they have been reviewed properly. Obviously no review process is going to be infallible but the current one has allowed articles such as that ‘proving’ a link between MMR and autism to slip through – an article which caused dreadful worry to parents, a huge decline in vaccinations, real risks to the lives of children, and possibly some deaths; and this even though the link was entirely spurious. So is the current peer-review system the right one?

Incidents like these raise questions about the peer review process and whether in an increasingly online world, the way that papers are peer reviewed should be changed.

Was it right that the ESP paper should have been published? The Journal of Personality and Social Psychology is a scientific journal, and intended to be critically read by those who have been trained in the area. No scientific article is a definitive view of any area of research designed to be read in isolation.

By having access to this article, scientists can now attempt to replicate what has been done on ESP. If the results are not reproduced, the original finding can be put down to chance. If, on the other hand, the results are replicated, it may be time to revise our theories. This, in my opinion, is the reason this paper should have been published. If the effect is replicable, then we will have to revolutionise our way of thinking. Science needs to investigate all claims without the bias of assuming that the current state of affairs is the correct one.

But, is statistical significance enough to justify publication and is a reputable journal the place for this to be published? The readers of journals expect well-researched articles based on firm evidence, and that this quality has been established by the peer review process.

A potential compromise would be to publish online first, allowing online comments before the paper goes into print - giving scientific peers a chance to respond and providing a potentially more thorough peer review.

Journals like the BMJ and the Annals of Applied Statistics have tried online commenting with varied success. Christopher Martyn (associate editor of the BMJ) has complained that some of the responses received are not well considered, and warns of the "danger when the heat of the moment coincides with the availability of instant communication."

It is hard to judge impartially and to strike a balance between passionate beliefs and the responses of people with detailed, in-depth, specialist knowledge of the subject. The job of the journal editor is to sift through all this and decide whether a paper is strong enough to be published, particularly when the subject is contentious. It is this balance which should allow journals to keep their reputations.


Comment on this article

James Lawrence

" if a journal publishes 20 scientific papers, all claiming to have ‘proved’ some relationship, one of those ‘proofs’ will be accidental, not real, and caused merely by coincidence."

This isn't true, because negative results are not usually reported. As an example:

Suppose that 400 scientists decide, independently, to investigate whether there is a toaster orbiting the moon. They perform their 95% tests, and in line with expectation, 20 of them get a significant result. Accordingly, the Journal of Orbiting Toasters receives 20 papers, ALL of which have apparently proved something which is false.
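Lawrence's toaster scenario can be simulated directly - an illustrative Python sketch, with an arbitrary seed, showing how publication bias lets only the spurious positives reach the journal:

```python
import random

random.seed(2)  # arbitrary seed, for reproducibility

# 400 independent investigations of a claim that is in fact false.
# Each uses a test at the 5% level, so each has a 5% chance of a
# spurious "significant" result.  Only the significant results get
# written up and submitted.
scientists = 400
significant = sum(1 for _ in range(scientists) if random.random() < 0.05)

# Around 20 submissions, every one of them reporting a false effect.
print(significant)
```

The 380 or so negative results never appear, so every paper the hypothetical journal receives is wrong.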


Mikhail Simkin

"This particular journal rejects >80% of all submissions."

If they do it at random, then it is enough to submit to five journals to publish anything.


Daryl Bem

A thoughtful set of comments. One major correction, however: The level of statistical significance for the 9 experiments reported in the published article was not just at the threshold of 5% (i.e. < .05), but .000000000013, which means that the odds that the results were NOT due to chance are greater than 74 billion to 1. As is always true in science, this still does not "prove" the existence of the claimed phenomenon, but it pretty convincingly removes "chance" as the explanation of the results. The 4 reviewers and 2 editors who vetted the article were not required to believe that the results proved ESP but were charged with evaluating the soundness of the experimental design and the statistical analysis. This particular journal rejects >80% of all submissions.

This is also NOT the first article to proffer experimental evidence for ESP, but only the most recent.  For example, this article examined the special case of ESP known as precognition, and there is a published analysis of 309 previous precognition experiments conducted by 62 different investigators and involving more than 50,000 participants.  As in science generally, one's degree of belief in a phenomenon should not rest on a single experiment or set of experiments.  That's why a review of the existing literature is a required part of any article submitted for professional publication.

