[Image: funhouse mirror] The current state of psychological research: fun to look at, but it offers only a warped sense of reality.

I recently listened to the latest episode of the Black Goat podcast (Nullius in verba), where the main topic was the role of trust in science. The background to their discussion is that some researchers are resisting recent calls to pre-register the methods in published papers and to share open data. The murmurs run along the lines of "we're offended that you cannot take our word that we were honest with our analyses, and we shouldn't need to share the data for you to believe the evidence we are presenting". The consensus from the Black Goat team was that science should not rely on trust, and they referred to the Royal Society's motto 'Nullius in verba', or 'Take nobody's word for it'. My view largely parallels their discussion, but I thought I would take the opportunity to outline my view of the role of trust in science, and how it fits in with recent developments in open science.

To cut a long story short, trust in the sense of 'take my word for it' has no place in science. Yes, it would be great to believe that every article is a perfect representation of the study that was performed. Sadly, however, it isn't an ideal world, and as Chris Chambers outlines in his important new book (The Seven Deadly Sins of Psychology), the goals of science are at odds with making a career out of science. Researchers mostly publish positive results (Fanelli 2010), and they dress up their findings to make them as flashy as possible to ensure maximum publishability. This means that rather than honestly and transparently reporting research, whole fields come to resemble a funhouse mirror that warps everything: you can kind of see the reality in the background, but the face and body are all distorted. In addition, the troubles facing psychological research have been the focus of numerous articles in the past few years, and the issues have reached the public eye (e.g. a Guardian article). This means that the one thing science is lacking is trust. Responding to that lack of trust means taking a real hard look at our distorted selves in the mirror, and there are groups of progressive researchers who do not like what they see.

From here, taking steps to increase trust, firstly from other researchers and then from the public (how are we supposed to convince others if we can't convince ourselves?), typically gravitates toward making things more transparent. We have seen this historically in clinical trial research, where a series of high-profile cases brought its integrity into question and attracted the kind of public attention that doesn't count as outreach. This was followed by a series of measures to increase trust, such as mandatory trial registers (learn more at AllTrials.net). More recently, psychology and biomedical science have gone through a similar process, where their deficiencies reached the public's attention. Like the response in clinical trials, psychologists need a way to increase the confidence that other researchers and the public can place in their studies. This has led to several open science initiatives, such as pre-registration, the PRO initiative, and open data.

This brings us back to trust and confidence in science. Expecting readers to trust your article in the sense of 'take my word for it, this finding is totally legit' is not the way forward. Instead, we should be focusing on increasing the confidence readers have in our findings. Imagine a court case where someone has allegedly committed a crime and their lawyer is there to defend them. The lawyer could not stand there and say her client is innocent but she cannot show you the evidence, so take her word for it. Likewise, eyebrows would be raised if she said you do not need to look at the evidence yourselves because she has summarised it all in one tidy, compelling paragraph. To make a solid case, you need to present all the evidence so that people can appraise it themselves. Bringing the analogy back to science, you should make people confident in your results by showing them that you had planned to do it this way all along, and that the results are not the product of a mistake or misinterpretation.

Instead of expecting people to take our word for it that we have done some good science, we should be outlining the precautions we have taken to show that the results are robust. We should be pre-registering our research (it may not be perfect the first time around, but here's how you can learn from my mistakes), allowing others to see the data, and focusing on the reproducibility of results so people can appraise our methods. Building on these initiatives, hopefully we can ensure that readers can be confident in the conclusions we have drawn, or give them the opportunity to show us why we were wrong. We should not expect people to trust our results unless they can judge them for themselves. Frankly, given the current concerns around research in psychology, we are not in a position to demand people's trust. Science is often said to be self-correcting, but it can only be corrected if we can see what went wrong in the first place and have the opportunity to change it.