Data sharing, reproducibility and peer review

I just reviewed my first manuscript where the authors provided a reproducible analysis (i.e., they shared their data and analysis script with the reviewers). This is something my coauthors and I have tried to provide with our recent studies, but it was my first time experiencing it as a referee.

I think it really helped, but it also raised new questions about traditional peer review.

Continue reading →

Science is flawed. So what?

The results of the Reproducibility Project – a very cool endeavour to repeat a bunch of published studies in psychology – came out this week [1]. The authors (a team of psychologists from around the world) found that they were able to successfully replicate the results of 39 out of 100 studies, leaving 61% unreplicated. This seems like an awful lot of negatives, but the authors argue that it’s more or less what you’d expect. A good chunk of published research is wrong, because of sampling error, experimenter bias, an emphasis on publishing surprising findings that turn out to be false, or more than one of the above. No one study can ever represent the truth – nor is it intended to. The idea is that with time and collective effort, scientific knowledge progresses towards certainty.

So science crowd-sources certainty.

Continue reading →