Troubleshooting and iteration in science

The scientific method is taught as far back as elementary school. But students almost never get to experience what I think is the best part: what you do when something goes wrong. That’s too bad, because self-correction is a hallmark of science.

In ecology and evolution, most graduate students don’t get to experience iteration firsthand, because they are often collecting data right up until the end of their degree. I didn’t experience it until my postdoc, when we failed to repeat a previous experiment. It took several experiments and a lot of time – two years! – to figure out why. In the end, it was one of the most rewarding things I’ve done.

Wouldn’t it be great if undergraduate students actually got to do this as part of their lab courses (i.e., revise and repeat an experiment), rather than just writing about it?

One thing that can come close – teaching you how to revise and repeat when something doesn’t work – is learning to code.

Science is flawed. So what?

The results of the Reproducibility Project – a very cool endeavour to repeat a bunch of published studies in psychology – came out this week [1]. The authors (a team of psychologists from around the world) found that they were able to successfully replicate the results of 39 out of 100 studies, leaving 61% unreplicated. This seems like an awful lot of negatives, but the authors argue that it’s more or less what you’d expect. A good chunk of published research is wrong, because of sampling error, experimenter bias, an emphasis on publishing surprising findings that turn out to be false, or some combination of the above. No one study can ever represent the truth – nor is it intended to. The idea is that with time and collective effort, scientific knowledge progresses towards certainty.
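
To see why a low replication rate is “more or less what you’d expect”, here’s a toy simulation of that argument. It’s mine, not the Project’s analysis, and the parameter values (P_TRUE, ALPHA, POWER) are made-up assumptions, not estimates from the paper: if only a fraction of tested hypotheses are true, and journals mostly publish positive results, plenty of published findings will fail to replicate even when everyone does honest work.

```python
import random

# Toy sketch of the argument above; not the Reproducibility Project's
# actual analysis. All parameter values are illustrative assumptions.

random.seed(1)

N_STUDIES = 100_000
P_TRUE = 0.3   # fraction of tested hypotheses that are actually true
ALPHA = 0.05   # false-positive rate of a single study
POWER = 0.6    # chance a study detects a real effect

published_real = 0
published_spurious = 0

for _ in range(N_STUDIES):
    effect_is_real = random.random() < P_TRUE
    # A study gets a positive result either by detecting a real effect
    # or by a false positive; assume only positive results get published.
    p_significant = POWER if effect_is_real else ALPHA
    if random.random() < p_significant:
        if effect_is_real:
            published_real += 1
        else:
            published_spurious += 1

published = published_real + published_spurious
frac_real = published_real / published

# An exact replication succeeds with probability POWER for a real
# effect and ALPHA for a spurious one.
expected_replication = frac_real * POWER + (1 - frac_real) * ALPHA

print(f"Published findings reflecting a real effect: {frac_real:.0%}")
print(f"Expected replication rate: {expected_replication:.0%}")
```

With these made-up numbers, roughly 84% of published positives reflect a real effect, yet only about half of exact replications succeed – in the same ballpark as the 39/100 result, and with no fraud anywhere in the pipeline.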

So science crowd-sources certainty.
