Interview questions for science journalists
A worrying number of science “journalists” are woefully incompetent when it comes to the basics of the scientific methods they report on. Sadly, the majority appear fond of uncritically reproducing press releases without reference to the original research. Just as we wouldn’t accept a political journalist who couldn’t point to China or India on a map of the world, so we shouldn’t accept such obliviousness to subject matter from our science journalists.
Here are some questions so basic they should be on the question sheet at interview for prospective science journalists:
- What is “blind testing”, and why should double- or triple-blind designs be preferred?
- Briefly talk about the strengths and weaknesses of peer-reviewing.
- Why is it important that scientists detail their methods and data in a paper, in addition to just the results?
- Explain “confounding factors”. Bonus points for a description of ways to control for them.
- Discuss the relationship between correlation and causation, and why the former does not imply the latter.
Blind testing is a way to avoid conscious and unconscious bias during scientific testing, and is therefore essential to producing trustworthy results. Single blinding keeps participants ignorant of which treatment they receive; double blinding extends this to the experimenters administering it, and triple blinding to those analysing the data, which is why the stronger designs are preferred. This is especially true in medical trials, where the placebo effect is pronounced. There’s a great discussion of blind testing and scientific testing more generally in A Primer on Scientific Testing.
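To make the idea concrete, here is a toy sketch of how double blinding can be arranged via allocation concealment. All participant IDs, kit codes and the third-party “key” are made up for illustration:

```python
import random

# A minimal sketch of randomisation with allocation concealment for a
# double-blind trial. Participant IDs, kit codes and the third-party
# "key" are all invented for illustration.
random.seed(1)

participants = [f"P{i:03d}" for i in range(1, 9)]
arms = ["treatment", "placebo"] * (len(participants) // 2)
random.shuffle(arms)

# The key mapping participants to arms is held by a third party only.
key = {pid: arm for pid, arm in zip(participants, arms)}

# Clinicians and participants see only opaque kit codes, so neither
# knows who is receiving the real treatment.
kit_codes = {pid: f"KIT-{1000 + i}" for i, pid in enumerate(participants)}

for pid in participants:
    print(pid, kit_codes[pid])  # no arm information visible at the clinic

# Unblinding happens only at analysis time, using `key`.
```

The point of the third-party key is that nobody who interacts with participants, or who analyses outcomes before unblinding, can bias the result even unconsciously.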
Peer-reviewing helps to find errors, omissions and possible flaws in papers before they are published, making the published record more reliable and easier to understand. Often the problems found can be fixed with updates to the paper—perhaps by explaining the method more clearly or spelling out an assumption—but sometimes the flaws are fatal to the result presented and further research must be done. The Wikipedia article on peer-reviewing isn’t a terrible place to start, though pay close attention to the flagged issues. For further reading, Ars Technica have an interesting discussion of the method’s merits and shortcomings in both the paper and grant realms.
Methods and data need to be published so that the results of a paper can be reproduced, and so confirmed or refuted by other scientists. This is essential because errors can sneak into analyses, or freak situations occur by chance, even for the most diligent of scientists. Independent confirmation is therefore required before a result can be trusted. Detailing methods in particular lets others understand what foundation the result rests upon, and the biases and limitations therefore inherent in it.
A confounding factor affects the relationship between the thing you are testing and the thing upon which its effects are being measured, thereby distorting experimental results. It’s very important, therefore, to try to mitigate the effects of confounding factors when designing an experiment and evaluating its results. This is called controlling for confounding factors. See below for an excellent example of the serious problems not controlling for confounding factors can cause.
Mistaking correlation for causation is the single biggest plague in science journalism. When two things tend to happen together, it does not follow that one causes the other. This is the crux of the problem. It is often associated with confirmation bias, whereby newspapers and other outlets distort studies to match their or society’s world-view by extrapolating a correlation into a causation. An excellent example of this problem was the reporting in 2007 on a study which found that girls apparently prefer pink.
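The mechanics are easy to demonstrate. In this toy simulation (all variables invented for illustration), a hidden common cause drives two quantities that have no causal link to each other, yet they come out strongly correlated:

```python
import random
from statistics import fmean, pstdev

# Toy simulation: a hidden common cause Z drives both X and Y.
# Neither X nor Y causes the other, yet they correlate strongly.
random.seed(42)
n = 1000
z = [random.gauss(0, 1) for _ in range(n)]       # the hidden common cause
x = [zi + random.gauss(0, 0.5) for zi in z]      # X depends only on Z
y = [zi + random.gauss(0, 0.5) for zi in z]      # Y depends only on Z

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    ma, mb = fmean(a), fmean(b)
    cov = fmean((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / (pstdev(a) * pstdev(b))

print(f"correlation(X, Y) = {pearson(x, y):.2f}")  # strong, despite no causal link
```

A journalist looking only at X and Y would be sorely tempted to report that one causes the other; only knowledge of Z reveals otherwise.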
Confounding factors and correlation/causation can be related. On the Wikipedia talk page for confounding factors, there is a good description of confounding factors, correlation/causation problems and an example of how the two can conflate:
A confounder (or confounding variable) is something that is correlated with the independent (causative) variable you are investigating, and causes or prevents the effect (dependent variable) you are investigating. Because it is associated with both of them, it will interfere with the ability of statistical tests to correctly indicate the impact of your causative variable; that is, the confounder will cause biased estimates of the impact of your causative variable.
Note that a true confounder is itself another causative/preventive variable. (Variables that are only correlated with the effect won’t cause confounding.) For instance, drinking and smoking are correlated: people who do one tend to do the other. Today we know that tobacco worsens heart disease, but alcohol is protective against heart disease. Tobacco’s effect is bigger than alcohol’s, so together they cause net harm. Early studies of alcohol use and heart disease indicated that alcohol CAUSED heart disease because researchers had no data on smoking. Once both factors were included (along with other important variables), the truth was understood. In this example, tobacco use was the confounder for early studies investigating the influence of alcohol on heart disease.
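The sign flip described in that quote is easy to reproduce in a toy simulation. The effect sizes and probabilities below are invented for illustration, not taken from any real study:

```python
import random
from statistics import fmean

# Toy numbers, invented for illustration: smoking raises risk by 2 units,
# drinking lowers it by 0.5, and drinkers are likelier to smoke.
random.seed(0)
people = []
for _ in range(20_000):
    drinks = random.random() < 0.5
    smokes = random.random() < (0.7 if drinks else 0.2)  # correlated habits
    risk = 2.0 * smokes - 0.5 * drinks + random.gauss(0, 1)
    people.append((drinks, smokes, risk))

def mean_risk(drinks, smokes=None):
    """Average risk, optionally restricted to one smoking stratum."""
    return fmean(r for d, s, r in people
                 if d == drinks and (smokes is None or s == smokes))

# Naive comparison, smoking ignored: drinking looks harmful.
naive = mean_risk(True) - mean_risk(False)
print(f"apparent effect of drinking: {naive:+.2f}")  # positive

# Stratify by smoking status, i.e. control for the confounder: the sign flips.
for s in (False, True):
    adj = mean_risk(True, smokes=s) - mean_risk(False, smokes=s)
    label = "smokers" if s else "non-smokers"
    print(f"effect among {label}: {adj:+.2f}")  # negative in both strata
```

Stratification is the simplest way to control for a known confounder; including both variables in a regression achieves the same end.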
Good answers to these questions should reassure prospective employers that their applicant is able to apply a knowledgeable, critical eye to science and its results—that is, able to report rather than parrot. Of course, the most important question really is “how many scientific papers do you read in an average week?”