GUEST POST: Why you should be a skeptical science consumer

By Tim van der Zee

In 2015, Tim started as a PhD candidate at ICLON. He previously obtained a master’s degree in Human Learning and Performance Psychology (with honors) from Erasmus University Rotterdam. He studies how people learn from educational videos in open online education, such as MOOCs (Massive Open Online Courses), and focuses specifically on increasing the instructional design quality and educational value of these videos. His position is funded by CEL (Centre of Education and Learning) in the Netherlands, a collaboration between the universities of Leiden, Delft, and Rotterdam. In addition to his research on improving the educational quality of online videos, he is a vocal advocate of pre-registration, open science, and other efforts aimed at improving the quality of the scientific literature. You can follow him on Twitter at @Research_Tim and read his blog at www.timvanderzee.com.

Tim previously contributed a guest post on guidelines for designing educational videos.

Don’t take my word for it, but being a scientist is about being a skeptic.

About not being satisfied with easy answers to hard problems.

About not believing something merely because it seems plausible…

… nor about reading a scientific study and believing its conclusions because, again, they seem plausible.

“In some of my darker moments, I can persuade myself that all assertions in education:

(a) derive from no evidence whatsoever (adult learning theory),

(b) proceed despite contrary evidence (learning styles, self-assessment skills), or

(c) go far beyond what evidence exists.”

– Geoff Norman

The scientific literature is biased. Positive results are published widely, while negative and null results gather dust in file drawers (1), (2). This bias operates at many levels, from which papers are submitted to which papers are published (3), (4). It is one reason why p-hacking is (consciously or unconsciously) used to game the system (5). Furthermore, researchers often give a biased interpretation of their own results, use causal language when it is not warranted, and cite others’ results in misleading ways (6). For example, close to 28% of citations are faulty or misleading, which typically goes undetected because most readers do not check the references (7).
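To make the p-hacking point concrete, here is a minimal simulation sketch (my illustration, not taken from the cited studies): when a researcher measures several outcomes but reports only the best-looking one, the chance of a ‘significant’ finding rises far above the nominal 5%, even when there is no effect at all.

```python
# Minimal sketch: how p-hacking (here, testing several outcomes and
# keeping only the smallest p-value) inflates the false-positive rate,
# even when there is no true effect whatsoever.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_outcomes, n_per_group=30, n_sims=5000, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        p_values = []
        for _ in range(n_outcomes):
            # Both groups are drawn from the same distribution: any "effect" is noise.
            a = rng.normal(size=n_per_group)
            b = rng.normal(size=n_per_group)
            p_values.append(stats.ttest_ind(a, b).pvalue)
        # The p-hacked result: report only the best-looking outcome.
        if min(p_values) < alpha:
            hits += 1
    return hits / n_sims

for k in (1, 3, 5, 10):
    print(f"{k:2d} outcome(s): false-positive rate ≈ {false_positive_rate(k):.2f}")
# With 1 outcome the rate stays near 0.05; with 10 it climbs toward 0.40.
```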

This is certainly not all. Studies that have to adhere to a pre-registered protocol, such as clinical trials, often deviate from that protocol by not reporting pre-specified outcomes or by silently adding new ones (8). Such changes are not random; they typically favor reporting positive effects and hiding negative ones (9). Nor is this unique to clinical trials: published articles in general frequently contain incorrectly reported statistics, with 35% containing substantial errors that directly affect the conclusions (10), (11), (12). Meta-analyses from authors with industry involvement are massively published yet fail to report caveats (13). Moreover, when the original studies are of low quality, a meta-analysis will not magically fix this (aka the ‘garbage in, garbage out’ principle).
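Many such reporting errors are mechanically detectable, because a reported test statistic, its degrees of freedom, and its p-value are redundant: given the first two, the third can be recomputed. Below is a minimal sketch of that kind of consistency check, in the spirit of the work cited above; the reported numbers are invented for illustration.

```python
# Minimal sketch of a consistency check on reported statistics: given a
# reported t-statistic and degrees of freedom, recompute the two-sided
# p-value and flag it if it disagrees with the p-value the paper reports.
# (The example numbers below are made up for illustration.)
from scipy import stats

def check_t_test(t, df, reported_p, tol=0.01):
    recomputed_p = 2 * stats.t.sf(abs(t), df)   # two-sided p from t and df
    consistent = abs(recomputed_p - reported_p) <= tol
    return recomputed_p, consistent

reported = [
    # (t, df, p as reported in a hypothetical paper)
    (2.10, 28, 0.045),   # consistent with the test statistic
    (1.50, 28, 0.04),    # inconsistent: the actual p is roughly 0.14
]

for t, df, p in reported:
    recomputed, ok = check_t_test(t, df, p)
    flag = "OK" if ok else "MISMATCH"
    print(f"t({df}) = {t}, reported p = {p}, recomputed p = {recomputed:.3f} -> {flag}")
```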

Image by kalhh from Pixabay

One such cause of low-quality studies is the lack of control groups or, what can be even more misleading, the use of inappropriate control groups that incorrectly suggest that placebo effects and other alternative explanations have been ruled out (14).
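As a rough illustration of this point (hypothetical numbers, not taken from the cited paper): if merely receiving an intervention improves outcomes through expectation alone, a comparison against a no-treatment control will look like a treatment effect, while a comparison against an expectation-matched active control will not.

```python
# Minimal sketch with made-up numbers: why a no-treatment control group
# cannot rule out placebo/expectation effects.
import numpy as np

rng = np.random.default_rng(1)
n = 200                     # participants per group
expectation_effect = 0.5    # gain (in SD units) from merely receiving *something*
true_effect = 0.0           # the intervention itself adds nothing beyond that

treatment       = rng.normal(expectation_effect + true_effect, 1, n)
passive_control = rng.normal(0.0, 1, n)                   # no-treatment control
active_control  = rng.normal(expectation_effect, 1, n)    # matched for expectations

# Against the passive control, the intervention appears clearly beneficial,
# even though the benefit is entirely an expectation effect.
print("mean gain vs passive control:", round(treatment.mean() - passive_control.mean(), 2))
# Against the expectation-matched active control, the apparent advantage is
# approximately zero (whatever remains is sampling noise).
print("mean gain vs active control: ", round(treatment.mean() - active_control.mean(), 2))
```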

Note that these issues are certainly not restricted to quantitative research or (semi-)positivistic paradigms, but are just as relevant for qualitative research from a more naturalistic perspective (15), (16), (17).

All the Wrong Incentives

In the current system, unfortunately, misleading the reader with overblown results is incentivized. This is partly due to the publication system, which strongly favors positive findings with a good story. In addition to these cultural aspects, individual researchers of course also play a fundamental role. What makes it especially tricky, however, is that the problem is also partly inherent to many fields, especially those that lack ‘proof by technology’. If you claim you can make a better smartphone, you can simply build it. In fields like psychology this is rarely possible: the variables are often latent and not directly observable, the measurements are indirect, and it is often impossible to prove what they actually measure, if anything.

Bad incentives won’t disappear overnight. People are resistant to change. While there are many who actively fight to improve science, it will be a long, if not never-ending, journey.

And now what…

Is this an overly cynical observation? Maybe. Either way, it is paramount that we be cautious. We should be skeptical of what we read. What is more, we should be very skeptical of what we do, of our own research.

This is perhaps the prime reason why I started my blog, The Skeptical Scientist. I am wrong most of the time, but I want to learn and become slightly less wrong over time. We need each other for that, because it is just too easy to fool oneself.

In later blog posts, I will continue to focus on issues in science, but, more importantly, I will attempt to highlight better ways of doing science and share practical recommendations.

Let’s be skeptical scientists.

Let’s become better scientists.

A version of this post originally appeared on The Skeptical Scientist blog.


References:

(1) Dwan, K., Gamble, C., Williamson, P. R., & Kirkham, J. J. (2013). Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PloS One, 8.

(2) Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502-1505.

(3) Coursol, A., & Wagner, E. E. (1986). Effect of positive findings on submission and acceptance rates: A note on meta-analysis bias. Professional Psychology: Research and Practice, 17, 136-137.

(4) Kerr, S., Tolliver, J., & Petree, D. (1977). Manuscript characteristics which influence acceptance for management and social science journals. Academy of Management Journal, 20, 132-141.

(5) Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3).

(6) Brown, A. W., Brown, M. M. B., & Allison, D. B. (2013). Belief beyond the evidence: using the proposed effect of breakfast on obesity to show 2 practices that distort scientific evidence. The American Journal of Clinical Nutrition, 98, 1298-1308.

(7) Van der Zee, T. & Nonsense, B. S. (2016). It is easy to cite a random paper as support for anything. Journal of Misleading Citations, 33, 483-475.

(8) Jones, C. W., Keil, L. G., Holland, W. C., Caughey, M. C., & Platts-Mills, T. F. (2015). Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Medicine, 13, 1.

(9) The COMPare Trials Project. http://compare-trials.org/

(10) Bakker, M., & Wicherts, J. M. (2011). The (mis)reporting of statistical results in psychology journals. Behavior Research Methods, 43, 666-678.

(11) Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2015). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 1-22.

(12) Nonsense, B. S., & Van der Zee, T. (2015). The reported thirty-five percent is incorrect, it is approximately fifteen percent. The Journal of False Statistics, 33, 417-424.

(13) Ebrahim, S., Bance, S., Athale, A., Malachowski, C., & Ioannidis, J. P. (2015). Meta-analyses with industry involvement are massively published and report no caveats for antidepressants. Journal of Clinical Epidemiology, 70, 155-163.

(14) Boot, W. R., Simons, D. J., Stothart, C., & Stutts, C. (2013). The pervasive problem with placebos in psychology: why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445-454.

(15) Collier, D., & Mahoney, J. (1996). Insights and pitfalls: Selection bias in qualitative research. World Politics, 49, 56-91.

(16) Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8, 597-606.

(17) Sandelowski, M. (1986). The problem of rigor in qualitative research. Advances in Nursing Science, 8, 27-37.