Negative results are commonly assumed to attract fewer readers and citations, which would explain why journals in most disciplines tend to publish too many positive and statistically significant findings. This study tested that assumption by counting the citation frequencies of papers that, having declared to "test" a hypothesis, reported either "positive" (full or partial) or "negative" (null or negative) support. Controlling for various confounders, positive results were cited on average 32% more often. The citation advantage, however, was unequally distributed across disciplines (classified following the Essential Science Indicators database). With Space Science as the reference category, the citation differential was positive and statistically significant only in Neuroscience & Behaviour, Molecular Biology & Genetics, Clinical Medicine, and Plant & Animal Science. Overall, the effect was significantly higher amongst applied disciplines, and in the biological sciences compared to the physical and social sciences. The citation differential was not a significant predictor of the actual frequency of positive results amongst the 20 broad disciplines considered. Although future studies should attempt more fine-grained assessments, these results suggest that publication bias may have different causes and require different solutions depending on the field considered.
Keywords: research evaluation