Let’s Talk About Failures (in Research)

Afternoon research presentations at AGU, December 2017

by Caitlyn Hall

In December 2017, I had the opportunity to travel to New Orleans for the American Geophysical Union (AGU) Fall Meeting.  I anticipated presenting my research and meeting top researchers in the geosciences, but my biggest takeaway was tangential.  While at AGU, I met several active graduate students, research scientists, and professors, and we lamented over “failed” experiments, only to hear that someone else had found a solution or that ten other people were having the exact same problem.  We wondered where these efforts were published and how we could better share our learning curves with other scientists.  While bonding over our mutual tendency to delve into side projects and our frustration with the current “publish or perish” publishing culture, Tim van Emmerik and I decided to harness our momentum and do something about it.  We weren’t the only ones looking to tackle this issue, and we were able to get Andrea Popp and Hannes Müller on board.  We took steps toward creating our own peer-reviewed collection on negative and unexpected results, and I wanted to share how we are trying to start the conversation with the Swette Center.  The following was originally published on the Young Hydrologic Society page, but the need to talk more about what doesn’t work (as well as what does) goes beyond the hydrology field.

International active and involved early career scientists post-research presentation in the French Quarter, New Orleans. Photo Credit: Tim van Emmerik (@TimvanEmmerik)

YHS Collection on “Unexpected Results in Hydrology”

Caitlyn A. Hall, Andrea Popp, Hannes Müller, and Tim van Emmerik

Understanding and learning from unexpected results is a fundamental element of science. These results go by different names, e.g., failures, obstacles, or unexpected results.  Whatever we call them, they are important for understanding processes, developing and testing theories, and identifying pitfalls and possible dead-ends in science.

By carefully designing and conducting experiments with some level of trial-and-error, researchers eventually find results that will be published in peer-reviewed scientific journals.  Paradoxically, we typically only publish the successful tests and their results.  What becomes of the weeks or months of critical work that led to the successful experiment?  It usually remains in the dark.  Yet not sharing unsuccessful iterations or unexpected results — defined here as experiments that do not adequately confirm an accepted hypothesis, despite sound and careful experimental design, planning, and execution — prevents others from learning from these endeavors (Nature Editorial, 2017).

In the past, many philosophers, including Popper (1963) and Chalmers (1973), have emphasized that science can only advance by learning from mistakes.  Moreover, recent literature in various fields elaborates on the many benefits and values of publishing unexpected results and calls upon the scientific community to nurture their dissemination (e.g., Andréassian et al., 2010; Schooler, 2011; Matosin et al., 2014; Granqvist, 2015; Goodchild van Hilten, 2015; Boorman et al., 2015; PLOS Collections, 2015, 2017; Nature Editorial, 2017).  Despite these various calls to report such results, and the frequency with which they occur in the lab, they remain underrepresented in most fields under our current publication system.  The reasons are manifold, such as a lack of incentive (no scientific reward) or the fear of a negative reputational impact.

So, why should you report your failed approaches and unexpected findings?

By reporting on unexpected findings, we can do the following:

  • Decrease the existing publication bias towards positive results
  • Save the time and resources of other scientists exploring the same or similar hypotheses and approaches
  • Increase transparency and reproducibility of our studies
  • Share all findings of publicly funded projects

How and where can you share your unexpected findings?

You can share your unexpected results at:

  • Special journal issues
  • Dedicated sessions at conferences
  • Platforms (e.g., ResearchGate)
  • Supplementary material of your paper
  • Blog posts

We aim to stimulate this discussion via the new Young Hydrologic Society collection “Unexpected Results in Hydrology”.  We want to instill a positive perception of such results and change the way the scientific hydrologic community — including individual researchers, scientific societies, funding agencies, and publishers — values unexpected and negative results.  We therefore invite researchers to report their negative and unexpected results, so that we can advance science holistically: by sharing our failures, not only our successes.

Reporting on such findings should include the following components in a maximum of 2,000 words: 1) an original research objective and expected results, 2) a brief summary of experimental design and methods, 3) discussion on the experimental results and challenges, including images and/or figures, and possibly 4) lessons learned and the path forward.

After peer review by the editors of this collection, the post will receive a DOI and be visible on the YHS website and on a dedicated ResearchGate project site. On ResearchGate, we invite discussion of published submissions so that authors can receive feedback and gain new insights from the scientific community. Upon enhancing their previous analysis or reaching new conclusions, original authors are welcome to resubmit.

 

References

Andréassian, V., Perrin, C., Parent, E. and Bárdossy, A. (2010). Editorial – The Court of Miracles of Hydrology: can failure stories contribute to hydrological science? Hydrol. Sci. J. 55(6), 849–856.

Boorman, G.A., Foster, J.R., Laast, V.A. and Francke, S. (2015). Regulatory Forum Opinion Piece: The Value of Publishing Negative Scientific Study Data, Toxicol Pathol, 43(7), 901-906. doi: 10.1177/0192623315595884

Chalmers, A.F. (1973). On Learning from Our Mistakes, The British Journal for the Philosophy of Science (Oxford University Press) 24(2), 164-173.

Goodchild van Hilten, L. (2015). Why it’s time to publish research “failures” — Publishing bias favors positive results; now there’s a movement to change that. Elsevier Connect.

Granqvist, E. (2015). Why science needs to publish negative results. Elsevier Connect.

Matosin, N., Frank, E., Engel, M., Lum, J.S. and Newell, K.A. (2014). Negativity towards negative results: a discussion of the disconnect between scientific worth and scientific culture. Disease Models and Mechanisms, 7(2), 171–173. doi: 10.1242/dmm.015123

Nature Editorial (2017). Nurture negatives, Nature 551, 414, https://www.nature.com/magazine-assets/d41586-017-07325-2/d41586-017-07325-2.pdf accessed Dec. 21, 2017, doi: 10.1038/d41586-017-07325-2

PLOS Collections (2015). Positively Negative: A New PLOS ONE Collection focusing on Negative, Null and Inconclusive Results, PLOS ONE Community Blog

PLOS Collections (2017). Negative Results: A Crucial Piece of the Scientific Puzzle, PLOS ONE Community Blog

Popper, K. (1963). Conjectures and Refutations: The Growth of Scientific Knowledge, Routledge Classics, London and New York

Schooler, J. (2011). Unpublished results hide the decline effect, Nature 470, 437, doi:10.1038/470437a

 

Caitlyn Hall is a third-year Environmental Engineering PhD student in the Ira A. Fulton Schools of Engineering at ASU.  Her primary research focus is mathematical modeling of treatment, fate, and transport toward upscaling microbially induced calcite precipitation and soil desaturation via denitrification as a ground-improvement process for mitigating earthquake-induced liquefaction.  She has also studied oxidation of total petroleum hydrocarbons in crude oil for remediation of oil-contaminated sites.