As It Happens

New academic journal only publishes 'unsurprising' research rejected by others

The editor behind a new academic journal that only publishes unsurprising research hopes the journal will help fix a major problem in scientific research — a bias towards surprising, unexpected results.

Researchers at the University of Canterbury hope the journal will help fight 'publication bias'


Transcript

There's a method to Andrea Menclova's mundane madness.

Menclova is an associate professor at the University of Canterbury and the editor of a new journal called The Series of Unsurprising Results in Economics (SURE).

The mission is simple: only publish research with findings that are boring.

As It Happens host Carol Off spoke to Menclova about the journal and why she hopes it will address a bias in academic research publications.

Here is part of their conversation.

Andrea, why on Earth would you want to put out a journal full of dull research?

Well, I think science has a problem and it's a problem with publication bias.

We believe that a lot of other journals are biased in the opposite direction — towards publishing attractive, catchy, strong, statistically significant results.

We kind of want to fill the void and publish results that are the opposite of that — unsurprising, weaker, statistically insignificant, not conclusive and so on.

The SURE journal aims to address a bias in academic research publications. (Shutterstock/Lena Lir)

The boring research that you would have in your publication — these are results that didn't prove anything sensational, nothing sexy, nothing unusual. What happens when people do that kind of important research? Why is it so difficult to get that published?

Our careers can depend quite crucially on publishing papers and publishing them in good journals.

And so, oftentimes, after finding a research question, reading up on other people's work, thinking about methods carefully, getting data, cleaning, and so on — months into the project, people finally get results which are outside of their control or should be outside of their control.

And whoops! The results are not striking. Oftentimes people just give up at that point and they move on.

What also happens often is that people think: "Well, surely there is an attractive result hiding here somewhere. So if I just look long enough I'm going to find it."

Very likely, just by sheer coincidence, they do find a statistic that's significant, and that's the one that gets published.

And so, as a result, what we see in journals is typically — it's the truth, but it's not the full truth.
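The pattern Menclova describes, running tests until something clears the conventional 5 per cent significance threshold, is the statistical multiple-comparisons problem. A minimal simulation sketch in Python (illustrative only; the sample sizes and threshold here are assumptions, not details from the interview) shows how testing enough hypotheses on pure noise reliably turns up a 'significant' result by chance:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests, n_obs, alpha = 20, 100, 0.05  # 20 hypotheses, 5% significance threshold

    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(size=n_obs)  # both samples come from the same distribution,
        b = rng.normal(size=n_obs)  # so any "effect" detected is pure chance
        _, p_value = stats.ttest_ind(a, b)
        if p_value < alpha:
            false_positives += 1

    # At alpha = 0.05, roughly 1 in 20 null comparisons is expected to look significant.
    print(f"{false_positives} of {n_tests} no-effect comparisons came out 'significant'")

On average, about one of the 20 null comparisons will look significant, and if only that one is written up, readers see the selection effect Menclova calls 'the truth, but not the full truth'.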

A screenshot of the SURE journal website, which lists the 'unsurprising' submission requirements. (SURE Journal)

Also, as you mentioned, if getting grants and research money and jobs depends on this [and] if you're only getting published if you have sensational results then the actual science itself starts to get skewed.

Absolutely. So if you then had a policy maker who wanted to implement an intervention and have evidence-based policy, they might find a lot of studies that say the results are really powerful — you know, do this.

Yet for every such published study there might be another study that didn't get published and it just found no results.

Can you give examples — what would you publish as an unsurprising result in your journal?

A good example is the inaugural publication, I think, that we had last week.

This is from Nick Huntington-Klein and Andrew Gill from California State University Fullerton.

They looked at the issue of U.S. college students not finishing their degree, not graduating within four years, taking longer than that. 

That's very costly, and some recent papers show that the solution to this may be very straightforward and low-cost. You just tell students. You make explicit to them what the costs are and how to graduate in four years.

Nick and Andrew did a similar experiment, and they found no effects whatsoever.

And so they sent this to a journal, and the response they got was, well, you know, you probably wouldn't expect an effect because this is not really novel information to the students. So we are not going to publish this because it's not surprising.

There was a story last year. A very popular food researcher at Cornell University had 15 studies retracted after it was revealed that he had fudged the results to make them more interesting. He had results suggesting that if you eat off smaller plates, you eat less, and things like that. And so it actually influences behaviour at some point, doesn't it? These things get picked up. They get published. We cover them on programs. It all ends up being wrong, doesn't it?

And it does a lot of harm because evidence shows that even if you retract these results they kind of have already stuck in people's minds and people don't go back and check what has been retracted. So it can do lasting harm.

The same thing happens, though, in journalism. For instance, if there are 1,000 scientists who say there is climate change and one scientist who makes a controversial claim that there isn't, then that scientist will actually get exposure. It skews public perception, doesn't it? So I guess we're responsible for the same thing.

Yeah, and it's understandable. But you would hope that in science we are slightly removed from that and sheltered from that pressure.

Written by Allie Jaynes and John McGill. Interview produced by Allie Jaynes. Q&A edited for length and clarity.