This is the fourth part of a series on the crisis in academic research and publishing. Read the first three parts here, here and here.
By Vince Bielski
The driving ethos of academia, “publish or perish,” is fighting for its life.
The requirement that scholars constantly publish or face academic ruin has been considered the primary engine of scientific discovery for decades. But a growing movement of universities and researchers is trying to banish the practice to the archives, saying it has perverted the pursuit of knowledge and eroded the public’s trust in science.
Reformers at top universities in Europe and the U.S., including Cambridge, the Sorbonne, and UC Berkeley, say this traditional system of advancement has led to an explosion of low-quality research with little meaningful impact on academic fields or society. It has also sparked the spread of fraudulent research, as “paper mills” churn out fake articles for sale to academics seeking to pad their CVs.
To weaken the “publish or perish” stranglehold on universities, hundreds of research institutions are reforming the incentive system that shapes academic careers. It currently rewards scholars for frequently winning grants and publishing papers, with extra points for landing in the most esteemed, high-impact journals, even when the articles are not themselves influential.
The new incentives vary at different universities and research centers, but tend to focus on the actual quality of the research rather than the quantity or the prestige of the journals. The research’s influence on academic fields and, when appropriate, on society and public policy, is also often rewarded. So is a commitment to share papers and data as widely and freely as possible with the public. The goal is to break science out of its self-serving and insular bubble and better connect the enterprise with the public that funds it.

DORA Co-chair and journal editor Ginny Barbour is helping lead the effort to dethrone prestigious journals and value research for its contributions to science and society.
“The incentives dictate how people behave, and we have a long tradition of rewarding publications in high impact journals,” said Ginny Barbour, a Cambridge-trained physician, medical journal editor, and co-chair of the Declaration on Research Assessment, or DORA. “If we don’t get research assessment right, then the whole foundation of academic life is undermined.”
The growing movement to revamp the rewards that set the direction of science is spearheaded in the United States by DORA, an organization that takes its name from its declaration of principles. The declaration has gathered 3,500 signatures of support from organizations, and 23,000 from individuals, since its founding in 2012 at a cell biology conference in San Francisco.
In Europe, the Coalition for Advancing Research Assessment (CoARA) operates much like DORA. Some 700 organizations and 450 higher education institutions – and 27% of all Ph.D.-awarding universities in Europe – have committed to its principles since 2022, according to a study co-authored by Alex Rushforth at Leiden University in the Netherlands.
“In Europe we have made a lot of progress in a short period of time. CoARA has been a very impressive catalyst,” said Rushforth. “Of course, signing CoARA is one thing and implementing culture change is quite another thing.”
Overturning an Entrenched Culture
Despite these significant inroads, reformers of the deeply embedded “publish or perish” culture face a huge challenge. High-ranking university officials, from presidents on down, would have to approve a new reward system, and they have greatly benefited from the current one. “This system worked for me, so why wouldn’t it work for anyone else?” said Professor Mike Dougherty, explaining the thinking of those who questioned the assessment reforms he eventually won at the University of Maryland.
Defenders of the status quo, a system intended to bring out the best in scientists, can also point to notable progress over the last decade, especially the dazzling breakthroughs in healthcare. A system that pushes researchers to aim high can get impressive results.
The iconic journal Nature is among the most influential drivers of this culture. The Nature family of journals sits near the top of the publishing pyramid largely because of its “Journal Impact Factor,” or JIF, scores. JIF amounts to a beauty contest based on the number of citations a journal’s articles receive: the more citations, the higher the JIF, and the greater the journal’s esteem. Publishing in journals with high JIF scores can make a career.
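For the unfamiliar, a simplified sketch of the standard calculation (publishers’ exact counting rules vary, and the figures here are hypothetical): a journal’s 2025 JIF is the number of citations received in 2025 by the articles it published in 2023 and 2024, divided by the number of citable articles it published in those two years. A journal whose 200 recent articles drew 2,000 citations last year would score a JIF of 10.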
The Nature family is highly selective, attracting more than 50,000 scholarly submissions a year and publishing fewer than 10% of them. Nature’s tendency to report on major advances in many fields, famously illustrated by the Watson and Crick paper on DNA structure, has helped give the 157-year-old journal its magisterial reputation.
But the fact that prestigious journals publish important articles doesn’t mean everything they run is noteworthy. Studies show that a journal’s impact factor is often determined by a small number of influential articles that receive a lot of citations, reflecting glory on many less influential papers that are not cited much. In other words, many marginal papers make the cut. It’s as if Aaron Judge’s Yankees teammates got credit for his home runs.
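A hypothetical illustration of that skew: if one blockbuster paper collects 900 of a journal’s 1,000 recent citations, the remaining 99 papers average barely one citation apiece, yet the impact factor (1,000 citations across 100 papers, a score of 10) flatters them all equally.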
JIF is also easy to manipulate: authors are sometimes encouraged to cite articles from the very journal in which they are publishing, inflating its JIF score. Even the publisher of Nature warns research institutions not to place too much emphasis on its own JIF, and DORA says the metric should be ignored altogether.

As he was trying to convince the faculty of Maryland’s psychology department to support reforms that would sideline JIF, Dean Dougherty examined 45,000 papers in a couple of hundred journals to determine whether the journal metric and citation counts were indicators of research quality, based on factors like statistical errors and the strength of evidence. “What we found is that there is no evidence to support the claim that higher-impact journals publish higher quality research,” Dougherty said.
The need to be published in prestigious journals leads some scholars to shape their research to fit what they believe will be accepted. That means researchers take fewer experimental risks, jump on popular trends, and shelve negative findings that are very important to report, creating what DORA’s Barbour calls a “gap in the literature.”
The quest for glamour publications also delays by years the dissemination of knowledge and the possibility of breakthroughs, said Professor Steve Russell, who led the implementation of assessment reforms at Cambridge. “Young researchers in particular start at the highest impact factor journal, go through peer-review, get rejected, and then work their way down to the next highest impact factor, and on and on,” said Russell, who has bylines in both Nature and Science. “It’s a complete and utter waste of time.”
When researchers can’t clear this high bar, the fallback option is to maximize the quantity of papers on their CVs. This is enabled by what Dougherty calls “salami slicing” the data. Rather than producing the most substantial paper possible, scholars divide their experimental data into several thin slices, generating more papers that contribute little to science. The resulting flood of publications is making quality control and fraud detection through peer review almost impossible.
The publication of fraudulent articles full of fake data is growing at a faster rate than legitimate papers, according to a 2025 study, threatening the legitimacy of the scientific enterprise. “We know that the incentives for people to publish in high impact journals skews behavior,” said Barbour. “And at its worst, it skews behavior towards the fabrication and falsification of research, and that’s highly problematic.”
U.K. Funders Push Reform
Most universities in the U.K., which gave birth to the first academic journal in 1665, have embraced research reform, either in word or deed, along with many in the Netherlands, Norway, and Finland. Pressure from funders looking for more research with greater societal impact is one reason why.
UK Research and Innovation, the largest government funder of research and a signatory to DORA in 2019, runs a program that annually assesses universities’ research contributions to academic disciplines and society. It then divides £2 billion in grants based on those scores. The Wellcome Trust, which is the other major source of grants, restricts them to researchers at institutions that have reformed assessment practices, aiming to produce a bigger impact on people’s health and well-being from the billions of pounds it provides.

“Funding pressure was initially the driving factor. When the people giving money say this is what we expect, change happens very quickly,” said Cambridge’s Russell. “But there was also a group of academics who were very vocal that we needed to change the way that we assess researchers.”
At Imperial College London, a tragedy added to the impetus for reform. In 2014, Professor Stefan Grimm took his own life as he was struggling to win the grants and publish the papers needed to succeed in the faculty of medicine, whose expectations went as far as listing the high-impact journals that mattered most.
“People were shocked but not necessarily surprised,” said Stephen Curry, an emeritus professor of biology at Imperial who helped push through reforms.
The suicide catalyzed a review of assessment practices that, with the strong support of the vice provost of research, led Imperial to sign DORA in 2017 over the opposition of some engineering faculty. The changes discourage the consideration of metrics like JIF in hiring and promotion while placing greater emphasis on the quality of teaching and the impact of research.
Curry said mandates from the top don’t quickly erase ingrained habits. But in 2023, Curry sat in on dozens of recruitment and promotion interviews in different faculty groups and was impressed with what he observed.
“There has been a shift away from dwelling overmuch on numbers and journal impact factors,” he said. “These things haven’t gone away and people certainly still feel that heat of competition, but I think it is more evident now that the quality of one’s work, as well as one’s wider contributions to the university and to society, are more important than they were.”
U.S. Universities Slower to Change
While the U.K. is a success story for reformers, they have yet to deeply penetrate the biggest research system of all – the U.S. – where only a handful of major research institutions have joined the movement. Unlike in Europe, U.S. universities don’t face federal funding pressure from above to transform how they reward scientists. Under the Trump administration, federal agencies are mainly focused on ending what they deem, sometimes wrongly, to be DEI-related research, and on reducing overhead fees that can add as much as 70% to the cost of research grants.

Leiden University researcher Alex Rushforth says research is needed to see if the wave of assessment reforms is improving the quality of scholarship as promised.
Reformers in the U.S. also face resistance from below. University faculty wield much more power over academic affairs than their peers in Europe, where administrators are more likely to make the rules. Some U.S. scholars don’t see the case for abandoning long-standing reputational metrics, according to a survey by Leiden’s Rushforth. Even if JIF and the number of citations a paper receives aren’t perfect proxies for quality, survey respondents said, they offer a practical way for busy academics on hiring committees to efficiently evaluate a long list of candidates.
Maryland’s Dougherty says university departments are also wary of being on the cutting edge of reform in case it doesn’t work out. “A lot of the resistance comes back to people saying ‘We won’t do it until other universities do it, or until other people within our discipline are doing it,’” he said.
Even academics like Mark Hanson, who is critical of the “publish or perish” culture and has published papers about the misconduct it breeds, see some downsides to assessment reform. The University of Exeter professor’s fundamental research has overturned assumptions about genes and disease resistance, opening the door to rethinking therapeutic designs. Hanson is concerned that the reform movement’s emphasis on research that’s tied to practical problems will further diminish fundamental research that generates the new ideas that science needs to advance.
“With increasing pushes to fund only directly-applicable or policy-impacting research, we’re stuck in our current state of knowledge and we just iterate and explore its crevices endlessly,” said Hanson.
Reform in the U.S. has been mostly left to lone scientists with a passion for the cause. After Sandra Schmid enacted assessment changes at the University of Texas Southwestern Medical Center to focus on research quality rather than metrics like JIF, she became chief scientific officer in 2020 at Biohub, founded by Mark Zuckerberg and his wife, Priscilla Chan. One year later, Biohub, which creates AI tools for biological research, signed DORA.
Another research group, the Howard Hughes Medical Institute, signed DORA in 2013. In its competitions among researchers for employment and grants, and in evaluations for continued support, journal names are removed from applications, turning the focus to the quality of the scholarship rather than whether it was published in a prestigious journal.
Nonprofits like the Pew Charitable Trusts are also joining the movement. Pew is working with a group of philanthropic and public funders who want their grants to produce a bigger impact in healthcare, education, and other areas. To engage researchers in the effort, Pew has convened a group of 18 university leaders, including those at Brown, Duke, and UC Berkeley, who are redesigning their reward systems to encourage the public interest research that the funders seek.
At Maryland, Dougherty almost single-handedly championed the reforms. It took five years of meetings, his aforementioned study, and two rounds of assessment guideline revisions before Dougherty finally won the unanimous approval of his 27 faculty members. The new assessment practices, implemented in 2022, focus on measures of quality, such as the reproducibility of research, making papers and data widely accessible, and their impact on academic fields and, when applicable, on public policy.
Dougherty says so far, so good. Some faculty are more motivated to pursue important questions and take risks they would have avoided earlier because high-impact journals may not be interested in their work.
But is the overall quality of research improving in departments that have established new incentives? That is the ultimate goal, yet no one has tried to answer the question of whether the hard work of changing academia’s culture is actually producing better research, leaving a gap in understanding that needs to be filled, said Leiden’s Rushforth.
“We should be collecting data and testing our hypotheses and not taking for granted that if you change the incentives, you get a different type of academic research,” Rushforth said. “There should be some sort of accountability.”