An experienced angler knows the best fishing starts early in the morning. Every weekday at daybreak, as I sift through a fresh batch of email, a figurative fishing expedition begins. And I’m the catch. The trawlers are predatory scientific journals casting a wide net, trying to reel me in. These publishers know that, as an academic physician and clinical scientist, I’m encouraged to advance the scientific frontiers of my field. They also know that a primary metric of success is how often my research is published. So the lure of predatory publishers, whose sole motive in distributing researchers’ work is financial profit, can be tempting bait for some seeking prominence in the scientific community. But the consequences can be harmful and widespread, reaching well beyond the science professions.
Science plays an essential role in finding solutions to humankind’s problems. While science has its limitations, the public looks to it as the primary wellspring of solutions to the problems posed by nature (i.e., securing food, clothing, shelter and health). Throughout history, the triumphs of science, particularly the ways in which scientific advances take shape in the forms of technology, have been stunning. Consider the new anti-obesity medications that are frequently advertised directly to consumers on television. This new class of medicines, a form of hormonal manipulation, has revolutionized the treatment of obesity. The potential of science seems almost limitless.
But serious trouble lurks in the hallowed halls of science. The work of contemporary scientists is sometimes blemished by irreproducible studies, financially conflicted research and outright fraud. Scandalous instances of bad science — often spread through the proliferation of predatory journals — are surprisingly common.
When corrupt or sloppily executed science comes to light, public confidence in the scientific community wanes, or worse, a skepticism emerges within certain segments of the population that threatens to undermine the advances we’ve made. A 2024 Pew Research Center survey found that Americans’ trust in science has recovered slightly after a steady decline that began during the pandemic in 2020. Nonetheless, nearly one-quarter of U.S. adults express “not too much” confidence or “none at all” in scientists.
Some alarmists worry that without substantial reform, a tipping point could eventually come wherein “bad science” becomes so pervasive and politically polarized that the public loss of trust in the scientific enterprise leads to a “new dark age” of sorts. That the term “post-truth,” a neologism referring to concerns over public truth claims, was ignominiously named Oxford Dictionaries’ 2016 word of the year may be a harbinger of the danger.
Addressing the problem of bad science and its potential harm to global health and well-being will require change on both sides of the scientific-public divide. The scientific community must better identify and root out fraudulent and poorly done science and clearly communicate its findings to the public. That will better equip the public to identify, understand and digest the scientific concepts embodied in technological advances.
Building such understanding and trust will necessitate dialogue between scientists and the public, marked by humility and transparency from the scientific community. At the ground level, scientists like me will need to resist the lure of predatory publishers and similar shameful practices. Without reform, fake science could wreak as much societal havoc down the road as fake news does now.
The wreckage of bad science
The negative consequences of corrupt or careless science are immense and incalculable. Like an automotive junkyard on the outskirts of town, the wreckage of bad science just keeps piling up.
A survey of the destruction suggests the most harmful outcome of bad science is the public’s loss of confidence. When suspicion emerges in the minds of everyday citizens that the country’s scientific apparatus and processes can’t always be trusted, scientifically based policy proves more difficult to promote. Moreover, an environment of distrust provides fertile ground for conspiracy theorists to exploit the uncertainty in the public mind to achieve their political aims.
A notorious example of this phenomenon is popularly known as Climategate. The 2009 controversy involved hacked emails from a prominent climate research group at the University of East Anglia in the United Kingdom that seemed to suggest the scientists had colluded to exaggerate the potential effects of climate change and to suppress dissenting views when peer reviewing others’ work. Despite several formal investigations that concluded there was no scientific misconduct, the controversy spawned conspiracy theories about climate science that still have an impact today.
The story of vaccination’s purported link to autism is perhaps the most widely known example of how bad science can have a severe adverse societal impact over time. The saga began in 1998 when an influential study by Dr. Andrew Wakefield published in The Lancet, one of the world’s top medical journals, suggested that the measles-mumps-rubella (MMR) vaccine may cause autism. The study appeared just before the World Health Organization began declaring that measles had been eliminated in many countries because of high levels of vaccination. The Wakefield study attracted a great deal of attention in both the medical and lay press and fueled a global anti-vaccine movement. Concerns among parents over the rising incidence of autism diagnoses presumably drove the intense interest in the study.
Upon further review, critics identified serious problems with Wakefield’s study, including a very small sample size (just 12 patients) and other major methodological concerns such as lack of a proper control group and selective reporting of data. Earthshaking implications like those stemming from Wakefield’s work should be supported with stronger evidence that would typically come in the form of a randomized, blinded, controlled study with hundreds more patients.
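To see concretely why a dozen patients cannot support such a claim, consider a standard power calculation for comparing rates between two groups. The sketch below (in Python, using the usual normal-approximation formula for two proportions) is purely illustrative; the baseline rate and effect size are hypothetical numbers chosen for the example, not figures from any actual study.

```python
# Illustrative power calculation: how many patients per group would a
# properly designed two-arm study need to detect a difference in rates?
# The rates used below are hypothetical, chosen only to make the point.
import math
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sided test of two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance
    z_b = NormalDist().inv_cdf(power)           # z-value for the desired power
    p_bar = (p1 + p2) / 2                       # pooled proportion under the null
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical scenario: a 1 percent baseline rate doubling to 2 percent.
print(sample_size_per_group(0.01, 0.02))   # roughly 2,300 per group, not 12
```

Even under generous assumptions, the answer runs to thousands of participants per group, which is one reason the Danish replication described below enrolled hundreds of thousands of children.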
Investigators also found that Wakefield had a serious financial conflict: He was a paid consultant to lawyers suing vaccine manufacturers. The consulting began before the controversial study was published, and Wakefield reportedly received more than 400,000 pounds (over $500,000) that he never disclosed.
More than a decade after the original article appeared, The Lancet officially issued a mea culpa from the editors, stating “we fully withdraw this paper from the published record.” The U.K. medical authorities also formally disciplined Wakefield, revoking his medical license, among other sanctions.
But the wreckage from this single instance of misconduct had already piled up. Even though the Wakefield study has been thoroughly debunked and officially retracted, conspiratorial thinking about vaccines and a purported link to autism still lingers worldwide. Governments and universities have spent tens of millions of dollars on studies to counter the anti-vaccination narrative that circulated so widely. For example, in one landmark study involving 650,000 kids, Danish investigators compared the incidence of autism diagnoses in vaccinated versus unvaccinated children; they found no difference.
More tragically, presumably in part because of the relentless media coverage, MMR vaccination rates dropped below the levels necessary to achieve herd immunity in many countries. An outbreak that began in Texas this year and spread to other states has so far affected more than 500 children. Two children have died from the disease. Worldwide, in 2023 (the last year with complete reporting), over 100,000 measles-related deaths occurred — all of them likely preventable with vaccination. As one academic physician bluntly summarized in the aftermath of the decades-long controversy: “Measles does not cause autism in children. But it kills them.”
In retrospect, the changing media landscape certainly bears some culpability in this vaccination fiasco. Some pundits noted that the Wakefield study amounted to “science by press conference” because of the way the traditional press of the time sensationalized the story. In today’s digital environment, studies analyzing social media content about vaccines confirm that a substantial proportion of online information promotes what has come to be called “vaccine hesitancy,” particularly on websites focused on alternative medicine.
Social media platforms that enable anyone, including the misinformed or the politically motivated, to become an independent publisher with worldwide reach can be especially damaging. Cleverly crafted TikTok spots, Instagram videos and podcasts reaching millions of people can effectively snuff out the voices of legitimate experts delivered through less popular means, such as seldom publicized medical journals or professional conferences. With the recent confirmation of Robert F. Kennedy Jr. as the U.S. Health and Human Services secretary, Americans might reasonably wonder what information (or misinformation) influenced his uncertain stance on vaccination.
A taxonomy of bad science
The scientific community has not yet reached a consensus on categorizing unscrupulous or substandard science like Wakefield’s work, but many professional organizations have called attention to specific kinds of trouble. No official taxonomy exists, but it’s clear that bad science appears in many forms.
Overtly fraudulent science is perhaps the most egregious offender because it involves works that are sometimes pulled entirely out of thin air and even offered up for sale in order to advance a researcher’s career. A variation on this theme is the publication of studies in which the data are “massaged” to create a more impressive result. This approach can include, for example, the deceitful statistical manipulation of the data or the dishonest enhancement of study images and charts.
The taxonomy of bad science also includes “conflicted science,” where a scientist has a financial interest in the results of their research or in the content of their presentations made at medical conferences; that is, their bank account balance can get a boost if their research or presentations support favorable claims about certain drugs or medical devices.
Consider the case of Dr. Charles Nemeroff of Emory University, an internationally prominent psychiatrist. Over the course of about seven years, Nemeroff’s consulting fees from drugmakers totaled nearly $3 million; at the same time, he conducted federally funded research related to some of the same pharmaceutical companies’ products. Nemeroff apparently failed to disclose a substantial portion of this income to Emory or the National Institutes of Health, the research funding agency.
Concern over physicians from leading institutions engaging in similar conduct prompted a congressional investigation and eventually culminated in passage of the “Sunshine Act” in 2010, which requires companies to disclose these business relationships. Scientists and company executives are presumably more reluctant to get involved in these dubious financial arrangements when they know their mothers might read about them in the papers. By rule, I disclose my financial conflicts before every professional presentation so that the audience can judge for themselves.
Second only to overtly fraudulent research, irreproducible science is perhaps the category of bad science that is most concerning to scientists themselves. One of the cornerstones in the philosophy of science is that for scientific results to be reliable, they must be reproducible. But a perusal of the scientific literature reveals a great deal of discussion about a “reproducibility crisis,” a recognition that much of what is published, perhaps 50 percent or more, cannot be replicated by other investigators.
Dr. John Ioannidis, a Stanford epidemiologist who is among the world’s foremost authorities on the reproducibility crisis, calls it, in a frightening string of alliteration, the Medical Misinformation Mess. The title of his classic article on the topic is sobering: “Why Most Published Research Findings Are False.”
The methods in the madness
Learning of the prevalence of science gone bad, a concerned public might reasonably ask how this happens. What combinations of unethical or slothful machinations could possibly give rise to such a predicament? Predatory publishers, the fishermen I referred to earlier, are one of the main culprits in the process of disseminating bad results. These publishers prey on scientists, particularly in the academic world, who desperately need evidence of research productivity to be promoted up the academic ladder. Operating under the aphorisms “publish or perish” and “funding or famine,” academics are not incentivized to be correct, but rather to be productive.
Anonymous surveys of academics indicate that the temptation to engage in deceptive practices, presumably as a means of increasing one’s scholarly productivity, is powerful. About 2 percent of research scientists admit to having fraudulently altered study results; substantially more (about one-third) fess up to observing unethical behavior in a colleague.
Predatory publishing is typically defined as an abusive “open access” model that charges fees for getting an article published, usually in an “online only” journal, under the pretense of providing genuine editorial services. Many of these publishers are a complete sham (publicizing a fake address, for example); others are more akin to incompetent amateurs for whom profit is likely the main goal, even though they may occasionally publish decent work. Astonishingly, there are cases of “hijacked journals,” where fraudsters clone a counterfeit version of a reputable journal’s website in order to con authors out of publication fees.
These “pay to publish” scams typically charge up to $1,000 to publish an article. In my field of anesthesiology, intensive care and pain medicine, a recent study identified over 200 journals, associated with over 80 predatory publishers, that have published almost 13,000 articles since 2008. Considering that anesthesiology is just a small part of medicine, those 13,000 articles are likely just the tip of the iceberg.
Another form of predatory publishing is the so-called “paper mill,” a whole new level of scientific fraud. These outfits work in the shadows to produce fake research, often with the aid of artificial intelligence programs, and sell authorship to academics desperate to get something published that passes for real research. Because many of these mills operate in China, they have had a particularly damaging effect on the reputations of Chinese scholars. The shocking scale of the problem far surpasses what could be written off as a few bad apples.
A surprising feature of these academic marauders is their astonishingly effective marketing. These fishermen are skilled in crafting their lures. Over the course of a single year, a physician or scientist might receive nearly 1,000 unsolicited emails, many from suspect journals, inviting them to submit a scientific report of some kind.
I typically receive two or three of these each weekday morning, usually beginning with an odd-sounding salutation, like “Dear Professor, greetings for the day!” I have even received unsolicited invitations to become editor-in-chief of an obscure journal, always one I have never heard of (or that may not yet exist). Using a truckload of flattery, the conspirators presumably seek to leverage an unsuspecting scientist’s reputation for their own profit. For academics trying to bolster their reputations, the flattery can be persuasive. The scheme sometimes works because many scientists don’t realize they are targets of unscrupulous entities that could do their reputations more harm than good.
Hope for a scientific cure
As the pile of bad science mounts, the scientific community is mobilizing impressively to clean it up. A brief survey of the effort illustrates the sophistication of the techniques employed.
Various organizations are developing tools, including software and data-sharing channels, to help detect fake science. For example, the International Association of Scientific, Technical and Medical Publishers (STM), which represents many of the largest publishers in operation, has pooled its members’ resources to combat the bad science problem. Its software can spot plagiarism, AI-generated text and mendaciously manipulated images. STM also offers “Master Classes” in scientific integrity, sharing its expertise with publishers large and small and promoting reform by making best practices openly available.
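By way of illustration, one simple idea underlying plagiarism detection is measuring how much two texts overlap in short word sequences, or “shingles.” The toy sketch below shows only that basic principle; it is an illustrative simplification, not the actual software STM members use.

```python
# Toy plagiarism check: score overlap between two texts using word
# n-gram "shingles." A score near 1.0 suggests heavy copying.
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of n-word shingles in a normalized text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two texts' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "the vaccine was administered to all children in the cohort"
suspect = "the vaccine was administered to all children in the study group"
print(f"similarity: {jaccard_similarity(original, suspect):.2f}")  # 0.70
```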
One of the oldest and best-known scientific integrity operations is Retraction Watch (RetractionWatch.com), founded in 2010. This blog catalogs official scientific article retractions and curates them in helpful ways. The site’s creators point out that peer review extends well beyond the publication date; serious problems in some works go undetected until scientific integrity gumshoes uncover them years later. A visit to RetractionWatch.com reveals the scale of the problem: more than 50,000 retracted studies are in its database! These retractions sometimes involve papers in the most prestigious journals and even papers authored by Nobel Prize winners. Appallingly, about two-thirds of retractions are thought to result from scientific misconduct, not honest mistakes.
Reputable journals can assist the reform efforts by focusing on the reproducibility crisis, insisting that scientists provide sufficient detail in their publications so that other scientists can reproduce their results if need be. Many journals now require authors to complete a “reproducibility checklist” and to submit raw data, programming code and experimental protocols when requested. Similarly, publishers and funding agencies should willingly support replication attempts of certain critical studies, even when the original results have already been published.
AI will certainly play an indispensable role in the reform effort. AI-augmented analysis is already being used with great success, in part because the overwhelming volume of scientific literature mandates that techniques to identify fraudulent activity must be at least partially automated. AI can identify fraudulent images, a daunting task that even highly trained human eyes cannot perform reliably.
Ironically, AI-boosted software is also employed to detect AI-generated text in fraudulent manuscripts, often by spotting what are sometimes called “tortured phrases.” An AI-concocted manuscript might render breast disease as “bosom disease” or lactose intolerance as “lactose bigotry,” odd phrases that slip into print because predatory journals may not conduct any real review that would catch them.
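As a rough sketch of how such screening might work, the snippet below flags known tortured phrases with simple pattern matching. The phrase list is a tiny illustrative stand-in (the first two pairs come from the examples above; “counterfeit consciousness” for artificial intelligence is another documented example), not any real tool’s database.

```python
# Minimal tortured-phrase screener: match a text against a small list of
# known mangled paraphrases and report the standard term each one garbles.
import re

# Known "tortured" paraphrases mapped to the terms they likely mangle.
TORTURED_PHRASES = {
    "bosom disease": "breast disease",
    "lactose bigotry": "lactose intolerance",
    "counterfeit consciousness": "artificial intelligence",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely intended term) pairs found in text."""
    lowered = text.lower()
    return [(phrase, standard)
            for phrase, standard in TORTURED_PHRASES.items()
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)]

sample = "We study bosom disease using counterfeit consciousness methods."
for phrase, standard in flag_tortured_phrases(sample):
    print(f"suspicious: '{phrase}' (likely '{standard}')")
```

Real screeners work from far larger phrase databases and statistical language models, but the underlying idea is this simple: genuine authors rarely produce these paraphrases, so their presence is a strong signal of machine-rewritten text.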
Academic institutions can also join in the reform effort by lowering the pressure on professors to publish so often. As a young academic, I learned the conventional wisdom that one’s productivity can be enhanced by identifying the “least publishable unit” (i.e., chopping up a study into smaller pieces), thereby increasing the overall number of papers. Academia can address this problem by changing the way it evaluates a faculty member’s work. A promising development in this arena is the Declaration on Research Assessment (DORA), a document that encourages universities to focus on the quality rather than the quantity of a professor’s work when considering promotions and tenure; worldwide, more than 2,500 institutions have signed onto DORA.
Even the average person on the street has some responsibility to improve the work of science. A comprehensive 2016 report by the National Academy of Sciences provided strong evidence that increased individual scientific literacy benefits society as a whole by improving the public’s ability to evaluate and understand scientific advances and the corresponding policy implications.
Along these lines, we need more articulate spokespeople who can explain the advances of science to the common person. As some experts have opined, for scientific facts to be robust and accepted by the masses, they must be supported by trusted institutions and a reliable media apparatus. Accordingly, institutions and media outlets must seek out people with the talent to vet and articulate complex scientific concepts for everyday people. Gifted communicators like Malcolm Gladwell and Carl Sagan are famous examples, but many high school science teachers exhibit similar talents in their pedagogy.
The popularity of books by people like Gladwell suggests a strong market for science to be explained in simple, straightforward language. Ernest Rutherford, a pioneer in nuclear physics, famously quipped: “A theory that you can’t explain to a bartender is probably no damn good.” We need more capable spokespeople who can explain science to bartenders and the rest of us.
Finally, humility will play a key role in such reform efforts. Most levelheaded people understand that science is imperfect, that scientists make mistakes. I’ve certainly made and corrected mistakes in my studies along the way. Where scientists are concerned, what infuriates the public is an elitist, “we-are-the-smart-ones” posture. As the scientific community’s failings amply illustrate, science needs enough humility to admit when it doesn’t know things; engaging honestly with the public works best. Similarly, the public ought to show the scientific establishment the respect it deserves, exhibiting patience and understanding as science works to clarify, sometimes in fits and starts, what is demonstrably true.
There is reason for hope. Science is self-correcting and inexorably marches toward the truth. I esteem the overwhelming majority of my fellow scientists as truth seekers who are dedicated to applying science for the good of humanity. Increasingly aware of the threat, the scientific community is mounting an organized, credible response to the onslaught of bad science. And society, recognizing the importance of science in the world, is certainly rooting for the scientific community to succeed.
This story appears in the May 2025 issue of Deseret Magazine.