A previously approved report on COVID-19 vaccine effectiveness was withdrawn following concerns over its analytical approach.
By Salai Kim
A federal health agency has halted the release of a closely watched COVID-19 vaccine study that had already cleared scientific review, after officials raised concerns about how its findings were calculated.
The U.S. Department of Health and Human Services (HHS) confirmed Wednesday that a research paper slated for publication in the Centers for Disease Control and Prevention’s (CDC) Morbidity and Mortality Weekly Report (MMWR) was ultimately withheld. The decision followed internal disagreements over the study’s methodology, according to agency officials.
The report, which had been scheduled for publication on March 19, had already undergone peer review and received approval from MMWR editors, according to both current and former CDC staff. However, HHS spokesperson Andrew Nixon said the editorial process flagged issues related to how the study measured vaccine effectiveness, leading to its rejection.
In earlier comments to NBC News, Nixon noted that concerns were raised by the CDC’s acting director, Dr. Jay Bhattacharya, specifically about the statistical methods used in the analysis. The Washington Post first reported both the delay and eventual cancellation of the report.
According to an HHS official, the study’s authors declined to revise their methodology despite the objections. The approach in question is widely used in vaccine research and involves comparing COVID-19 test results among vaccinated and unvaccinated patients who seek care in hospitals or emergency departments.
This method has been employed in numerous peer-reviewed studies published in leading journals such as Pediatrics and the New England Journal of Medicine. Using this same framework, the now-withdrawn CDC study found that COVID-19 vaccines reduced COVID-19-related hospital and emergency department visits among otherwise healthy adults by roughly 50% during the winter season.
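The comparison described above resembles what epidemiologists call a test-negative design: among patients who seek care and are tested, vaccine effectiveness is estimated from the odds of vaccination in test-positive cases versus test-negative controls. The sketch below illustrates that arithmetic only; the function and all counts are hypothetical and are not taken from the CDC study.

```python
# Hypothetical sketch of a test-negative-style effectiveness estimate.
# All counts are invented for illustration, not from the withheld study.

def vaccine_effectiveness(vax_pos, vax_neg, unvax_pos, unvax_neg):
    """Estimate VE as 1 minus the odds ratio of vaccination
    among test-positive cases versus test-negative controls."""
    odds_ratio = (vax_pos * unvax_neg) / (unvax_pos * vax_neg)
    return 1 - odds_ratio

# Example: among symptomatic patients tested in emergency departments,
# vaccinated: 100 positive, 400 negative;
# unvaccinated: 200 positive, 400 negative.
ve = vaccine_effectiveness(100, 400, 200, 400)
print(f"Estimated VE: {ve:.0%}")  # odds ratio 0.5 -> VE of 50%
```

With these made-up counts the odds ratio is (100×400)/(200×400) = 0.5, giving an estimated effectiveness of 50%, in the same range the article reports. Real analyses additionally adjust for factors such as age, calendar time, and prior infection.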
HHS officials did not provide detailed reasoning for rejecting the methodology in this case but suggested that factors such as prior infections, patient behavior, and differences in healthcare-seeking patterns could skew results.
Some experts dispute those concerns. Dr. Fiona Havers, a former CDC official based in Atlanta, said the methodology is specifically designed to account for differences in healthcare access and utilization. She also noted that widespread prior infection across the U.S. population reduces the likelihood that such factors would significantly distort findings.
“No study design is flawless,” Havers said, “but there hasn’t been a realistic or ethical alternative proposed for generating timely estimates of vaccine performance.”
The decision has also revived longstanding concerns about political influence over public health communications. During the first Trump administration, critics accused political appointees of attempting to shape or suppress scientific findings published in the MMWR.
Following Donald Trump’s return to office last year, publication of the MMWR was temporarily paused before resuming in a reduced format.
“Scientific reports are routinely reviewed at multiple levels to ensure they meet the highest standards before publication,” Nixon said, adding that concerns about the study’s analytical approach ultimately led to its rejection.
“Health care professionals rely on the MMWR for timely, objective, and fact-based information about the nation’s public health,” said U.S. Sen. Dick Durbin of Illinois.
“Muzzling scientists and doctors on how to prevent Americans from being hospitalized can have deadly consequences,” Durbin added. “The CDC must abandon plans to restrict this kind of critical research.”
The halted publication underscores ongoing tensions within federal health agencies over how scientific data is evaluated and communicated. The dispute reaches beyond a single study: when established research methods become points of contention, questions arise about the consistency and transparency of the review process itself. How such disputes are resolved will likely shape public trust in future health guidance and in the independence of scientific research.