A Call for Peer Re-Reviews of Articles on Covid Vaccines
Update: 2025-10-08
By Eyal Shahar at Brownstone.org.
In the years I served as an associate editor (for The American Journal of Epidemiology), I saw the entire spectrum of "peer reviews" - from meticulous, thoughtful critiques whose authors evidently invested several hours in the task to sketchy reviews that reflected carelessness and incompetence. I have read friendly reviews by admirers of the authors and hostile reviews by their enemies. (It is not difficult to tell from the tone.) In the practice of science, human beings still behave like human beings.
Matters got worse during the pandemic. Studies that praised the Covid vaccines were quickly certified "peer-reviewed," whereas critical, post-publication peer review was suppressed. As a result, we now have a historical collection of published poor science. It cannot be erased, but it is time to start correcting the record.
Biomedical journals are not the platform. First, there is no formal section for open peer reviews of articles that were published long ago. Second, editors have no interest in exposing falsehoods that were published in their journals. Third, the censorship machine is still in place. So far, I have been able to break it only once, and it wasn't easy.
So, how can we try to correct the record, and where?
Let me make a suggestion to my colleagues in epidemiology, biostatistics, and related methodological fields who preserved their critical thinking during the pandemic. Choose one or more articles about the Covid vaccines and submit your peer review to Brownstone Journal. If it is interesting and well-written, there is a good chance it will be posted. I advise cherry-picking: find those peer-reviewed articles that irritated you most, either because they were pure nonsense or because the correct inference was strikingly different. And if you posted short critiques on Twitter (now X) or thorough reviews on other platforms, expand, revise, and submit them to Brownstone. Perhaps we can slowly create an inventory of critical reviews, restoring some trust in the scientific method and in biomedical science.
Here is an example.
A Review and a Re-Analysis of a Study in Ontario, Canada
Published in the British Medical Journal in August 2021, the paper reported the effectiveness of the mRNA vaccines in early 2021, shortly after their authorization.
This research was typical of vaccine studies from that time. Effectiveness was estimated in a "real-world" setting; namely, an observational study during a vaccination campaign. The study period (mid-December 2020 through mid-April 2021) included the peak of a Covid winter wave in early January. We'll discuss later a strong bias called confounding by background infection risk.
The design was a variation of the case-control study, the test-negative design. Eligible subjects underwent a PCR test because of Covid-like symptoms. Cases tested positive; controls tested negative. As usual, odds ratios were computed, and effectiveness was estimated as 1 minus the odds ratio (expressed as a percentage). The sample size was large: 53,270 cases and 270,763 controls.
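The arithmetic behind the test-negative design is simple. As a sketch (with hypothetical cell counts for illustration, not the study's actual 2x2 table, which is not reproduced here):

```python
# Test-negative design: sketch with hypothetical counts.
# Cases = symptomatic, PCR-positive; controls = symptomatic, PCR-negative.

def vaccine_effectiveness(cases_vax, cases_unvax, controls_vax, controls_unvax):
    """Return (odds ratio, effectiveness in percent).

    The odds ratio compares the odds of vaccination among cases
    to the odds of vaccination among controls; effectiveness is
    1 minus the odds ratio, expressed as a percentage.
    """
    odds_ratio = (cases_vax / cases_unvax) / (controls_vax / controls_unvax)
    return odds_ratio, (1 - odds_ratio) * 100

# Hypothetical example: vaccination is much rarer among cases than controls.
odds, ve = vaccine_effectiveness(cases_vax=100, cases_unvax=1000,
                                 controls_vax=500, controls_unvax=500)
print(f"odds ratio = {odds:.2f}, effectiveness = {ve:.0f}%")
```

With these made-up numbers, the odds ratio is 0.10 and the estimated effectiveness is 90%, which illustrates the scale of the headline figures reported below.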
The authors reported the following key results (my italics):
"Vaccine effectiveness against symptomatic infection observed ≥14 days after one dose was 60% (95% confidence interval 57% to 64%), increasing from 48% (41% to 54%) at 14-20 days after one dose to 71% (63% to 78%) at 35-41 days. Vaccine effectiveness observed ≥7 days after two doses was 91% (89% to 93%)."
Like almost every study of effectiveness, the authors discarded early events. As explained elsewhere, this practice introduces a bias called immortal time, or case-counting window bias. Not only does it obscure possible early harmful effects, but it also effectively leads to overestimation of effectiveness. RFK, Jr. alluded to this bias in non-technical terms (see video clip).
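The mechanism of the bias can be shown with a toy cohort (assumed numbers, not from the study). Suppose the vaccine does nothing: vaccinated and unvaccinated subjects face the same constant daily risk. If cases occurring in the first 14 days after dose 1 are not counted as vaccinated - in one common variant of the practice, such people are classified as unvaccinated - a truly useless vaccine still appears effective:

```python
# Case-counting window bias: toy cohort with zero true effectiveness.
# All numbers are assumed for illustration.
daily_risk = 0.002
follow_up = 60          # days of follow-up
n = 10_000              # subjects per group

# True counts: identical risk in both groups (true effectiveness = 0%).
unvax_cases = n * daily_risk * follow_up
vax_cases_total = n * daily_risk * follow_up

# Biased counting: cases in the first 14 days after dose 1 are removed
# from the vaccinated count and reattributed to the unvaccinated group.
early = n * daily_risk * 14
vax_counted = vax_cases_total - early
unvax_counted = unvax_cases + early

risk_ratio = (vax_counted / n) / (unvax_counted / n)
print(f"apparent effectiveness: {(1 - risk_ratio) * 100:.0f}%")
```

In this sketch the counting rule alone manufactures an apparent effectiveness of about 38% where the true value is zero; variants that merely discard early cases (rather than reattribute them) produce a smaller but still upward bias.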
The correct approach is simple. We should estimate effectiveness from the administration of the first dose to later timepoints (built-up immunity). My tab...