Science By Press Release

If a study has a press release strategy, it’s fair to assume its motivations are more aligned with good press than with good science. On March 18th, the world was treated to another negative headline: “Ivermectin Didn’t Reduce Covid-19 Hospitalizations in Largest Trial to Date”. It wasn’t possible for the journalist to interrogate that claim, because the study hadn’t been published. Nonetheless, the headline was dutifully beamed out by the Wall Street Journal in another example of “Science by Press Release.”


PR teams plan to get ‘good press’ before the study is published so that even if the study is terrible, the headline is shared far and wide before its claims can be tested. If headlines are what you want, it’s a win-win strategy.

Two weeks after the world was gifted the headlines that “Ivermectin doesn’t work”, we are presented with the published study. It can now be interrogated, but only after the headlines have already made their way around the world, and I don’t expect to see headlines about any of the issues I’m about to detail. All is not lost, however: we can undo some of that damage if these problems reach people in a position to do something about them.

This is the Ivermectin TOGETHER trial, something we’ve waited years for. It’s finally here! And as many of us predicted, there are serious issues with it. Even those who expected problems were not prepared for how many would be discovered. The deeper digging will come later; for now, this post focuses only on the numerical issues that discredit the study.

Missing patients in the subgroup analysis

There were two arms to the study, Ivermectin and placebo. The study enrolled 679 patients in each arm, so 679 patients took Ivermectin and 679 took a placebo.

Part of the study looked at subgroups within those arms to compare how they did. It broke down patients by weight, cardiovascular disease, lung disease and ‘time since onset of symptoms’. That’s the number of days the patient had shown symptoms when they presented to the clinic to enrol in the study.

Here’s the issue: the sizes of the broken-down groups should all add up to 679 patients, but they do not. In the Ivermectin arm of the study, the age subgroup lists 335 patients aged 50 or older and 295 patients younger than 50. That only adds up to 630, suggesting that 49 patients were neither younger than, equal to, nor older than 50 years old. These problems run right through the subgroup analysis, where the totals rarely add up to 679 patients. A tally of the missing patients is shown in the table below the original data.

Original data from Figure 2 – The Subgroup Analysis
Adding up the subgroups shows there are missing patients. Why?

The biggest discrepancy is in the ‘Time since symptom onset’ subgroup, where 155 patients are missing from the Ivermectin arm and 162 from the placebo arm. It suggests that patients may have been enrolled who were neither 0-3 days nor 4-7 days from symptom onset, which should have excluded them from the study.

That raises the question: what happened to these missing patients? Why were they excluded from the subgroup analysis? It is not a small discrepancy. Across both arms of the ‘Time since symptom onset’ subgroup, 317 patients are missing, which is 23% of the entire study sample of 1,358.
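The arithmetic is simple enough to verify yourself. A minimal sketch, using only the counts quoted in this article (the full set of subgroup figures would need to be checked against Figure 2 of the published paper the same way):

```python
# Sanity-checking the subgroup totals against the enrolment figure.
# Counts are the ones quoted in this article, not the full Figure 2.
ARM_TOTAL = 679  # patients enrolled per arm

# Age subgroup, Ivermectin arm: >=50 years old and <50 years old
age_ivermectin = [335, 295]
missing_age = ARM_TOTAL - sum(age_ivermectin)
print(missing_age)  # 49 patients unaccounted for

# 'Time since symptom onset' subgroup: missing counts per arm
missing_onset = {"ivermectin": 155, "placebo": 162}
total_missing = sum(missing_onset.values())
share = total_missing / (2 * ARM_TOTAL)  # fraction of the whole study
print(total_missing, f"{share:.1%}")  # 317 patients, ~23.3% of the sample
```

The same subtraction applied to each row of the subgroup figure produces the tally of missing patients shown below.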

Phil Harper is an immersive documentary producer and director who is currently figuring out python and is interested in crypto. He is formerly of Truthloader. This article was originally published on The Digger.

