In a revealing look at the impact of funding on medical research, a new study found that clinical trials funded by drug companies and other for-profit entities were more likely to report positive findings than similar trials funded by nonprofit groups.
Trials that were jointly funded by for-profit and nonprofit organizations had positive findings at a rate that fell about midway between those two extremes.
"I'm not surprised that that is the case," said Adil Shamoo, a professor of biochemistry and bioethics at the University of Maryland, Baltimore, and co-founder of Citizens for Responsible Care and Research, which lobbies for the rights of patients and clinical trial participants.
Shamoo was not involved in the study, which was led by researchers at Harvard Medical School and appears in the May 17 issue of the Journal of the American Medical Association.
A study published earlier this year found that industry is paying for more and more medical research, with a full half of studies now funded solely by the private sector.
And according to background information in the article, surveys of randomized trials conducted in the 1990s found that for-profit trials were more likely to report positive findings. Those surveys raised questions about the design and conduct of industry-funded clinical trials, and they led to recommendations for improving academic oversight of industry-sponsored research and for ensuring that all clinical trials are registered and published.
It has not been clear, however, if this emerging recognition has led to any improvements.
To see if anything had changed, the study authors reviewed 324 trials involving cardiovascular medicines published between January 1, 2000, and July 30, 2005, in three top medical journals: JAMA, The Lancet and the New England Journal of Medicine.
Twenty-one of the studies cited no funding source at all.
Of the 104 funded solely by nonprofits, 49 percent reported evidence favoring the newer treatment while 51 percent favored the existing standard of care or showed no difference between the two.
Of the 137 trials funded solely by for-profit entities, more than two-thirds (67.2 percent) favored the newer treatment.
There were 62 jointly funded trials, of which 56.5 percent favored the newer treatment.
Among 205 randomized trials evaluating new drugs, 39.5 percent of nonprofit trials, 54.4 percent of jointly funded trials, and 65.5 percent of for-profit trials favored the newer treatments, the researchers found.
Of 39 randomized trials looking at cardiovascular devices, 50 percent of nonprofit trials, 69.2 percent of jointly funded trials, and 82.4 percent of for-profit trials favored newer devices.
Regardless of the funding source, trials which used surrogate endpoints tended to report more positive findings (67 percent) than those using clinical endpoints (54.1 percent). A surrogate endpoint measures an outcome that is predictive of a clinical endpoint. So, for example, a clinical endpoint could be a heart attack, while a surrogate endpoint might be a certain blood marker that reflects a high risk for heart attack.
In response to the study, the Pharmaceutical Research and Manufacturers of America (PhRMA) issued a statement Tuesday saying, "The JAMA paper ... is informative and supports the fact that America's pharmaceutical research companies conduct top-quality, cutting-edge research on life-saving medicines so that patients can lead longer, healthier lives."
PhRMA Senior Vice President Caroline Loew added in the statement, "To help ensure quality, informative and reliable conclusions of a particular clinical trial, PhRMA member companies conduct carefully structured clinical trials at multiple locations -- to reduce the likelihood of possible single investigator bias -- and routinely have large numbers of patients involved with such trials."
Some experts believe that study design is a main reason for such biases. "The outcome can be tremendously influenced literally by the A-to-Z of a clinical trial, by the type of question, the design of experiment, the type and characteristics of the human subjects selected, how you massage the data and analyze it, and where and what portion you publish," Shamoo said. "There are literally about 15 or 20 steps that can influence any experiment, not just a clinical trial."
The authors speculated that other factors might explain their findings. For example, negative findings are unlikely to be followed up with additional studies. Positive trials, on the other hand, are much more likely to get industry funding for continued study.
The U.S. Food and Drug Administration also requires that any positive finding be replicated in subsequent trials, which may also help explain the findings.
Regardless of the cause, Shamoo said there is no single answer to the problem. Possible solutions include having multiple sources conduct similar trials and openly acknowledging apparent bias.
"The solution is multifaceted," he said. "As usual, there is no simple, black-and-white answer."