A disturbing number of trials lack rigorous statistical methods in both their design and analysis, researchers report in a thought-provoking new study. The work, which appears in the April 2 issue of the Journal of the National Cancer Institute, shows that many published group-randomized trials may be reaching erroneous conclusions.
The group, led by David Murray, PhD, from the Ohio State University in Columbus, found that many investigators using group-randomized trials do not adequately attend to the special design and analytic challenges of these studies. Failure to do so can lead to reporting type I errors (false positives) as real effects, a problem that misleads investigators and policy makers and slows progress in cancer research.
"Murray et al. have provided the cancer research community — especially those engaged in screening for or preventing cancer — an excellent but somewhat disheartening summary of the state of the literature in regard to the use of what is an invaluable tool in the researchers' arsenal, the group-randomized trial," Timothy Church, PhD, from the University of Minnesota School of Public Health in Minneapolis, writes in an accompanying editorial.
Extending the idea of the randomized controlled trial to interventions on groups of individuals rather than one at a time is simple, but deceptively so. The benefits of randomization remain, but controlling for and quantifying the uncertainty in a group-randomized trial, compared with a basic randomized controlled trial, is no longer a simple matter, Dr. Church points out.
"Because individuals within the unit of randomization may have correlated outcomes, calculations based on sampling variability must take this correlation into account," he writes.
Ignoring this correlation will fool the researcher into believing there is more certainty to results than is justifiable, he warns. "It has taken the methodological community some time to come to grips with this challenge, and it appears that researchers still lag behind."
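To make Dr. Church's warning concrete (the figures here are illustrative, not drawn from the article): the inflation is often summarized by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the number of individuals per randomized group and ICC is the intraclass correlation among their outcomes. Even a small ICC of 0.01 in groups of 100 yields a design effect of about 2, meaning an analysis that ignores the correlation reports variances half as large as they should be and confidence intervals roughly 30% too narrow.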
Design and Analytic Challenges Overlooked
To assess the problem, Dr. Murray and his team identified group-randomized trials on cancer prevention and control by searching the peer-reviewed literature with sets of key words. They identified 75 articles published in 41 journals.
Investigators were unable to find any mention of sample-size calculations in nearly half of the papers, and fewer than a quarter reported an appropriate calculation.
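For readers unfamiliar with what an appropriate calculation involves, the sketch below uses a standard normal-approximation formula inflated by the design effect. It is a minimal sketch under assumed inputs; the function name, parameters, and example values are illustrative and are not taken from the study or from Murray et al.'s methods.

```python
# Sketch: approximate groups per arm for a group-randomized trial.
# Illustrative only; not the formula used by Murray et al.
from math import ceil
from scipy.stats import norm

def groups_per_arm(delta, sigma, m, icc, alpha=0.05, power=0.80):
    """Approximate number of groups per arm needed to detect a mean
    difference `delta` with member-level SD `sigma`, `m` members per
    group, and intraclass correlation `icc`."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired power
    deff = 1 + (m - 1) * icc        # variance inflation from clustering
    n_individuals = 2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2 * deff
    return ceil(n_individuals / m)  # convert individuals to whole groups

# Example: a 5-point difference (SD 20), 100 members per group, ICC 0.01.
# The clustering roughly doubles the requirement relative to ICC = 0.
print(groups_per_arm(delta=5, sigma=20, m=100, icc=0.01))
```

The point of the example is the DEFF factor: omitting it, and powering the study as if individuals had been randomized, is exactly what makes a sample-size calculation inappropriate for a group-randomized design.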
When evaluating the analytic methods, the authors found that more than half of the trials used invalid methods of analysis, primarily methods that understate variability and therefore overstate statistical significance.
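A short simulation illustrates this failure mode. It is a minimal sketch under assumed parameter values (the group counts, ICC, and number of simulations are invented for illustration), not a reanalysis of any reviewed trial.

```python
# Sketch: why ignoring clustering overstates significance.
# Simulates group-randomized trials with NO true treatment effect and
# counts how often a naive individual-level t-test rejects at alpha = 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
groups_per_arm, m, icc, sigma = 10, 50, 0.05, 1.0
sd_between = np.sqrt(icc) * sigma       # group-level SD
sd_within = np.sqrt(1 - icc) * sigma    # member-level SD

def simulate_arm():
    """One arm: shared group effects plus member noise, no true effect."""
    group_effects = rng.normal(0, sd_between, groups_per_arm)
    return (group_effects[:, None]
            + rng.normal(0, sd_within, (groups_per_arm, m))).ravel()

n_sims, false_positives = 2000, 0
for _ in range(n_sims):
    _, p = ttest_ind(simulate_arm(), simulate_arm())  # ignores clustering
    false_positives += int(p < 0.05)

print(f"Naive type I error rate: {false_positives / n_sims:.2f}")
# With ICC = 0.05 and m = 50, the naive rate lands well above the
# nominal 0.05; a valid analysis of group means restores it.
```

Running the sketch shows the naive test rejecting far more often than the nominal 5%, which is the mechanism behind the false findings of efficacy the authors describe.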
"Whereas flaws in design can lead to underpowered studies and perhaps point to gaps in the knowledge of those who review grant proposals, flaws in the analysis, as Murray et al. point out, can lead to false findings of efficacy," Dr. Church writes.
"The appropriate analysis of an underpowered study will at least reveal that deficit," he added. "An inappropriate analysis of a trial hides the true meaning, whether it is adequately designed or not."
Investigators need not become methodologists or statisticians to improve the methods used in their studies, the researchers emphasize.
They can benefit substantially by collaborating with methodologists or statisticians who know these issues well, in the same way they benefit by collaborating with interventionists who understand their part of the health-promotion and disease-prevention research process.
"Ten years ago, it was difficult to find methodologists or statisticians who were familiar with the design and analytic issues involved in group-randomized trials, but that is no longer the case," they write. "Expertise in this area is available at many of the major research universities and institutes."
The researchers have disclosed no relevant financial relationships.
J Natl Cancer Inst. 2008;100:483-491.