By Caitlin Rothermel, MA, MPH
My first vivid experience with composite endpoints in clinical trials came in 2005, with the publication of the PROactive Trial (PROspective pioglitAzone Clinical Trial In macroVascular Events). PROactive reported on a composite outcome that included all-cause mortality, nonfatal myocardial infarction (MI), acute coronary syndrome, and 4 other events. It's worth noting that the original PROactive composite endpoint included 1 additional item (cardiac intervention), and that the study's secondary endpoints included all of the primary endpoint components as well as cardiovascular mortality; but more on that later.
Looking back, it seems likely that the PROactive researchers used this approach to try to ensure a significant benefit in an environment where the thiazolidinedione drug class was starting to lose its luster. However, it didn't work out that way: the hazard ratio for the primary composite endpoint was 0.90, with a 95% confidence interval of 0.80-1.02. The study's reported secondary outcome, a composite of all-cause mortality, nonfatal MI, and stroke, was statistically significant, and the study publication focused largely on this positive finding.
This publication, and the way it was interpreted, led to an uproar in the cardiovascular and diabetes communities, and the critiques of composite outcomes launched then still persist. Although composite endpoints can improve statistical efficiency compared with testing multiple single endpoints, and can reduce the likelihood of type I error, unless all endpoint components occur with similar frequency, the composite result will be driven by the more frequent events. Similarly, unless all composite components are more or less equivalent in terms of clinical relevance, it can be challenging to determine real patient benefit.
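The multiplicity point can be made concrete with a toy Monte Carlo sketch (not from the PROactive trial; the event rates, sample sizes, and trial counts below are all made-up assumptions). Under a null of no true treatment effect, testing 3 component endpoints separately at p < 0.05 produces a false-positive finding far more often than a single test of the composite, and the composite's event count is dominated by the most frequent component:

```python
import random

random.seed(0)

N_TRIALS = 2000    # simulated trials under the null (no true treatment effect)
N_PER_ARM = 500    # hypothetical patients per arm
Z_CRIT = 1.96      # two-sided p < 0.05

def two_prop_z(x1, x2, n):
    """Two-proportion z statistic for equal arm sizes."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    return 0.0 if se == 0 else (p1 - p2) / se

# Three hypothetical component endpoints with very different event rates;
# none truly differs between arms.
RATES = [0.20, 0.05, 0.01]  # frequent, moderate, rare

any_component_sig = 0  # trials "positive" on at least one separate test
composite_sig = 0      # trials "positive" on one test of the composite

for _ in range(N_TRIALS):
    hit = False
    comp_a = comp_b = 0
    for rate in RATES:
        a = sum(random.random() < rate for _ in range(N_PER_ARM))
        b = sum(random.random() < rate for _ in range(N_PER_ARM))
        comp_a += a  # crude composite: total event count (ignores overlap)
        comp_b += b
        if abs(two_prop_z(a, b, N_PER_ARM)) > Z_CRIT:
            hit = True
    if hit:
        any_component_sig += 1
    if abs(two_prop_z(comp_a, comp_b, N_PER_ARM)) > Z_CRIT:
        composite_sig += 1

print(f"False-positive rate, 3 endpoints tested separately: "
      f"{any_component_sig / N_TRIALS:.3f}")
print(f"False-positive rate, single composite test:         "
      f"{composite_sig / N_TRIALS:.3f}")
```

With 3 independent tests, the chance of at least one spurious p < 0.05 is roughly 1 - 0.95^3 ≈ 14%, versus about 5% for the composite. Note, too, that the 0.20-rate component contributes the large majority of composite events, so the composite verdict effectively reflects that one endpoint, which is exactly the skew described above.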
In 2010, Rahimi and colleagues published a systematic review of outcome selection and the role of patient-reported outcomes (PROs) in published cardiovascular trials. Their key finding? PROs in cardiovascular trials are largely underused and/or uninterpretable. Specifically, the authors developed a matrix to determine whether a PRO would be useful for individual studies; 70% of studies where PROs would have been relevant did not use them. And when PROs were used, they were likely to be incorporated into composite endpoints “dominated by outcomes of questionable importance to patients.”
Of course, a PRO can always be used as a single endpoint. But then investigators are faced again with the multiple-endpoint dilemma. And neither the composite nor the single-endpoint approach solves the issue of outcome-reporting bias, in which outcomes are retroactively altered (as was the case with PROactive) and/or only certain recorded outcomes are reported.
It seems to me that the rapid development of PRO instruments in the past 5 years has outpaced our ability to use them well. One solution that's being discussed is the creation of core outcome sets (COS). These sets would represent the minimum outcomes that should be measured in all clinical trials, specific to individual disease states. For more information, check out the Core Outcome Measures in Effectiveness Trials (COMET) initiative.
Also, the authors of the CONSORT (Consolidated Standards of Reporting Trials) Statement have released a new publication urging study investigators to select and report on PROs with greater clarity and precision.
What do you think? Is it appropriate to include PROs in a composite endpoint alongside more traditional clinical measures? Does it depend on the disease state? And are there better ways to analyze composite measures than we are currently using?
Caitlin Rothermel, MA, MPH is a medical and health economics writer. She lives in Seattle, WA with her family. You can learn more about her by visiting www.MedLitera.com.