Hung H.M.J., OB OTS CDER | Wang S.-J., Office of Biostatistics
Biometrical Journal | Year: 2010

Multiple testing problems are complex in evaluating statistical evidence in pivotal clinical trials for regulatory applications. A common practice, however, is to employ a general and rather simple multiple comparison procedure to handle these problems. Multiple comparison adjustments are applied to ensure proper control of type I error rates. In practice, however, the emphasis on type I error rate control often leads to the choice of a statistically valid multiple test procedure while common sense is overlooked. The challenges begin with confusion in defining the relevant family of hypotheses for which the type I error rates need to be properly controlled. Multiple testing problems come in a wide variety, ranging from jointly testing multiple doses and endpoints, composite endpoints, and non-inferiority and superiority, to studying the time of onset of a treatment effect and searching for a minimum effective dose or a patient subgroup in which the treatment effect lies. To select a valid and sensible multiple test procedure, the first step should be to tailor the selection to the study questions and to the ultimate clinical decision tree. Evaluation of statistical power performance should then come into play to fine-tune the selected procedure. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
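As a concrete illustration of the type I error rate control discussed in this abstract, the following sketch applies the Holm step-down adjustment, one simple family-wise error rate controlling procedure, to a hypothetical family of dose/endpoint comparisons. The p-values, the family definition, and the one-sided 0.025 level are illustrative assumptions and are not taken from the paper.

# Minimal sketch of family-wise type I error control via the Holm step-down
# procedure; the p-values and alpha level below are hypothetical.
import numpy as np

def holm_adjust(pvals):
    """Return Holm-adjusted p-values (controls the family-wise error rate)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Hypothetical one-sided p-values for a family of dose/endpoint hypotheses.
family = {"high dose, primary endpoint": 0.004,
          "low dose, primary endpoint": 0.030,
          "high dose, secondary endpoint": 0.012}
alpha = 0.025
adjusted = holm_adjust(list(family.values()))
for (label, raw), adj in zip(family.items(), adjusted):
    print(f"{label}: raw p = {raw:.3f}, Holm-adjusted p = {adj:.3f}, "
          f"reject = {adj <= alpha}")

The point of the abstract, of course, is that such a generic adjustment controls the error rate but may not match the clinical decision tree; the sketch only shows the mechanics of the adjustment itself.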


Wang S.-J., Office of Biostatistics | Blume J.D., Vanderbilt University
Pharmaceutical Statistics | Year: 2011

We present likelihood methods for defining the non-inferiority margin and measuring the strength of evidence in non-inferiority trials using the 'fixed-margin' framework. Likelihood methods are used to (1) evaluate and combine the evidence from historical trials to define the non-inferiority margin, (2) assess and report the smallest non-inferiority margin supported by the data, and (3) assess potential violations of the constancy assumption. Data from six aspirin-controlled trials for acute coronary syndrome and data from an active-controlled trial for the same indication, the Organisation to Assess Strategies for Ischemic Syndromes (OASIS-2) trial, are used for illustration. The likelihood framework offers important theoretical and practical advantages when measuring the strength of evidence in non-inferiority trials. Besides eliminating the influence of sample spaces and prior probabilities on the 'strength of evidence in the data', the likelihood approach maintains good frequentist properties. Violations of the constancy assumption can be assessed in the likelihood framework when it is appropriate to assume a unifying regression model for the trial data, with a constant control effect (a control rate parameter and a placebo rate parameter) across the historical placebo-controlled trials and the non-inferiority trial. In situations where the statistical non-inferiority margin is data driven, lower likelihood support interval limits provide plausibly conservative candidate margins. © 2011 John Wiley & Sons, Ltd.
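As a rough illustration of the likelihood support intervals mentioned above, the sketch below computes a 1/8 likelihood support interval for a control-versus-placebo effect under a normal approximation to the likelihood. The log relative-risk estimate and standard error are invented for illustration; they are not the aspirin-trial or OASIS-2 values, and the paper's actual likelihood computations are not reproduced here.

# Minimal sketch: a 1/k likelihood support interval under a normal
# approximation, L(theta) proportional to exp(-0.5*((theta - est)/se)**2).
# The estimate and standard error below are hypothetical.
import math

def support_interval(estimate, se, k=8.0):
    """Return the 1/k likelihood support interval (all theta with L >= L_max/k)."""
    half_width = se * math.sqrt(2.0 * math.log(k))
    return estimate - half_width, estimate + half_width

# Hypothetical pooled control-vs-placebo effect on the log relative-risk scale.
log_rr_est, log_rr_se = -0.35, 0.10
lo, hi = support_interval(log_rr_est, log_rr_se, k=8)
print(f"1/8 support interval for the log relative risk: ({lo:.3f}, {hi:.3f})")
# Per the abstract, the support-interval limit closer to zero (here `hi`) is a
# conservative estimate of the control effect and a natural starting point for
# a data-driven non-inferiority margin on this scale.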


Shults J., University of Pennsylvania | Guerra M.W., Office of Biostatistics
Statistics in Medicine | Year: 2014

This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and a binary outcome variable. © 2014 John Wiley & Sons, Ltd.
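Although the abstract does not spell out the estimating equations, the working correlation structure itself is simple to describe: for measurement occasions j < k, the unstructured decaying product structure sets corr(Y_ij, Y_ik) equal to the product of the adjacent-pair correlations between occasions j and k. The sketch below builds such a matrix from hypothetical adjacent correlations and, for binary outcomes, checks the standard feasibility bounds implied by hypothetical marginal means; the parameter values are assumptions for illustration, not results from the paper.

# Minimal sketch of the unstructured decaying product working correlation:
# corr(Y_j, Y_k) = alpha_j * alpha_{j+1} * ... * alpha_{k-1} for j < k.
# The adjacent correlations and Bernoulli means below are hypothetical.
import numpy as np

def decaying_product_corr(adjacent):
    """Build the correlation matrix implied by adjacent-pair correlations."""
    t = len(adjacent) + 1                     # number of measurement occasions
    r = np.eye(t)
    for j in range(t):
        for k in range(j + 1, t):
            r[j, k] = r[k, j] = np.prod(adjacent[j:k])
    return r

def bernoulli_bounds(p_j, p_k):
    """Feasible correlation range for two Bernoulli variables with means p_j, p_k."""
    o_j, o_k = p_j / (1 - p_j), p_k / (1 - p_k)
    lower = -np.sqrt(min(o_j * o_k, 1.0 / (o_j * o_k)))
    upper = np.sqrt(min(o_j / o_k, o_k / o_j))
    return lower, upper

adjacent = [0.6, 0.5, 0.4]                    # hypothetical alpha_1..alpha_3
means = [0.30, 0.35, 0.40, 0.45]              # hypothetical Bernoulli means
R = decaying_product_corr(adjacent)
print(np.round(R, 3))
for j in range(len(means)):
    for k in range(j + 1, len(means)):
        lo, hi = bernoulli_bounds(means[j], means[k])
        assert lo <= R[j, k] <= hi, f"infeasible correlation for pair ({j},{k})"
print("All pairwise correlations are feasible for the assumed Bernoulli means.")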


Wang S.-J., Office of Biostatistics | Hung H.M.J., OB OTS CDER
Contemporary Clinical Trials | Year: 2013

There is a growing interest in pursuing adaptive enrichment for drug development because of its potential to achieve the goal of personalized medicine. Many versions of adaptive enrichment have been proposed across many disease indications; some are exploratory and others aim at confirmatory adaptive enrichment. In this paper, we give a brief overview of adaptive enrichment and the methodologies that are growing in the statistical literature. A case example is provided to illustrate a regulatory experience that led to drug approval. Two design elements were used for adaptation in this case example: population adaptation and statistical information adaptation. We articulate the challenges in the implementation of a confirmatory adaptive enrichment trial, including logistical aspects of the appropriate choice of study population for adaptation and the ability to follow the pre-specified rules for statistical information or sample size adaptation. We assess the consistency of treatment effect before and after adaptation using the approach laid out in Wang et al. (2013). We provide the rationale for what would be an appropriate treatment effect estimate for reporting in the drug label. We discuss and articulate design considerations for adaptive enrichment among a dual-composite null hypothesis, a flexible dual-independent null hypothesis, and a rigorous dual-independent null hypothesis. © 2013.
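As a generic illustration of statistical information adaptation with type I error rate control, the sketch below combines stage-wise p-values with a weighted inverse-normal combination test after a hypothetical interim decision to enrich to a biomarker-positive subgroup. The weights, the decision rule, and the p-values are invented, and the sketch shows only the information-combining step; in a confirmatory design it would sit inside a closed testing procedure over the dual hypotheses discussed in the paper.

# Minimal sketch of an inverse-normal combination test for a two-stage
# adaptive enrichment design; weights, decision rule, and p-values are
# hypothetical illustrations, not the paper's case example.
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1, w2):
    """Combine stage-wise one-sided p-values with pre-specified weights."""
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)          # combined one-sided p-value

# Pre-specified weights (w1**2 + w2**2 = 1), e.g. equal information per stage.
w1 = w2 = 0.5 ** 0.5
alpha = 0.025

# Hypothetical interim result: the overall population looks weak, the
# biomarker-positive subgroup looks promising, so stage 2 enrolls only the
# subgroup (population adaptation) and its sample size is re-planned.
p1_subgroup = 0.060            # stage-1 p-value in the selected subgroup
p2_subgroup = 0.008            # stage-2 p-value in the enriched population

p_combined = inverse_normal_combination(p1_subgroup, p2_subgroup, w1, w2)
print(f"Combined p-value for the selected subgroup: {p_combined:.4f}")
print("Reject at the one-sided 0.025 level:", p_combined <= alpha)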


Alosh M., OTS | Huque M.F., Office of Biostatistics
Biometrical Journal | Year: 2013

Significant heterogeneity in response across subgroups of a clinical trial implies that the average response from the overall population might not characterize the treatment effect; as noted by different regulatory guidances, this can cause concerns in interpreting study findings and might lead to restricted treatment labeling. Along with the challenges raised by such heterogeneity, however, there has recently been growing interest in taking advantage of the expected variability in response across subgroups to increase the chance of success of a trial by designing it with the objectives of establishing efficacy claims for the total population and for a targeted subgroup. For such trials, several approaches have been proposed to address the multiplicity issue arising from the two paths to success. This manuscript advocates setting a threshold on the subgroup treatment effect at the design stage to guide determination of the population labeling once significant findings for the total population have been established. Specifically, it proposes that licensing treatment for the total population requires, in addition to significant findings for this population, that the treatment effect in the least benefited (complementary) subgroup meets the treatment effect threshold at a minimum; otherwise, the treatment would be restricted to the targeted subgroup only. Setting such a threshold can be based on clinical considerations, including toxicity and adverse events, in addition to the treatment effect in the subgroup. This manuscript expands some of the multiplicity approaches to account for the threshold requirement and investigates the impact of the threshold requirement on study power. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
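The labeling logic described in this abstract can be sketched as a simple decision rule: after splitting alpha between the total-population and targeted-subgroup tests, a total-population claim additionally requires that the estimated effect in the least benefited (complementary) subgroup meet a pre-specified threshold. The alpha split, the threshold, and the trial results below are hypothetical, and the sketch does not reproduce the paper's specific multiplicity procedures or power calculations.

# Minimal sketch of the threshold-guided labeling decision; the alpha split,
# effect threshold, and trial results are hypothetical, not the paper's method.
def labeling_decision(p_total, p_subgroup, effect_complement,
                      alpha_total=0.02, alpha_subgroup=0.005, threshold=0.10):
    """Return a (hypothetical) labeling recommendation."""
    if p_total <= alpha_total:
        if effect_complement >= threshold:
            return "label for the total population"
        return "restrict label to the targeted subgroup"
    if p_subgroup <= alpha_subgroup:
        return "label for the targeted subgroup only"
    return "no efficacy claim"

# Hypothetical results: total population significant, but the complementary
# subgroup's estimated effect (e.g. a risk difference) is below the threshold.
print(labeling_decision(p_total=0.012, p_subgroup=0.003,
                        effect_complement=0.04))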
