Patel N.R., Cytel, Inc. |
Patel N.R., Massachusetts Institute of Technology |
Antonijevic Z., Cytel, Inc. |
Statistics in Medicine | Year: 2013
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. © 2013 John Wiley & Sons, Ltd.
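The shape of the optimization described above can be illustrated with a toy sketch. All drugs, design options, costs, success probabilities, and NPVs below are hypothetical, and brute-force enumeration stands in for the paper's integer program; this shows only the structure (maximize expected NPV subject to a budget), not the authors' actual model.

```python
from itertools import product

# Hypothetical phase 3 candidates: each drug has design options given as
# (cost in $M, probability of success, NPV if successful in $M).
# All figures are illustrative, not taken from the paper.
designs = {
    "drug_A": [(0, 0.0, 0), (40, 0.55, 300), (60, 0.65, 300)],  # skip / small n / large n
    "drug_B": [(0, 0.0, 0), (50, 0.50, 450), (80, 0.62, 450)],
    "drug_C": [(0, 0.0, 0), (30, 0.40, 200), (45, 0.48, 200)],
}
BUDGET = 150  # total phase 3 budget in $M

def enpv(choice):
    """Expected NPV of a portfolio: sum of PoS * NPV - cost over chosen designs."""
    return sum(pos * npv - cost for cost, pos, npv in choice)

def total_cost(choice):
    return sum(cost for cost, _, _ in choice)

# Brute-force enumeration over one design choice per drug; with one binary
# variable per (drug, design) pair this is exactly the IP's feasible set,
# enumerable here only because the example is tiny.
best = max(
    (c for c in product(*designs.values()) if total_cost(c) <= BUDGET),
    key=enpv,
)
```

Note how sample size enters the value directly: the larger design raises PoS but also cost, so under the budget the optimum here mixes a small trial for one drug with a large trial for another.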
Gao P., The Medicines Company |
Liu L., Cytel, Inc. |
Mehta C., Cytel, Inc. |
Mehta C., Harvard University
Biometrical Journal | Year: 2013
A method of testing for noninferiority followed by testing for superiority in an adaptive group sequential design is presented. The method permits a data-dependent increase in sample size without any inflation of type-1 error. Closed-form expressions for computing conditional power and the sample size required to achieve any desired conditional power are derived. A new statistical method for performing inference on the primary efficacy parameter is derived. The method is used to obtain the p-value, median-unbiased point estimate and confidence interval for the efficacy parameter. For normal endpoints with known variance, the coverage of the confidence interval is exact. In other settings, the coverage is exact for large samples. An illustrative example is provided in which the methods of testing and estimation are applied to an actual clinical trial of acute bacterial skin and skin-structure infection. The operating characteristics of the trial are obtained by simulation and demonstrate that the type-1 error is preserved, the point estimate is median unbiased, and the confidence interval provides exact coverage up to Monte Carlo accuracy. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
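The conditional-power quantity mentioned above can be sketched for the simplest case. This is the textbook formula for a one-sided test on a normal endpoint with known variance, not necessarily the closed form derived in the paper; `theta` is an assumed standardized drift for the remaining data.

```python
from math import sqrt, erf

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z1, n1, n, theta, z_crit=1.96):
    """Conditional power of a one-sided z-test, normal data with known variance.

    z1    : observed interim z-statistic based on the first n1 subjects
    n1, n : interim and planned final sample sizes
    theta : assumed standardized effect (delta / sigma) for the remaining data
    """
    t = n1 / n                    # information fraction at the interim look
    drift = theta * sqrt(n - n1)  # mean of the incremental z-statistic
    return phi((sqrt(t) * z1 - z_crit) / sqrt(1.0 - t) + drift)
```

Solving this expression for `n` given a target conditional power is the basis of the data-dependent sample size increase the abstract describes.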
Quinlan J., Cytel, Inc. |
Gaydos B., Eli Lilly and Company |
Maca J., Novartis |
Clinical Trials | Year: 2010
Background This review discusses barriers to implementing adaptive designs in a pharmaceutical R&D environment and provides recommendations on how to overcome challenges. A summary of findings from a survey conducted through PhRMA's working group on adaptive designs is followed by a report based on our experience as statistical and clinical consultants to project teams charged with establishing the clinical development strategy for investigational compounds and interested in applying innovative approaches. Findings and recommendations Adaptive designs require additional work in that clinical trial simulations are needed to develop the design. Some project teams, due to time and resource constraints, are unable to invest the additional effort required to conduct the necessary scenario analyses of options through simulation. We recommend formally integrating the planning time for scenario analyses and incentivizing optimal designs (e.g., designs offering the highest information value per resource unit invested). Regardless of the trial design ultimately chosen, quantitatively comparing alternative trial design options through simulation will enable earlier and better decision making in the context of the overall clinical development plan. Adhering to 'Good Adaptive Practices' will be key to achieving this goal. Outlook Implementing adaptive designs efficiently requires top-down and bottom-up support and the willingness to invest in integrated process and information technology infrastructures. Success is conditional on the willingness of the R&D environment to embrace the implementation of adaptive designs as a Change Management Initiative in the spirit of the Critical Path Initiative of the Food and Drug Administration. © 2010 The Author(s).
Antonijevic Z., Cytel, Inc.
Optimization of Pharmaceutical R&D Programs and Portfolios: Design and Investment Strategy | Year: 2015
Very little has been published on the optimization of pharmaceutical portfolios. Moreover, most of the published literature comes from the commercial side, where probability of technical success (PoS) is treated as fixed, and not as a consequence of development strategy or design. This book places a strong focus on the impact of study design on PoS and, ultimately, on the value of the portfolio. Design options discussed in different chapters include dose-selection strategies, adaptive design, and enrichment. Development strategies discussed include indication sequencing, the optimal number of programs, and optimal decision criteria. The book includes chapters written by authors with very broad backgrounds, including financial, clinical, statistical, decision sciences, commercial, and regulatory. Many authors have long held executive positions and have been involved with decision making at a product or portfolio level. As such, this book is expected to attract a very broad audience, including decision makers in pharmaceutical R&D, commercial, and financial departments. The intended audience also includes portfolio planners and managers, statisticians, decision scientists, and clinicians. Early chapters describe approaches to portfolio optimization from big Pharma and venture capital standpoints, with a stronger focus on finances and processes. Later chapters present selected statistical and decision-analysis methods for optimizing drug development programs and portfolios. Some methodological chapters are technical; however, with a few exceptions they require only a relatively basic knowledge of statistics from the reader. © Springer International Publishing Switzerland 2015.
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase II | Award Amount: 750.00K | Year: 2012
The Phase II SBIR contract proposal addresses a problem of fundamental importance for the design and analysis of Phase 3 randomized clinical trials. Over the past decade the design of such trials has been complicated by the desire to ask more than one question, within a single trial, concerning the efficacy and safety of a new therapeutic agent. For example, whereas the large phase III trials conducted in the 80s and 90s were typically two-arm trials involving a single endpoint, it is now common to include two or more dose groups of the experimental therapy along with an active comparator, and possibly placebo as well. Moreover, owing to the complex etiology of many chronic conditions, it is customary to attempt to make multiple claims of efficacy across several endpoints. As a result, contemporary clinical trials often involve multiple hierarchical objectives with logical relationships among them, such that testing for one objective is conditional on the positive or negative outcome of the test of another objective. It is a regulatory requirement that the analysis of data from such trials, where multiple claims of efficacy may be made in the product label, must be handled by multiple comparison procedures that guarantee strong control of the familywise error rate (FWER). Commercial-grade, validated software to perform sample size calculations or generate multiplicity-adjusted inferences is, however, limited. The overall goal of this project is to develop a professional, robust, and commercially viable software package for handling multiplicity at both the design and analysis phases of a clinical trial. The software will address the following three major sources of multiplicity: hypothesis testing of multiple treatment arms versus a common control, hypothesis testing with respect to multiple endpoints, and group sequential testing of the same hypothesis repeatedly over information time.
The software so developed will be fully integrated into Cytel's East software system.
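As one concrete example of the procedure class such software implements, here is a minimal Holm step-down test, a standard method with strong FWER control over any number of hypotheses; it is illustrative only and not taken from East.

```python
def holm(pvalues, alpha=0.05):
    """Holm step-down procedure: returns the set of indices of rejected
    hypotheses, with strong familywise error rate control at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    rejected = set()
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):  # threshold relaxes as hypotheses fall
            rejected.add(i)
        else:
            break  # stop at the first non-rejection (step-down rule)
    return rejected
```

With p-values (0.01, 0.04, 0.03) at alpha = 0.05, only the smallest survives: 0.01 ≤ 0.05/3, but the next, 0.03, exceeds 0.05/2, so testing stops.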
Agency: Department of Health and Human Services | Branch: National Institutes of Health | Program: SBIR | Phase: Phase I | Award Amount: 99.56K | Year: 2015
DESCRIPTION (provided by applicant): The overarching goal of the proposed research is to develop practical modeling tools, including exact regression procedures, for small or sparse samples of correlated categorical data. Such outcomes are common in biomedical research, especially in areas such as genetics, ophthalmology, and teratology. One can encounter correlated categorical data wherever multiple outcomes are measured on an individual over time, or on several different individuals who share common genetic or environmental exposures. A large body of methods has been developed for analyzing correlated categorical outcomes, which conventionally rely on large-sample distributional assumptions (e.g., approximate normality) to justify their inferences. When faced with a small or sparse sample of categorical data, investigators have few viable analytic options, and none that allow for exact inferences with regard to estimation. Our proposed work will fill this gap, building on critical recent developments in both appropriate models and computational technology. During Phase I of this project we will accomplish this by developing an analogue to conditional logistic regression for correlated categorical data, constructing an efficient network graphical algorithm for rapid computation of the exact distribution, and investigating the feasibility of incorporating these procedures into a SAS PROC. We plan to expand this work in Phase II by incorporating our new tools as a module in the LogXact software package, extending the exact regression procedure to accommodate Poisson and polytomous regression for correlated data, and significantly improving the computational efficiency of these new tools through efficient Monte Carlo sampling and parallel processing. We will also create a module for a SAS PROC, making these methods as widely available as possible to researchers and analysts. PUBLIC HEALTH RELEVANCE: Many biomedical and public health studies make observations that are correlated or related, e.g., when individuals are measured repeatedly over time or when subjects are sampled from the same family or group. When such samples are small, conventional statistical methods that account for this correlation may be inaccurate. This project will develop new software tools to help investigators more accurately analyze data from studies that involve small samples of correlated data.
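The simplest correlated-categorical setting is paired binary data, where exact small-sample inference is already possible with an elementary calculation. The sketch below is the standard exact (binomial) McNemar test, shown only to illustrate what exact inference for correlated categorical data means; it is not the project's conditional regression method.

```python
from math import comb

def exact_mcnemar_p(b, c):
    """Exact two-sided McNemar test for paired binary outcomes.

    b, c : counts of the two kinds of discordant pairs (e.g. improved under
           treatment only vs. under control only). Under H0 each discordant
           pair is equally likely to fall either way, so b ~ Binomial(b+c, 0.5);
           the p-value is exact, with no large-sample approximation.
    """
    n = b + c
    tail = min(b, c)
    # two-sided p-value: twice the smaller binomial tail, clamped at 1
    p = sum(comb(n, k) for k in range(0, tail + 1)) * 0.5 ** n * 2
    return min(p, 1.0)
```

For example, 1 vs. 8 discordant pairs gives an exact p-value of 20/512 ≈ 0.039, whereas the usual chi-square version of the test would rely on an approximation that is dubious at this sample size.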
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 100.00K | Year: 2010
The goal of this project is to develop prototype software for some basic parametric and nonparametric multiple comparison procedures, allowing the analysis of clinical trial data by multiple comparison procedures that guarantee strong control of the familywise error rate (FWER). Regulators at the FDA have specifically identified the statistical handling of multiple endpoints in clinical trials as an integral component of the Critical Path Initiative, which is intended to speed the process from the discovery of new molecular entities to the delivery of safe and efficacious medical compounds to patients.
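A minimal nonparametric FWER-controlling procedure of the kind described here is the Westfall-Young max-T permutation adjustment across endpoints, sketched below under an exchangeability assumption; the function names and data layout are illustrative, not taken from the planned software.

```python
import random
from math import sqrt

def tstat(x, y):
    """Two-sample t-statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = sqrt(((nx - 1) * sx + (ny - 1) * sy) / (nx + ny - 2))
    return (mx - my) / (sp * sqrt(1 / nx + 1 / ny))

def maxt_adjusted(treat, ctrl, n_perm=2000, seed=7):
    """Westfall-Young max-T permutation adjustment across endpoints.

    treat, ctrl : per-subject tuples of endpoint values (subjects are
                  permuted whole, preserving between-endpoint correlation).
    Returns one FWER-adjusted p-value per endpoint: the proportion of label
    permutations whose maximum |t| over endpoints reaches that endpoint's
    observed |t| (controls the FWER under exchangeability of subjects).
    """
    rng = random.Random(seed)
    m = len(treat[0])
    pooled = list(treat) + list(ctrl)
    n_t = len(treat)
    obs = [abs(tstat([s[j] for s in treat], [s[j] for s in ctrl]))
           for j in range(m)]
    exceed = [0] * m
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign treatment labels
        pt, pc = pooled[:n_t], pooled[n_t:]
        max_t = max(abs(tstat([s[j] for s in pt], [s[j] for s in pc]))
                    for j in range(m))
        for j in range(m):
            if max_t >= obs[j]:
                exceed[j] += 1
    return [e / n_perm for e in exceed]
```

Because whole subjects are permuted, the adjustment automatically respects the correlation between endpoints, which is what makes it less conservative than Bonferroni.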
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 112.08K | Year: 2010
DESCRIPTION (provided by applicant): The overall goal of our research is to develop and extend powerful exact statistical tools for testing genetic association, and to incorporate these methods into two existing, widely used software packages (Cytel Studio, SAS) that will serve the needs of data analysts in pharmaceuticals, genetic epidemiology and public health, and other fields which require a greater understanding of the genetic determinants of complex disease. The demand for these analytic tools is rising dramatically, as rapid progress in genotyping technology is making it easier and less costly to measure sampled subjects for ever larger numbers of genetic markers. Genetic association represents an observed correlation between an investigative genetic marker and some physical trait, and can be assessed using either traditional case-control or family-based study designs. In either case, there are compelling applications of permutation or exact statistical approaches that are computationally challenging, yet are simply unavailable in currently used software or are implemented in a manner that requires excessive memory or computation. The computational innovations developed for this project will fill this gap, significantly improving the efficiency and power of existing tools used for genetic association under both family-based and case-control designs. During Phase I, we will build a prototype computer program that includes (i) exact family-based tests for both biallelic and multiallelic markers, and (ii) a permutation procedure that simultaneously tests genetic association assuming various modes of inheritance (i.e., recessive, dominant, additive, or codominant). We will also investigate the feasibility of incorporating these procedures into a SAS PROC, complementing and extending currently implemented SAS JMP Genomics procedures for testing genetic association. 
As a part of Phase II, we will integrate our Phase I tools into Cytel's StatXact system and into the SAS JMP Genomics system as an external procedure. We will additionally (i) extend the exact family-based procedures to accommodate haplotype data, (ii) develop and implement algorithms for permutation approaches to large-scale screening experiments, (iii) incorporate exact versions of basic genetic epidemiologic procedures, and (iv) incorporate efficient Monte Carlo sampling tools to extend the usefulness of the exact procedures to larger data sets. PUBLIC HEALTH RELEVANCE: Rapid progress in genotyping technology is making it easier and less costly to identify increasingly large numbers of genetic markers from sampled humans. These markers can be used to identify new genes potentially associated with many complex diseases. This project will provide genetics researchers with more accurate and efficient statistical tools for analyzing data from these studies.
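A simple member of the permutation-test family described above is a Monte Carlo permutation version of the additive (allele-count) trend test for a case-control biallelic marker. The sketch is illustrative only; the project's exact algorithms enumerate or sample the null distribution far more efficiently than this naive loop.

```python
import random

def perm_trend_test(genotypes, is_case, n_perm=5000, seed=3):
    """Monte Carlo permutation p-value for case-control association at a
    biallelic marker under additive (allele-count) scoring.

    genotypes : minor-allele counts (0, 1, or 2), one per subject
    is_case   : parallel 0/1 case indicators
    The statistic is the total minor-allele count among cases; permuting the
    case labels samples its exact null distribution, so the p-value is exact
    up to Monte Carlo error, with no large-sample approximation.
    """
    rng = random.Random(seed)
    n_cases = sum(is_case)
    expected = n_cases * sum(genotypes) / len(genotypes)  # permutation mean
    obs = sum(g for g, y in zip(genotypes, is_case) if y)
    labels = list(is_case)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        stat = sum(g for g, y in zip(genotypes, labels) if y)
        if abs(stat - expected) >= abs(obs - expected):  # two-sided
            hits += 1
    return hits / n_perm
```

Conditioning on the observed genotypes and permuting only the case labels is what makes the reference distribution exact, which matters precisely in the small or sparse samples the proposal targets.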
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 184.52K | Year: 2010
DESCRIPTION (provided by applicant): Two recent scientific developments, one in biostatistics and one in pharmacogenomics, are likely to have a major impact on the design and monitoring of phase III and seamless phase II/III oncology trials, greatly improving their chances of success. Advances in human genomic studies have shown that many common mutations have prognostic and predictive value for identifying patients who are likely to benefit from a molecularly targeted agent. At the same time there has been a surge of interest within the biostatistics research community in the design of adaptive clinical trials. An adaptive trial is one in which early data obtained from the trial itself can be used to modify the future course of the trial, without undermining its integrity or statistical validity (Gallo et al., 2006a, 2006b). Adaptive designs play a role in both early and late stages of clinical drug development. Our interest, however, is in late-stage confirmatory trials (late phase II and phase III), where the goal is to improve the chances for regulatory approval of a new medical compound. The overall failure rate of compounds even at this late stage is 45%, and for oncology trials the failure rate is almost 60% (Kola and Landis, 2004). It is worth noting that by this time significant proportions of the costs of discovering and developing a drug have been incurred. Among the many causes of this attrition, a major one is choosing the wrong population for the test drug. It is becoming increasingly apparent that treatment effects can differ greatly between different genomic patient subsets. We wish to promote a new type of design for confirmatory trials in oncology, in which we use the fact that predictive markers can identify patients who are sensitive to distinct therapeutic agents, such that patients with a positive marker might benefit differentially from the targeted therapy compared to patients with a negative marker.
Predictive markers provide the opportunity to conduct so-called population enrichment designs (Temple, 2005). Genomic technologies such as microarrays and single nucleotide polymorphism genotyping may be used to identify the marker status of patients during the screening phase of a trial. If a marker is considered predictive for the test drug, one could in principle restrict enrollment to the subset of patients carrying the favorable genotype, thereby enriching the study population and increasing the chance of a successful trial. Our goal is to develop statistical software that will support these types of designs. The software will utilize the concept of two-stage adaptive designs, in which the results at the first stage may be used to enrich the population at the second stage if the biomarker is predictive. PUBLIC HEALTH RELEVANCE: This project will support the development of new research methods and software for oncology trials in which predictive biomarkers may be used to enrich the population at the second stage of the design based on results observed at the first stage.
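The two-stage enrichment idea can be sketched as a small simulation. Everything below is hypothetical: the futility rule, sample sizes, and inverse-normal combination are one plausible instantiation, and a confirmatory design would additionally wrap the final test in a closed test over the population hypotheses, which is omitted here.

```python
import random
from math import sqrt

def simulate_enrichment_trial(delta_pos, delta_neg, prevalence=0.5,
                              n_stage=100, futility_z=0.0, seed=11):
    """One simulated two-stage enrichment trial (treatment vs. control,
    unit-variance normal endpoint, known variance; all settings illustrative).

    Stage 1 enrolls the full population. If the interim z-statistic in the
    marker-negative subgroup falls below `futility_z`, stage 2 enrolls
    marker-positive patients only (enrichment). Stage-wise z-statistics are
    combined with the inverse-normal rule, which keeps the prefixed weights
    regardless of the adaptation.
    """
    rng = random.Random(seed)

    def stage_z(n, delta):
        # z-statistic for treatment minus control with n subjects per arm
        diff = (sum(rng.gauss(delta, 1) for _ in range(n)) -
                sum(rng.gauss(0, 1) for _ in range(n))) / n
        return diff / sqrt(2 / n)

    n_pos = int(n_stage * prevalence)
    n_neg = n_stage - n_pos
    z_pos1 = stage_z(n_pos, delta_pos)
    z_neg1 = stage_z(n_neg, delta_neg)
    enrich = z_neg1 < futility_z
    if enrich:
        z1 = z_pos1                       # carry forward the selected subgroup
        z2 = stage_z(n_stage, delta_pos)  # all stage-2 patients marker-positive
    else:
        # stratified full-population z-statistics for each stage
        z1 = (sqrt(n_pos) * z_pos1 + sqrt(n_neg) * z_neg1) / sqrt(n_stage)
        z2 = (sqrt(n_pos) * stage_z(n_pos, delta_pos) +
              sqrt(n_neg) * stage_z(n_neg, delta_neg)) / sqrt(n_stage)
    z_final = (z1 + z2) / sqrt(2)         # inverse-normal combination
    return enrich, z_final
```

Running many replicates under different (delta_pos, delta_neg) scenarios shows the trade-off the design exploits: when the marker-negative effect is weak, enrichment concentrates the stage-2 sample where the drug works.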
Agency: Department of Health and Human Services | Branch: | Program: SBIR | Phase: Phase II | Award Amount: 996.64K | Year: 2012
DESCRIPTION (provided by applicant): Categorical outcomes are ubiquitous in biomedical research, and generalized linear models (GLMs) represent the most widely applied methodology for testing associations between categorical variables and fixed investigative factors. Logistic regression in particular is the most frequently used model for binary data and has widespread applicability in the health, behavioral, and physical sciences. King and Ryan (2002) stated that there were 2,770 research papers published in 1999 in which logistic regression was in the title of the paper or among the keywords. King and Zeng (2001) referred to the use of the maximum likelihood method in logistic regression as 'the nearly universal method'. Maximum likelihood estimates (MLEs) for logistic regression are based on large-sample approximations that are reliable for problems with large samples and when the proportion of responses is not too small or too large. However, it has been known for several years that MLEs are not reliable for small, sparse, or unbalanced datasets, with the latter referring to a considerable difference between the number of zeros and ones of the response variable. Recent research has suggested a flexible means of correcting MLE bias and improving performance using a penalized likelihood-based approach, but the underlying theory has not been fully applied and implemented for practical use. In this project, we will extend the work begun during Phase I with logistic regression by (1) implementing the bias correction approach for a variety of other GLMs, including Poisson, multinomial, negative binomial, and censored survival data; (2) providing new diagnostic procedures that identify potential problems with near separability and MLE bias; (3) implementing and evaluating an exact target estimation approach for bias correction in logistic regression; (4) improving the computational algorithms required for Aims 1-3; and (5) additionally implementing the procedures in a SAS PROC.
Given the ubiquity of categorical regression in public health and biomedical research, the final product of this effort will provide a critical intermediate alternative when analyzing data for which standard large-sample methods are unreliable and small-sample exact methods are infeasible. PUBLIC HEALTH RELEVANCE: Generalized linear models (such as logistic regression) for categorical data have widespread applicability in the health sciences. Maximum likelihood, the nearly universal method for computing estimates in generalized linear regression models, has been known to have high bias and mean square error for small, sparse, or unbalanced datasets. We propose to develop commercial software that incorporates several new methods with lower bias and mean square error in logistic regression, other generalized linear models, and Cox proportional hazards models.
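The best-known penalized-likelihood bias correction for logistic regression is Firth's method (possibly, though not necessarily, the approach the summary refers to). As a hedged illustration, here is a one-covariate, no-intercept version; real implementations handle full design matrices, but even this minimal case shows the key property that the estimate stays finite under complete separation, where the ordinary MLE diverges.

```python
from math import exp

def firth_logistic_1d(x, y, n_iter=50):
    """Firth-penalized logistic regression with a single covariate and no
    intercept: maximizes l(b) + 0.5*log I(b), where I is the Fisher
    information. With one parameter the penalized score reduces to
        U*(b) = sum_i (y_i - p_i + h_i*(0.5 - p_i)) * x_i,
    h_i being the leverage x_i^2 * p_i * (1 - p_i) / I(b).
    (Illustrative sketch; not a full multi-covariate implementation.)
    """
    b = 0.0
    for _ in range(n_iter):
        p = [1 / (1 + exp(-b * xi)) for xi in x]
        info = sum(xi * xi * pi * (1 - pi) for xi, pi in zip(x, p))
        h = [xi * xi * pi * (1 - pi) / info for xi, pi in zip(x, p)]
        score = sum((yi - pi + hi * (0.5 - pi)) * xi
                    for xi, yi, pi, hi in zip(x, y, p, h))
        b += score / info  # Fisher-scoring step on the penalized score
    return b
```

On the completely separated data x = (-2, -1, 1, 2), y = (0, 0, 1, 1), the ordinary MLE runs off to infinity, while the Firth estimate settles at a finite, moderate value.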