PubMed | U.S. Food and Drug Administration and FDA CDER
Type: | Journal: Pharmacoepidemiology and drug safety | Year: 2016
Our study sought to systematically evaluate protocol-specified study methodology in prospective pregnancy exposure registries, including pre-specified pregnancy outcomes, power calculations for sample size, and comparator group selection. U.S. pregnancy exposure registries designed to evaluate the safety of drugs or biologics were identified from www.clinicaltrials.gov, the FDA's Office of Women's Health website, and the FDA's list of postmarketing studies. Protocols or similar documentation were obtained. We identified 35 U.S. registries for drug or biologic use during pregnancy. All registries assessed risk for overall major congenital malformations. Pre-specified target enrollment was stated for 18 (51%) registries and ranged from 150 to 500 exposed pregnancies (median 300). Thirty-two (91%) registries identified at least one comparison group, but only nine (26%) planned to use an internal comparator. The most common external comparator group (n=24, 69%) was the Metropolitan Atlanta Congenital Defects Program (MACDP). No registries were designed to have sufficient power to assess specific malformations, despite the plausibility that most teratogens cause specific defects. Only half of the registries included a power analysis. Despite their common use, external comparators, including MACDP, have important limitations. In the absence of randomized controlled trial data in pregnant women, pregnancy registries remain an important tool as part of a comprehensive pregnancy surveillance program; however, pregnancy registries alone may not be sufficient to obtain adequate data regarding risks of specific malformations. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
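The power gap the abstract describes can be made concrete with a back-of-the-envelope calculation. The sketch below uses the standard two-proportion sample-size formula; the baseline defect rate, alpha, and power are illustrative assumptions, not figures from the study.

```python
# Hedged sketch: how far typical registry enrollment (150-500 exposed
# pregnancies) falls short of the sample size needed to detect a
# doubling of a specific malformation. Baseline risk of 1 per 1,000
# births is an illustrative assumption.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to detect p1 vs. p2 with a two-sided
    two-proportion z-test at the given alpha and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Doubling of a defect with baseline risk 1 per 1,000 births:
print(n_per_group(0.001, 0.002))  # roughly 23,500 per group
```

Even under these generous assumptions, the required enrollment exceeds the largest observed target (500 exposed pregnancies) by well over an order of magnitude, consistent with the authors' conclusion about power for specific malformations.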
PubMed | FDA CDER, D I Ivanovsky Institute Of Virology and RAS Shemyakin Ovchinnikov Institute of Bioorganic Chemistry
Type: | Journal: Virus research | Year: 2015
We believe that the monitoring of pleiotropic effects of the hemagglutinin (HA) mutations found in H5 escape mutants is essential for accurate prediction of mutants with pandemic potential. In the present study, we assessed multiple characteristics of antibody-selected HA mutations. We examined the pH optimum of fusion, HA heat inactivation, affinity to sialyl receptors, and in vitro and in vivo replication kinetics of various influenza H5 escape mutants. Several amino acid substitutions, including T108I, K152E, R162G, and K218N, reduced the stability of HA as determined by heat inactivation, whereas the S128L and T215A substitutions were associated with significant increases in HA thermostability compared to the respective wild-type viruses. HA mutations at positions 108, 113, 115, 121, 123, 128, 162, and 190 and substitutions at positions 123, 199, and 215 affected the replicative ability of H5 escape mutants in vitro and in vivo, respectively. The T108I substitution lowered the pH optimum of fusion and HA thermostability while increasing viral replicative ability. Taken together, these results demonstrate co-variation between antigenic specificity and distinct HA phenotypic properties.
Huang L.,FDA CDER |
Midthune D.,U.S. National Institutes of Health |
Krapcho M.,Management Information Services Inc. |
Zou Z.,Management Information Services Inc. |
And 2 more authors.
Biometrical Journal | Year: 2013
Cancer registries collect cancer incidence data that can be used to calculate incidence rates in a population and track changes over time. For incidence rates to be accurate, it is critical that diagnosed cases be reported in a timely manner. Registries typically allow a fixed amount of time (e.g. two years) for diagnosed cases to be reported before releasing the initial case counts for a particular diagnosis year. Inevitably, however, additional cases are reported after the initial counts are released; these extra cases are included in subsequent releases that become more complete over time, while incidence rates based on earlier releases will underestimate the true rates. Statistical methods have been developed to estimate the distribution of reporting delay (the amount of time until a diagnosed case is reported) and to correct incidence rates for underestimation due to reporting delay. Since the observed reporting delays must be less than the length of time the registry has been collecting data, most methods estimate a truncated delay distribution. These methods can be applied to a group of registries that began collecting data in the same diagnosis year. In this paper, we extend the methods to two groups of registries that began collecting data in two different diagnosis years (so that the delay distributions are truncated at different times). We apply the proposed method to data from the National Cancer Institute's Surveillance Epidemiology and End Results (SEER) program, a consortium of U.S. cancer registries that includes nine registries with data collection beginning in 1981 and four registries with data collection beginning in 1992. We use the method to obtain delay-adjusted incidence rates for melanoma, liver cancer, and Hodgkin lymphoma. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
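The basic delay-adjustment idea described above can be sketched as follows. This is a simplification of the paper's model: it estimates a completeness factor per delay from a pooled "reporting triangle" and inflates truncated counts, with no modeling of the two truncation times the paper actually addresses. All numbers are invented for illustration.

```python
# Hedged sketch: delay-adjusting an incidence count using an
# empirically estimated reporting-delay distribution. Data and
# variable names are illustrative, not SEER's.

def delay_factors(counts_by_delay):
    """Estimate P(reported within delay d) from cases pooled over
    diagnosis years that are fully observed up to the maximum delay.

    counts_by_delay: dict mapping delay (years) -> cases first
    reported at that delay.
    """
    total = sum(counts_by_delay.values())
    cum, factors = 0, {}
    for d in sorted(counts_by_delay):
        cum += counts_by_delay[d]
        factors[d] = cum / total  # cumulative completeness by delay d
    return factors

def adjust_count(observed, delay, factors):
    """Inflate a count whose reporting is truncated at `delay` years
    to an estimate of the eventual complete count."""
    return observed / factors[delay]

# Pooled triangle: 80% of cases reported by year 0, 95% by year 1,
# 100% by year 2.
factors = delay_factors({0: 800, 1: 150, 2: 50})
print(adjust_count(900, 1, factors))  # 900 / 0.95 ≈ 947.4 eventual cases
```

Dividing by completeness is the core of delay adjustment; the paper's contribution is estimating the delay distribution itself when it is truncated at different points for different registry groups.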
PubMed | FDA CDER, Us Consumer Product Safety Commission, EPA NHEERL EPHD CIB, ILS and U.S. National Institutes of Health
Type: | Journal: Journal of applied toxicology : JAT | Year: 2017
The replacement of animal use in testing for regulatory classification of skin sensitizers is a priority for US federal agencies that use data from such testing. Machine learning models that classify substances as sensitizers or non-sensitizers without using animal data have been developed and evaluated. Because some regulatory agencies require that sensitizers be further classified into potency categories, we developed statistical models to predict skin sensitization potency for murine local lymph node assay (LLNA) and human outcomes. Input variables for our models included six physicochemical properties and data from three non-animal test methods: direct peptide reactivity assay; human cell line activation test; and KeratinoSens assay. Models were built to predict three potency categories using four machine learning approaches and were validated using external test sets and leave-one-out cross-validation. A one-tiered strategy modeled all three categories of response together while a two-tiered strategy modeled sensitizer/non-sensitizer responses and then classified the sensitizers as strong or weak sensitizers. The two-tiered model using the support vector machine with all assay and physicochemical data inputs provided the best performance, yielding accuracy of 88% for prediction of LLNA outcomes (120 substances) and 81% for prediction of human test outcomes (87 substances). The best one-tiered model predicted LLNA outcomes with 78% accuracy and human outcomes with 75% accuracy. By comparison, the LLNA predicts human potency categories with 69% accuracy (60 of 87 substances correctly categorized). These results suggest that computational models using non-animal methods may provide valuable information for assessing skin sensitization potency. Copyright 2017 John Wiley & Sons, Ltd.
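The two-tiered strategy described above can be sketched with scikit-learn SVMs. The data here are synthetic and the predictors are placeholders; the actual study used DPRA, h-CLAT, and KeratinoSens readouts plus six physicochemical properties, with validation by external test sets and leave-one-out cross-validation.

```python
# Hedged sketch of a two-tiered potency classifier: tier 1 separates
# sensitizers from non-sensitizers; tier 2 splits sensitizers into
# weak vs. strong. Synthetic data only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))     # 9 illustrative predictors per substance
y = rng.integers(0, 3, size=120)  # 0 = non-sensitizer, 1 = weak, 2 = strong

# Tier 1: sensitizer vs. non-sensitizer
tier1 = SVC(kernel="rbf").fit(X, (y > 0).astype(int))

# Tier 2: among sensitizers only, weak vs. strong
sens = y > 0
tier2 = SVC(kernel="rbf").fit(X[sens], (y[sens] == 2).astype(int))

def predict_potency(x):
    """Return 0 (non-sensitizer), 1 (weak), or 2 (strong)."""
    x = np.atleast_2d(x)
    if tier1.predict(x)[0] == 0:
        return 0
    return 2 if tier2.predict(x)[0] == 1 else 1

print(predict_potency(X[0]))
```

Splitting the problem this way lets each SVM solve an easier binary task, which is one plausible reason the two-tiered strategy outperformed the one-tiered model in the study.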
PubMed | FDA CDER and RAS D. I. Ivanovsky Institute of Virology
Type: Journal Article | Journal: Archives of virology | Year: 2016
We assessed the pH optimum of fusion, HA thermostability, and in vitro replication kinetics of previously obtained influenza H9 escape mutants. The N198S mutation significantly increased the optimum pH of fusion. Four HA changes, S133N, T189A, N198D, and L226Q, were associated with a significant increase in HA thermostability compared to the wild-type virus. HA amino acid changes at positions 116, 133, 135, 157, 162, and 193 significantly decreased the replicative ability of H9 escape mutants in vitro. Monitoring of pleiotropic effects of the HA mutations found in H9 escape mutants is essential for accurate prediction of all possible outcomes of immune selection of H9 influenza A viruses.
PubMed | EPA OCSPP OPP HED, Us Consumer Product Safety Commission, EPA NHEERL EPHD CIB, ILS and 2 more.
Type: Journal Article | Journal: Journal of applied toxicology : JAT | Year: 2016
One of the top priorities of the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM) is the identification and evaluation of non-animal alternatives for skin sensitization testing. Although skin sensitization is a complex process, the key biological events of the process have been well characterized in an adverse outcome pathway (AOP) proposed by the Organisation for Economic Co-operation and Development (OECD). Accordingly, ICCVAM is working to develop integrated decision strategies based on the AOP using in vitro, in chemico and in silico information. Data were compiled for 120 substances tested in the murine local lymph node assay (LLNA), direct peptide reactivity assay (DPRA), human cell line activation test (h-CLAT) and KeratinoSens assay. Data for six physicochemical properties, which may affect skin penetration, were also collected, and skin sensitization read-across predictions were performed using OECD QSAR Toolbox. All data were combined into a variety of potential integrated decision strategies to predict LLNA outcomes using a training set of 94 substances and an external test set of 26 substances. Fifty-four models were built using multiple combinations of machine learning approaches and predictor variables. The seven models with the highest accuracy (89-96% for the test set and 96-99% for the training set) for predicting LLNA outcomes used a support vector machine (SVM) approach with different combinations of predictor variables. The performance statistics of the SVM models were higher than any of the non-animal tests alone and higher than simple test battery approaches using these methods. These data suggest that computational approaches are promising tools to effectively integrate data sources to identify potential skin sensitizers without animal testing. Published 2016. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
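The "simple test battery approaches" that the integrated models outperformed can be sketched as a majority vote over the three non-animal assays. The binary assay calls below are illustrative; real DPRA, h-CLAT, and KeratinoSens outcomes come from assay-specific positivity criteria.

```python
# Hedged sketch of a 2-of-3 test battery baseline: call a substance a
# sensitizer if at least two of the three non-animal assays (DPRA,
# h-CLAT, KeratinoSens) are positive. Inputs are illustrative booleans.

def battery_call(dpra_pos, hclat_pos, keratinosens_pos):
    """Majority vote over three binary assay outcomes."""
    return (dpra_pos + hclat_pos + keratinosens_pos) >= 2

print(battery_call(True, True, False))   # True: 2 of 3 positive
print(battery_call(True, False, False))  # False: only 1 of 3 positive
```

A fixed vote like this ignores assay reliability and physicochemical context, which is one reason the machine learning models that weight all inputs achieved higher accuracy.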
Seaton M.,FDA CDER
Toxicologic Pathology | Year: 2014
Results of early nonclinical "General Toxicology" studies are used to set a safe starting dose for first-in-human (FIH) clinical trials. In FIH trials, the research subjects are typically healthy volunteers who have little to gain but much to lose if a trial goes wrong. With that in mind, good laboratory practice regulations require that a standardized system be used for the conduct, documentation, and retention of study-related materials. The study pathologist, working within that system of standards, documentation, and oversight, is key to the identification of potential target organs of toxicity and other toxicologically significant findings. © 2013 by The Author(s).
Smith F.,FDA CDER |
Hammerstrom T.,FDA CDER |
Soon G.,FDA CDER |
Zhou S.,FDA CDER |
And 4 more authors.
Drug Information Journal | Year: 2011
The first meta-analysis of pivotal HIV study results, drawing on data from 18 clinical trials across seven NDA submissions (8,046 patients) for the treatment of HIV infection, was conducted to determine whether a simplified version of the TLOVR algorithm could be used for accelerated approval at week 24 and possibly for traditional approval at week 48. Standardized data sets for HIV RNA viral load, demography, CD4 counts, and discontinuation were created. These raw data sets used CDISC Study Data Tabulation Model naming conventions for most of the variables. Results obtained using the TLOVR algorithm, which utilized data from every visit to consider the pattern of HIV responses, were compared to a less complicated snapshot approach that used only HIV RNA data at the visit of interest. Given the similarity in results between the TLOVR and snapshot approaches, it appears that correcting for intermittent spikes in HIV RNA levels with the TLOVR algorithm does not have much regulatory impact. © 2011 Drug Information Association, Inc.
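The contrast between the two response definitions can be sketched as below. The threshold and visit handling are simplified: the real TLOVR algorithm also handles confirmation visits, intermittent missing data, and discontinuation rules that this illustration omits.

```python
# Hedged sketch contrasting the snapshot and (simplified) TLOVR-style
# response definitions. Threshold and visit schedule are illustrative.

SUPPRESSED = 50  # illustrative HIV RNA threshold, copies/mL

def snapshot_responder(rna_by_week, week):
    """Responder iff RNA at the visit of interest is below threshold."""
    rna = rna_by_week.get(week)
    return rna is not None and rna < SUPPRESSED

def tlovr_like_responder(rna_by_week, week):
    """Simplified TLOVR-style rule: responder iff suppression is
    achieved and never lost (no rebound) through `week`."""
    seen_suppression = False
    for w in sorted(w for w in rna_by_week if w <= week):
        if rna_by_week[w] < SUPPRESSED:
            seen_suppression = True
        elif seen_suppression:
            return False  # rebound after initial suppression
    return seen_suppression and snapshot_responder(rna_by_week, week)

# An intermittent spike at week 12 fails the TLOVR-style rule but not
# the snapshot rule at week 24:
history = {4: 400, 8: 30, 12: 80, 16: 25, 24: 20}
print(snapshot_responder(history, 24))    # True
print(tlovr_like_responder(history, 24))  # False
```

Patients with transient blips are the only ones the two rules classify differently, which is why the meta-analysis found the simpler snapshot approach gave similar results.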
Johnson D.H.,U.S. National Institutes of Health |
Via L.E.,U.S. National Institutes of Health |
Kim P.,U.S. National Institutes of Health |
Laddy D.,Aeras |
And 3 more authors.
Nuclear Medicine and Biology | Year: 2014
Nearly 20 years after the World Health Organization declared tuberculosis (TB) a global public health emergency, TB still remains a major global threat, with 8.6 million new cases and 1.3 million deaths annually. Mycobacterium tuberculosis adapts to a quiescent physiological state and is notable for complex interaction with the host, producing poorly understood disease states ranging from latent infection to fully active disease. Of the approximately 2.5 billion people latently infected with M. tuberculosis, many will develop reactivation disease (relapse) years after the initial infection. While progress has been made on some fronts, the alarming spread of multidrug-resistant, extensively drug-resistant, and more recently totally drug-resistant strains is of grave concern. New tools are urgently needed for rapidly diagnosing TB, monitoring TB treatments, and gaining unique insights into disease pathogenesis. Nuclear bioimaging is a powerful, noninvasive tool that can rapidly provide three-dimensional views of disease processes deep within the body and allow noninvasive longitudinal assessments of the same patient. In this review, we discuss the application of nuclear bioimaging to TB, including the current state of the field, considerations for radioprobe development, the study of TB drug pharmacokinetics in infected tissues, and areas of research and clinical need that could be addressed by nuclear bioimaging. These technologies are an emerging field of research, overcome several fundamental limitations of current tools, and will have a broad impact on both basic research and patient care. Beyond diagnosis and treatment monitoring, these technologies will also expedite bench-to-bedside translation of new therapeutics. Finally, since molecular imaging is readily available for humans, validated tracers will become valuable tools for clinical applications. © 2014.
A new path forward: the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) and National Toxicology Program's Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)
PubMed | FDA CDER, U.S. National Institutes of Health, U.S. Department of Agriculture and Us Consumer Product Safety Commission
Type: Journal Article | Journal: Journal of the American Association for Laboratory Animal Science : JAALAS | Year: 2015
In 2000, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) was established by Congress, with representatives from Federal regulatory and research agencies that require, use, generate, or disseminate toxicologic and safety testing information. For over 15 y, ICCVAM and the National Toxicology Program's Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM) have worked together to promote the development, validation, and regulatory acceptance of test methods that replace, reduce, or refine the use of animals in regulatory testing. In 2013, both NICEATM and ICCVAM underwent major changes to their operating paradigms to increase the speed and efficiency of regulatory approval and industry adoption of 3Rs testing methods within the United States and internationally. Accordingly, increased emphasis has been placed on international activities, primarily through interaction with the Organisation for Economic Co-operation and Development and participation in the International Cooperation on Alternative Test Methods. In addition, ICCVAM has committed to increasing public awareness of and transparency about federal agencies' 3Rs activities and to fostering interactions with stakeholders. Finally, although it continues to support ICCVAM, NICEATM's work now includes validation support for Tox21, a collaboration aimed at identifying in vitro methods and computational approaches for testing chemicals to better understand and predict hazards to humans and the environment. The combination of more efficient operating paradigms, increased international collaboration, improved communication and interaction with stakeholders, and active participation in Tox21 will likely substantially increase the number of 3Rs methods developed and used in the United States and internationally.