Broglio S.P., University of Illinois at Urbana-Champaign |
Schnebel B., University of Oklahoma |
Sosnoff J.J., University of Illinois at Urbana-Champaign |
Shin S., University of Illinois at Urbana-Champaign |
And 3 more authors.
Medicine and Science in Sports and Exercise | Year: 2010
Introduction: Sport concussion represents the majority of brain injuries occurring in the United States, with 1.6-3.8 million cases annually. Understanding the biomechanical properties of this injury will support the development of better diagnostics and preventative techniques. Methods: We monitored all football-related head impacts in 78 high school athletes (mean age = 16.7 yr) from 2005 to 2008 to better understand the biomechanical characteristics of concussive impacts. Results: Using the Head Impact Telemetry System, a total of 54,247 impacts were recorded, and 13 concussive episodes were captured for analysis. A classification and regression tree analysis of impacts indicated that rotational acceleration (>5582.3 rad·s⁻²), linear acceleration (>96.1g), and impact location (front, top, and back) yielded the highest predictive value for concussion. Conclusions: These threshold values are nearly identical to those reported at the collegiate and professional levels. If the Head Impact Telemetry System were implemented for medical use, sideline personnel could expect to diagnose one of every five athletes with a concussion when an impact exceeds these tolerance levels. Why all athletes did not sustain a concussion when the impacts generated variables in excess of our threshold criteria is not entirely clear, although individual differences between participants may play a role. A similar threshold for concussion in adolescent athletes compared with their collegiate and professional counterparts suggests an equal concussion risk at all levels of play. Copyright © 2010 by the American College of Sports Medicine.
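The classification rule described in the abstract can be sketched as a simple threshold check. This is an illustrative reconstruction, not the authors' actual CART model: only the three cutoff values and the risk locations come from the abstract, and the function name is hypothetical.

```python
# Thresholds reported in the abstract (rotational acceleration in rad/s^2,
# linear acceleration in g, impact location on the helmet).
ROT_THRESHOLD = 5582.3
LIN_THRESHOLD = 96.1
RISK_LOCATIONS = {"front", "top", "back"}

def flag_high_risk(rotational, linear, location):
    """Return True when an impact exceeds all three reported cutoffs.

    Illustrative only: the published analysis fit a classification and
    regression tree; this collapses its reported splits into one rule.
    """
    return (rotational > ROT_THRESHOLD
            and linear > LIN_THRESHOLD
            and location in RISK_LOCATIONS)
```

Note that, per the abstract, even impacts exceeding all three cutoffs produced a diagnosed concussion in only about one of five athletes, so such a rule is a screening aid rather than a diagnostic.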
Wang J.C., National Institute of Statistical Sciences |
Opsomer J.D., Colorado State University
Biometrika | Year: 2011
Survey estimators of population quantities such as distribution functions and quantiles contain nondifferentiable functions of estimated quantities. The theoretical properties of such estimators are substantially more complicated to derive than those of differentiable estimators. In this article, we provide a unified framework for obtaining the asymptotic design-based properties of two common types of nondifferentiable estimators. Estimators of the first type have an explicit expression, while those of the second are defined only as the solution to estimating equations. We propose both analytical and replication-based design-consistent variance estimators for both cases, based on kernel regression. The practical behaviour of the variance estimators is demonstrated in a simulation experiment. © 2011 Biometrika Trust.
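The nondifferentiability the abstract refers to comes from indicator functions such as 1{y ≤ t} inside a distribution-function estimator. A common kernel-based remedy, in the spirit of (but not identical to) the paper's approach, replaces the indicator with a smooth integrated-kernel step. The function below is a minimal sketch under assumed choices (Epanechnikov kernel, user-supplied bandwidth); all names are illustrative.

```python
import numpy as np

def smoothed_cdf(y, weights, t, bandwidth):
    """Kernel-smoothed, design-weighted estimate of F(t) = P(Y <= t).

    Replaces the nondifferentiable indicator 1{y <= t} with the
    integrated Epanechnikov kernel, a smooth step rising from 0 to 1
    over [t - bandwidth, t + bandwidth]. Kernel and bandwidth choices
    are illustrative assumptions, not the paper's specification.
    """
    u = (t - np.asarray(y, dtype=float)) / bandwidth
    g = np.clip(u, -1.0, 1.0)
    # Integral of the Epanechnikov kernel 0.75*(1 - v^2) from -1 to g.
    smooth_step = 0.5 + 0.75 * (g - g**3 / 3.0)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * smooth_step) / np.sum(w))
```

Because the smoothed estimator is differentiable in t, standard linearization arguments apply, which is what makes kernel-based analytical variance estimators of the kind the paper proposes tractable.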
Zhou Y., East China Normal University |
Sedransk N., National Institute of Statistical Sciences
Statistics in Medicine | Year: 2013
Cardiac safety assessment in drug development concerns ventricular repolarization abnormalities of the cardiac cycle (represented by the electrocardiogram (ECG) T-wave), which are widely believed to be linked with torsades de pointes, a potentially life-threatening arrhythmia. The most often used biomarker for such abnormalities is prolongation of the QT interval, which relies on correct annotation of the onset of the QRS complex and the offset of the T-wave on the ECG. A new biomarker generated from a functional data-based methodology is developed to quantify T-wave morphology changes from placebo to drug interventions. Comparisons of T-wave-form characteristics through a multivariate linear mixed model are made to assess the cardiovascular risk of drugs. Data from a study with 60 subjects participating in a two-period placebo-controlled crossover trial, with repeat ECGs obtained at baseline and 12 time points after intervention, are used to illustrate this methodology; different types of waveform changes were characterized, motivating further investigation. © 2012 John Wiley & Sons, Ltd.
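For context on the conventional biomarker the paper seeks to improve upon: the QT interval (QRS onset to T-wave offset) is normally corrected for heart rate, most commonly with Bazett's formula QTc = QT / √RR. The sketch below shows that standard correction; it is background, not part of the paper's functional-data methodology.

```python
import math

def qtc_bazett(qt_ms, rr_s):
    """Heart-rate-corrected QT interval via Bazett's formula, in ms.

    qt_ms: measured QT interval (QRS onset to T-wave offset), in milliseconds.
    rr_s:  RR interval (one cardiac cycle), in seconds.
    This single-number summary is the conventional biomarker; the paper's
    contribution is a richer functional characterization of T-wave shape.
    """
    return qt_ms / math.sqrt(rr_s)
```

At a heart rate of 60 bpm (RR = 1 s) the correction is the identity; at faster rates it inflates the measured QT.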
Jung S.-H., Duke University |
Young S.S., National Institute of Statistical Sciences
Journal of Biopharmaceutical Statistics | Year: 2012
Microarray is a technology for screening a large number of genes to discover those differentially expressed between clinical subtypes or different conditions of human disease. Gene discovery using microarray data requires adjustment for the large-scale multiplicity of candidate genes. The family-wise error rate (FWER) has been widely chosen as a global type I error rate adjusting for this multiplicity. Typically in microarray data, the expression levels of different genes are correlated, because of coexpressing genes and the common experimental conditions shared by the genes on each array. To accurately control the FWER, the statistical testing procedure should appropriately reflect the dependency among the genes. Permutation methods have been used for accurate control of the FWER in analyzing microarray data. It is important to calculate the required sample size at the design stage of a new (confirmatory) microarray study. Because of the high dimensionality and complexity of the correlation structure in microarray data, however, there have been no sample size calculation methods accurately reflecting the true correlation structure of real microarray data. We propose sample size and power calculation methods that are useful when pilot data are available to design a confirmatory experiment. If no pilot data are available, we recommend a two-stage sample size recalculation based on our proposed method, using the first-stage data as pilot data. The calculated sample sizes are shown through simulations to accurately maintain the power. A real-data example illustrates the proposed method. © 2012 Copyright Taylor and Francis Group, LLC.
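The permutation approach to FWER control mentioned in the abstract works because permuting sample labels jointly across all genes preserves the between-gene correlation under the null. A minimal sketch of a Westfall-Young-style maxT procedure is below; the two-sample mean-difference statistic and all names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def maxT_pvalues(x, y, n_perm=1000, rng=None):
    """MaxT permutation-adjusted p-values for many genes at once.

    x, y: (n_samples, n_genes) expression matrices for two groups.
    Sample labels are permuted jointly across genes, so the permutation
    null distribution inherits the genes' dependence structure, which is
    what gives accurate FWER control under correlation.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    data = np.vstack([x, y])
    n_x = x.shape[0]
    obs = np.abs(x.mean(axis=0) - y.mean(axis=0))  # observed statistics
    exceed = np.zeros_like(obs)
    for _ in range(n_perm):
        perm = rng.permutation(data.shape[0])
        px, py = data[perm[:n_x]], data[perm[n_x:]]
        # Max over genes of the permuted statistic (single-step maxT).
        max_stat = np.max(np.abs(px.mean(axis=0) - py.mean(axis=0)))
        exceed += (max_stat >= obs)
    return (exceed + 1) / (n_perm + 1)
```

The same permutation machinery, run on pilot data at candidate sample sizes, is the natural computational companion to the power calculations the paper proposes.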
Cox L.H., National Institute of Statistical Sciences
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2014
For decades, national statistical offices (NSOs) have used complementary cell suppression for disclosure limitation of tabular data, magnitude data in particular. Indications of its continued use abound, even though suppression thwarts statistical analysis by expert and novice alike. We introduce methods for creating alternative tables that the NSO can release unsuppressed, while ensuring, within statistical certainty, that their analysis is conformal with analysis of the original. © Springer International Publishing Switzerland 2014.
Agency: NSF | Branch: Standard Grant | Program: | Phase: INFRASTRUCTURE PROGRAM | Award Amount: 9.90K | Year: 2016
This award supports the participation of junior researchers in a workshop held at the Joint Statistical Meetings in Chicago, Illinois in July and August 2016. The workshop focuses on effective technical writing for new researchers in the statistical sciences, who seek to publish their research or to present their research plans in the form of grant proposals for federal funding. Researchers, especially new researchers, often have difficulty disseminating their research results not because of the quality of the research but rather because of inappropriate choices of publication venues for the particular research and/or because of poor presentation of technical material to the chosen audience. The National Institute of Statistical Sciences and the American Statistical Association will manage the Workshop.
This workshop will open with tutorial sessions on the organization of material for a technical article or grant application, on technical writing techniques, and on the specific missions and audiences of key journals in the statistical sciences. Following the introductory tutorials, each participating new researcher will work individually with an experienced journal editor as mentor to address these issues in a draft of the new researcher's work. Revisions following this guidance will be critiqued by the mentor to assure that the new researcher's implementation of writing techniques has been successful before the article or grant proposal is submitted for review. More information about this activity can be found at http://www.amstat.org/meetings/wwjr/index.cfm?fuseaction=main. This award is jointly supported by the Infrastructure and Statistics programs in the NSF Division of Mathematical Sciences.
Agency: NSF | Branch: Continuing grant | Program: | Phase: | Award Amount: 750.21K | Year: 2010
This is a proposal to create a postdoctoral research program at the National Institute of Statistical Sciences (NISS) focused on problems and issues directly supportive of the mission of the Division of Science Resources Statistics (SRS).
The proposal takes a novel approach: two postdoctoral researchers housed at NISS will improve the statistical methodologies used in SRS and other Federal statistical surveys while simultaneously developing as survey practitioners.
Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 748.58K | Year: 2012
The NSF-Census Research Network (NCRN) consists of eight nodes, each composed of researchers conducting innovative, high-impact, cross-disciplinary investigations of theory, methodology, and computational tools of interest and significance to the Census Bureau, the federal statistical system, and the broader research community. These nodes are located at Carnegie Mellon University, the University of Colorado at Boulder/University of Tennessee, Cornell University, Duke University/National Institute of Statistical Sciences (NISS), the University of Michigan, the University of Missouri, the University of Nebraska, and Northwestern University. The NSF/Census Research Network Coordination Office (NCRN-CO), operated jointly by NISS and Cornell University, catalyzes and fosters communication and collaboration among the nodes of the NCRN, and focuses the network's relationships with multiple external stakeholder communities. These communities include federal statistical agencies, researchers in academia and the private sector, professional associations, international bodies, and the press and public. NCRN-CO activities include organizing semi-annual meetings of NCRN researchers; facilitating information sharing within the NCRN using advanced communication and collaboration tools; supporting NCRN engagement with staff from the Census Bureau; assisting and coordinating educational activities at NCRN nodes; conducting workshops and seminars that disseminate the research products of the NCRN and engender feedback and engagement from non-NCRN researchers; and maintaining a website that provides access to research findings and their implications for a broad and diverse set of communities.
The NCRN-CO is a value-added activity, helping the NCRN nodes to leverage each other's research achievements and leading to scientific and societal impact that exceeds the sum of the individual impacts. The NCRN-CO is the public face of the NCRN, through channels such as the website www.ncrn.info, quarterly newsletters, an NCRN-wide annual report, research briefs, news releases, special sessions at professional society meetings, and special issues of leading journals. The NCRN-CO aids the nodes in advancing the careers of postdoctoral researchers and graduate students, as an investment in the future of official statistics in the United States.
Agency: NSF | Branch: Standard Grant | Program: | Phase: | Award Amount: 14.50K | Year: 2010
The Mathematical and Physical Sciences (MPS) community generates much of the data in science. Major experiments and facilities are now generating petabytes of data per year that must be distributed globally for analysis. Projects already in development will generate much larger volumes at faster rates, approaching an exabyte per week, with exaflop computing capacity needed to perform the analysis. In addition to this growing number of prodigious data generators, virtually all of science is becoming data-intensive, with increasing size and/or complexity, even at the level of PIs in individual labs. This trend extends beyond MPS disciplines to: biological data; financial, commercial, and retail data; audio and visual data; data assimilation and data fusion; and data in the humanities and social sciences. Virtually all disciplines need potentially radical new mathematical and statistical ways to handle future data sets if scientific advances are to be realized. The proposed MPS Workshop on Data-Enabled Science will provide a high-level assessment of the needs of the MPS communities, including anticipated data generation, capability and inability to mine the data for science, strengths and weaknesses of current efforts, and work on developing new algorithms and mathematical approaches. The workshop will also provide an assessment of the resource requirements for addressing these needs over the next five years.
Agency: NSF | Branch: Standard Grant | Program: | Phase: SCIENCE RESOURCES STATISTICS | Award Amount: 224.90K | Year: 2013
This postdoctoral research program at the National Institute of Statistical Sciences (NISS) comprises performing innovative research and creating usable products that not only support the mission of the National Center for Science and Engineering Statistics (NCSES) but also address the needs of the nation.
From a technical perspective, the research is framed by two statistical themes and two key societal issues. The first statistical theme is characterization of the uncertainties arising from novel methods of integrating and analyzing data, addressing a critical need in an era of declining data collection budgets and decreasing participation in government surveys. The second theme centers on conducting experiments with real data, simulating phenomena of interest in order to evaluate, and in some cases enable, methodological advances. Key issues regarding surveys, such as how many times and by what means to contact nonrespondents, are too complex to be treated analytically and infeasible to address with real-world experiments; therefore simulation is effectively the only laboratory available. Specific research topics include data integration, prediction, model-to-design feedback, data-quality-aware statistical disclosure limitation, and cost/data-quality tradeoffs. All Federal statistical agencies stand to benefit from the research, which will produce innovative theory, novel methodology, and algorithmic implementations, together with datasets, analyses, software, and insights that inform future data collections.
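The kind of simulation experiment described above (e.g., how many times to contact nonrespondents) can be sketched as a toy cost/response-rate tradeoff. Everything here is a hypothetical illustration: the response model, parameters, and function names are assumptions, not part of the project's actual methodology.

```python
import random

def simulate_followup(n_sample, p_respond, cost_per_attempt, max_attempts, seed=0):
    """Toy simulation of a nonresponse follow-up policy.

    Each sampled unit is contacted up to max_attempts times; each attempt
    succeeds independently with probability p_respond and incurs a cost.
    Returns (response_rate, total_cost). The independence assumption and
    all parameter values are illustrative; the point of such simulators is
    precisely that realistic versions resist analytic treatment.
    """
    rng = random.Random(seed)
    responded, cost = 0, 0.0
    for _ in range(n_sample):
        for _ in range(max_attempts):
            cost += cost_per_attempt
            if rng.random() < p_respond:
                responded += 1
                break
    return responded / n_sample, cost
```

Sweeping max_attempts in such a simulator traces out the diminishing returns of additional contacts against their cumulative cost, the shape of the tradeoff that simulation-based design studies examine.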
Broader Impacts: The first societal issue is labor economics as it relates to the science, engineering, and health workforce (SEHW). Understanding phenomena such as salaries, fringe benefits, mobility, and training/job relationships is crucial to maintaining the United States' competitiveness in a global economy, as well as to facing the challenges of difficult economic times. The second issue is aging because, other than the role of students born outside of the US, aging is the most important phenomenon taking place in the SEHW (and, arguably, in society as a whole). For both issues, understanding the dramatically increasing richness of observed behaviors within the SEHW is a profound opportunity. New kinds of family structures, shared positions, and an array of forms of post-first-retirement employment are among the central social trends of our times. This project will generate new insights that inform both future research and sound policy.