
Lloyd A.H., Middlebury College | Duffy P.A., Neptune and Company | Mann D.H., University of Alaska Fairbanks
Canadian Journal of Forest Research | Year: 2013

Ongoing warming at high latitudes is expected to lead to large changes in the structure and function of boreal forests. Our objective in this research is to determine the climatic controls over the growth of white spruce (Picea glauca (Moench) Voss) at the warmest, driest margins of its range in interior Alaska. We then use those relationships to determine the climate variables most likely to limit future growth. We collected tree cores from white spruce trees growing on steep, south-facing river bluffs at five sites in interior Alaska, and analyzed the relationship between ring widths and climate using boosted regression trees. Precipitation and temperature of the previous growing season are important controls over growth at most sites: trees grow best in the coolest, wettest years. We identify clear thresholds in growth response to a number of variables, including both temperature and precipitation variables. General circulation model (GCM) projections of future climate in this region suggest that optimum climatic conditions for white spruce growth will become increasingly rare in the future. This is likely to cause short-term declines in productivity and, over the longer term, a contraction of white spruce to the cooler, moister parts of its range in Alaska.
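
The ring-width/climate analysis described above can be sketched with a boosted-regression-tree fit. The example below is a minimal illustration using scikit-learn's gradient boosting, not the authors' actual code; the input file and the climate column names are hypothetical placeholders.

```python
# Minimal sketch of fitting boosted regression trees to ring-width indices
# and climate predictors. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

data = pd.read_csv("site_chronology.csv")            # hypothetical input file
predictors = ["prev_summer_temp", "prev_summer_precip",
              "spring_temp", "spring_precip"]
X, y = data[predictors], data["ring_width"]

# Shallow trees, a small learning rate, and subsampling are common settings
# for boosted regression trees on short climate-growth series.
model = GradientBoostingRegressor(n_estimators=1000, learning_rate=0.01,
                                  max_depth=3, subsample=0.75, random_state=0)
model.fit(X, y)

# Relative influence of each climate variable on growth.
importance = permutation_importance(model, X, y, n_repeats=30, random_state=0)
for name, score in zip(predictors, importance.importances_mean):
    print(f"{name}: {score:.3f}")
```

Partial dependence of predicted growth on each climate variable is the usual way a fit like this would be inspected for the threshold-type responses reported above.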


Ringham B.M., University of Colorado at Denver | Kreidler S.M., Neptune and Company | Muller K.E., University of Florida | Glueck D.H., University of Colorado at Denver
Statistics in Medicine | Year: 2016

Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling–Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
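
A rough sketch of the adjustment strategy, not the paper's exact Hotelling–Lawley formulas: carry the anticipated missingness into a non-central F power calculation by replacing the planned sample size with the expected number of complete cases when computing degrees of freedom and the noncentrality parameter. The function below and its effect-size parameterization (Cohen's f²) are illustrative assumptions.

```python
# Illustrative power approximation that adjusts degrees of freedom and the
# noncentrality parameter by the expected number of complete cases.
# A generic F-test sketch, not the Catellier-Muller adjustment itself.
from scipy.stats import f, ncf

def approx_power(n_planned, p_missing, n_predictors, f2, alpha=0.05):
    """Approximate power using the expected number of complete cases.

    f2 is an assumed Cohen's f-squared effect size for the tested predictors.
    """
    n_complete = n_planned * (1.0 - p_missing)        # expected complete cases
    df1 = n_predictors
    df2 = n_complete - n_predictors - 1               # adjusted denominator df
    ncp = f2 * n_complete                             # adjusted noncentrality
    f_crit = f.ppf(1.0 - alpha, df1, df2)             # null reference quantile
    return 1.0 - ncf.cdf(f_crit, df1, df2, ncp)       # P(reject | alternative)

# Example: 60 planned participants, 20% anticipated missingness.
print(approx_power(n_planned=60, p_missing=0.2, n_predictors=3, f2=0.15))
```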


Johnson J.L., University of North Carolina at Chapel Hill | Kreidler S.M., Neptune and Company | Catellier D.J., RTI International | Murray D.M., U.S. National Institutes of Health | And 2 more authors.
Statistics in Medicine | Year: 2015

We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster-specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. © 2015 John Wiley & Sons, Ltd.
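
For readers unfamiliar with the two strategies being compared, the simulated example below sketches them side by side in Python/statsmodels. Note that the Kenward-Roger degrees-of-freedom correction recommended above is not implemented in statsmodels; it is available in, for example, R's pbkrtest/lmerTest packages and SAS PROC MIXED. The data-generating values (effect size, variances, cluster counts) are arbitrary.

```python
# Sketch of one-stage (mixed model on individual outcomes) versus two-stage
# (linear model on cluster means) analyses of a cluster-randomized design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for arm in (0, 1):
    for c in range(8):                                 # 8 clusters per arm
        n_i = int(rng.integers(5, 30))                 # unbalanced cluster sizes
        u = rng.normal(0.0, 1.0)                       # cluster random effect
        y = 0.4 * arm + u + rng.normal(0.0, 2.0, n_i)  # individual outcomes
        rows += [{"arm": arm, "cluster": f"{arm}-{c}", "y": v} for v in y]
df = pd.DataFrame(rows)

# One-stage: general linear mixed model with a random intercept per cluster.
one_stage = smf.mixedlm("y ~ arm", df, groups=df["cluster"]).fit()
print(one_stage.summary())

# Two-stage: cluster means as outcomes in a general linear univariate model
# (shown unweighted; the best-performing variant above weights each mean by
# the inverse of its estimated theoretical variance, constrained positive).
means = df.groupby(["cluster", "arm"], as_index=False)["y"].mean()
two_stage = smf.ols("y ~ arm", data=means).fit()
print(two_stage.summary())
```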


Brenner D., Neptune and Company
Journal of the Air and Waste Management Association | Year: 2010

Most of the published empirical data on indoor air concentrations resulting from vapor intrusion of contaminants from underlying groundwater are for residential structures. The National Aeronautics and Space Administration (NASA) Research Park site, located in Moffett Field, CA, and comprising 213 acres, is being planned for redevelopment as a collaborative research and educational campus with associated facilities. Groundwater contaminated with hydrocarbon and halogenated hydrocarbon volatile organic compounds (VOCs) is the primary environmental medium of concern at the site. Over a 15-month period, approximately 1000 indoor, outdoor ambient, and outdoor ambient background samples were collected from four buildings designated as historical landmarks using Summa canisters and analyzed by U.S. Environmental Protection Agency Method TO-15 in selective ion mode. Both 24-hr and sequential 8-hr samples were collected. Comparison of daily sampling results relative to daily background results indicates that the measured trichloroethylene (TCE) concentrations were primarily due to the subsurface vapor intrusion pathway, although there is likely some contribution due to infiltration of TCE from the outdoor ambient background concentrations. Analysis of the cis-1,2-dichloroethylene concentrations relative to TCE concentrations, with respect to both indoor air concentrations and the background air, supports this hypothesis; however, it also indicates that the relative contributions of the vapor intrusion and infiltration pathways vary with each building. Indoor TCE concentrations were also compared with indoor benzene and background benzene concentrations. These data show a significant correlation between background benzene concentrations and the concentration of benzene in the indoor air, indicating that benzene was present in the indoor air primarily through infiltration of outdoor air into the indoor space. By comparison, measured TCE indoor air concentrations showed a significantly different relationship to background concentrations. Analysis of the results shows that indoor air samples can be used to definitively identify the source of the TCE present in the indoor air space of large industrial buildings. Copyright 2010 Air & Waste Management Association.
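
The benzene and TCE comparisons described above amount to asking how tightly paired indoor and outdoor background measurements track each other: a strong indoor-background relationship points to infiltration of outdoor air, while a weak one points to a subsurface source. A minimal sketch of that check, with a hypothetical input file and column names:

```python
# Correlate indoor concentrations with same-day outdoor ambient background
# concentrations for each compound. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

samples = pd.read_csv("daily_summa_results.csv")      # hypothetical input

for compound in ("benzene", "tce"):
    indoor = samples[f"indoor_{compound}"]
    background = samples[f"background_{compound}"]
    r, p = pearsonr(indoor, background)
    print(f"{compound}: r = {r:.2f}, p = {p:.3g}")
```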


Loescher H., Science Team | Loescher H., University of Colorado at Boulder | Ayres E., Science Team | Ayres E., University of Colorado at Boulder | And 5 more authors.
PLoS ONE | Year: 2014

Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but they are rarely followed, and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and subtropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10x more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12 dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. © 2014 Loescher et al.
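
Two of the calculations implied above can be sketched briefly: the number of samples needed to estimate a plot mean to within a relative-accuracy target, and an empirical semivariogram for judging how far apart samples must be to be treated as independent. The normal-approximation sample-size formula and the array names are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of (1) a normal-approximation sample-size calculation for estimating
# the mean to within a relative error, and (2) an empirical semivariogram.
# Inputs are assumed to be coordinate arrays x, y (meters) and values z
# (e.g., soil temperature or soil water content).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import norm

def samples_needed(values, rel_error=0.10, confidence=0.90):
    """Samples required to estimate the mean within rel_error (normal approx.)."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    cv = np.std(values, ddof=1) / np.mean(values)
    return int(np.ceil((z * cv / rel_error) ** 2))

def empirical_semivariogram(x, y, z, bin_edges):
    """Mean semivariance of z by separation-distance bin."""
    coords = np.column_stack([x, y])
    z = np.asarray(z, dtype=float)
    dists = pdist(coords)                                    # pairwise distances
    gammas = 0.5 * pdist(z[:, None], metric="sqeuclidean")   # 0.5*(z_i - z_j)^2
    bins = np.digitize(dists, bin_edges)
    return [gammas[bins == b].mean() if np.any(bins == b) else np.nan
            for b in range(1, len(bin_edges))]
```

The lag distance at which the semivariogram levels off (the range) is the spacing beyond which samples can be treated as approximately independent.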
