Biometrics Unit

Broadway, MN, United States


Frair J.L., SUNY College of Environmental Science and Forestry | Fieberg J., Biometrics Unit | Hebblewhite M., University of Montana | Cagnacci F., Research and Innovation Center | And 2 more authors.
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2010

Global positioning system (GPS) technologies collect unprecedented volumes of animal location data, providing ever greater insight into animal behaviour. Despite a certain degree of inherent imprecision and bias in GPS locations, little synthesis exists regarding the predominant causes of these errors, their implications for ecological analysis, or potential solutions. Terrestrial deployments report non-random data loss of 37 per cent or less and average location precision of 30 m or less, with canopy closure having the predominant effect and animal behaviour interacting with local habitat conditions to affect errors in unpredictable ways. Home-range estimates appear generally robust to contemporary levels of location imprecision and bias, whereas movement paths and inferences of habitat selection may readily become misleading. There is a critical need for greater understanding of the additive or compounding effects of location imprecision, fix-rate bias, and, in the case of resource selection, map error on ecological insights. Technological advances will help, but at present analysts have a suite of ad hoc statistical corrections and modelling approaches available - tools that vary greatly in analytical complexity and utility. The success of these solutions depends critically on understanding the error-inducing mechanisms, and the biggest gap in our current understanding involves species-specific behavioural effects on GPS performance. © 2010 The Royal Society.

Matthiopoulos J., University of St. Andrews | Hebblewhite M., University of Montana | Aarts G., Netherlands Institute for Sea Research | Aarts G., Institute for Marine Resources and Ecosystem Studies | Fieberg J., Biometrics Unit
Ecology | Year: 2011

Researchers employing resource selection functions (RSFs) and other related methods aim to detect correlates of space use and to mitigate detrimental environmental change. However, an empirical model fit to data from one place or time is unlikely to capture species responses under different conditions because organisms respond nonlinearly to changes in habitat availability. This phenomenon, known as a functional response in resource selection, has been debated extensively in the RSF literature but continues to be ignored by practitioners for lack of a practical treatment. We therefore extend the RSF approach to enable it to estimate generalized functional responses (GFRs) from spatial data. GFRs employ data from several sampling instances characterized by diverse profiles of habitat availability. By modeling the regression coefficients of the underlying RSF as functions of availability, GFRs can account for environmental change and thus predict population distributions in new environments. We formulate the approach as a mixed-effects model so that it is estimable by readily available statistical software. We illustrate its application using (1) simulation and (2) wolf home-range telemetry. Our results indicate that GFRs can offer considerable improvements in estimation speed and predictive ability over existing mixed-effects approaches. © 2011 by the Ecological Society of America.
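The core of the GFR idea - letting an RSF coefficient vary with habitat availability through an interaction term - can be sketched in a small simulation. Everything below is hypothetical (simulated data, and plain gradient ascent as a stand-in for the mixed-effects software the paper recommends):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated use-availability data from several sampling instances (e.g. home
# ranges) that differ in availability a of one habitat covariate.  The true
# selection coefficient weakens as availability rises: beta(a) = 2.0 - 1.5*a.
rows = []
for _ in range(40):                      # 40 hypothetical sampling instances
    a = random.uniform(0.1, 0.9)         # habitat availability in this instance
    beta_x = 2.0 - 1.5 * a               # availability-dependent selection
    for _ in range(40):
        x = random.gauss(0.0, 1.0)
        p = sigmoid(-0.5 + beta_x * x)
        y = 1 if random.random() < p else 0
        rows.append((y, x, x * a))       # the x*a interaction carries the GFR

# Fit eta = b0 + b1*x + b2*(x*a) by gradient ascent on the logistic
# log-likelihood.
b = [0.0, 0.0, 0.0]
for _ in range(800):
    g = [0.0, 0.0, 0.0]
    for y, x, xa in rows:
        p = sigmoid(b[0] + b[1] * x + b[2] * xa)
        for j, v in enumerate((1.0, x, xa)):
            g[j] += (y - p) * v
    b = [bj + 1.5 * gj / len(rows) for bj, gj in zip(b, g)]

print([round(bj, 2) for bj in b])  # b[1] should land near 2, b[2] near -1.5
```

In practice one would add instance-level random effects and fit with standard mixed-model software, as the abstract notes; the sketch only shows how an availability interaction recovers an availability-dependent coefficient.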

Fieberg J., Biometrics Unit | Matthiopoulos J., University of St. Andrews | Hebblewhite M., University of Montana | Boyce M.S., University of Alberta | Frair J.L., New York University
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2010

With the advent of new technologies, animal locations are being collected at ever finer spatiotemporal scales. We review analytical methods for dealing with correlated data in the context of resource selection, including post hoc variance inflation techniques, 'two-stage' approaches based on models fit to each individual, generalized estimating equations and hierarchical mixed-effects models. These methods are applicable to a wide range of correlated data problems, but can be difficult to apply and remain especially challenging for use-availability sampling designs because the correlation structure for combinations of used and available points is not likely to follow common parametric forms. We also review emerging approaches to studying habitat selection that use fine-scale temporal data to arrive at biologically based definitions of available habitat, while naturally accounting for autocorrelation by modelling animal movement between telemetry locations. Sophisticated analyses that explicitly model correlation rather than consider it a nuisance, such as mixed-effects and state-space models, offer potentially novel insights into the process of resource selection, but additional work is needed to make them more generally applicable to large datasets based on use-availability designs. Until then, variance inflation techniques and two-stage approaches should offer pragmatic and flexible approaches to modelling correlated data. © 2010 The Royal Society.
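Of the methods listed, the 'two-stage' approach is the simplest to sketch: fit a model to each animal separately, then treat the per-animal coefficients as the data for population-level inference. The example below is illustrative only (a simple linear response and simulated animals, not any analysis from the paper):

```python
import random
import statistics

random.seed(2)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Stage one: one regression per animal.
slopes = []
for animal in range(12):                 # 12 hypothetical individuals
    true_slope = random.gauss(1.5, 0.4)  # among-animal variation
    xs = [random.gauss(0, 1) for _ in range(80)]
    ys = [true_slope * x + random.gauss(0, 1) for x in xs]
    slopes.append(ols_slope(xs, ys))

# Stage two: the population mean and its standard error are computed across
# animals, so within-animal autocorrelation cannot fake a large sample size.
mean_slope = statistics.fmean(slopes)
se = statistics.stdev(slopes) / len(slopes) ** 0.5
print(round(mean_slope, 2), round(se, 2))
```

The appeal, as the abstract suggests, is pragmatism: each stage uses standard software, and the second-stage standard error honours the fact that observations within an animal are correlated.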

Rossi S., V Foundation | Ravetta V., V Foundation | Rosa L., V Foundation | Ghittoni G., V Foundation | And 7 more authors.
Hepatology | Year: 2011

In most patients with cirrhosis, successful percutaneous ablation or surgical resection of hepatocellular carcinoma (HCC) is followed by recurrence. Radiofrequency ablation (RFA) has proven effective for treating HCC nodules, but its repeatability in managing recurrences and the impact of this approach on survival have not been evaluated. To this end, we retrospectively analyzed a prospective series of 706 patients with cirrhosis (Child-Pugh class ≤B7) who underwent RFA for 859 HCC ≤35 mm in diameter (1-2 per patient). The results of RFA were classified as complete responses (CRs) or treatment failures. CRs were obtained in 849 nodules (98.8%) and 696 patients (98.5%). During follow-up (median, 29 months), 465 (66.8%) of the 696 patients with CRs experienced a first recurrence at an incidence rate of 41 per 100 person-years (local recurrence 6.2; nonlocal 35). Cumulative incidences of first recurrence at 3 and 5 years were 70.8% and 81.7%, respectively. RFA was repeated in 323 (69.4%) of the 465 patients with first recurrence, restoring disease-free status in 318 (98.4%) cases. Subsequently, RFA was repeated in 147 (65.9%) of the 223 patients who developed a second recurrence after CR of the first, restoring disease-free status in 145 (98.6%) cases. Overall, there were 877 episodes of recurrence (1-8 per patient); 577 (65.8%) of these underwent RFA that achieved CRs in 557 (96.5%) cases. No procedure-related deaths occurred in 1,921 RFA sessions. Estimated overall survival rates at 3 and 5 years were 67.0% and 40.1%, and the corresponding disease-free (after repeated RFAs) survival rates were 68.0% and 38.0%. Conclusion: RFA is safe and effective for managing HCC in patients with cirrhosis, and its high repeatability makes it particularly valuable for controlling intrahepatic recurrences. © 2010 American Association for the Study of Liver Diseases.

Giudice J.H., Biometrics Unit | Fieberg J.R., Biometrics Unit | Lenarz M.S., Grand Rapids
Journal of Wildlife Management | Year: 2012

Sightability models are binary logistic-regression models used to estimate and adjust for visibility bias in wildlife-population surveys. Like many models in wildlife and ecology, sightability models are typically developed from small observational datasets with many candidate predictors. Aggressive model-selection methods are often employed to choose a best model for prediction and effect estimation, despite evidence that such methods can lead to overfitting (i.e., selected models may describe random error or noise rather than true predictor-response curves) and poor predictive ability. We used moose (Alces alces) sightability data from northeastern Minnesota (2005-2007) as a case study to illustrate an alternative approach, which we refer to as degrees-of-freedom (df) spending: sample-size guidelines are used to determine an acceptable level of model complexity and then a pre-specified model is fit to the data and used for inference. For comparison, we also constructed sightability models using Akaike's Information Criterion (AIC) step-down procedures and model averaging (based on a small set of models developed using df-spending guidelines). We used bootstrap procedures to mimic the process of model fitting and prediction, and to compute an index of overfitting, expected predictive accuracy, and model-selection uncertainty. The index of overfitting increased 13% when the number of candidate predictors was increased from three to eight and a best model was selected using step-down procedures. Likewise, model-selection uncertainty increased when the number of candidate predictors increased. Model averaging (based on R = 30 models with 1-3 predictors) effectively shrunk regression coefficients toward zero and produced similar estimates of precision to our 3-df pre-specified model. As such, model averaging may help to guard against overfitting when too many predictors are considered (relative to available sample size). 
The set of candidate models will influence the extent to which coefficients are shrunk toward zero, which has implications for how one might apply model averaging to problems traditionally approached using variable-selection methods. We often recommend the df-spending approach in our consulting work because it is easy to implement and it naturally forces investigators to think carefully about their models and predictors. Nonetheless, similar concepts should apply whether one is fitting a single model or using multi-model inference. For example, model-building decisions should consider the effective sample size, and potential predictors should be screened (without looking at their relationship to the response) for missing data, narrow distributions, collinearity, potentially overly influential observations, and measurement errors (e.g., via logical error checks). © 2011 The Wildlife Society.
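The bookkeeping behind df-spending is simple enough to sketch. Assuming the common rule of thumb of roughly 15 events per candidate degree of freedom (the counts below are hypothetical, not the Minnesota moose data):

```python
def df_budget(events, nonevents, per_df=15):
    """Rough df-spending guideline for a binary outcome: the effective sample
    size is the smaller of the event and non-event counts, and one candidate
    degree of freedom is 'spent' per ~15 of those (a common rule of thumb)."""
    m = min(events, nonevents)
    return m // per_df

# Hypothetical sightability survey: 96 groups seen, 28 missed.  The limiting
# count is 28, so the budget is 28 // 15 = 1 df: pre-specify one predictor
# rather than letting step-down selection choose among eight.
print(df_budget(events=28, nonevents=96))   # -> 1
print(df_budget(events=60, nonevents=60))   # -> 4
```

The point of the guideline, in the abstract's terms, is to fix model complexity before looking at the data, so that overfitting and model-selection uncertainty are controlled by design rather than patched afterwards.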

Aarts G., IMARES Wageningen UR | Aarts G., Wageningen University | Fieberg J., Biometrics Unit | Fieberg J., University of Minnesota | And 2 more authors.
Journal of Animal Ecology | Year: 2013

If animals moved randomly in space, the use of different habitats would be proportional to their availability. Hence, deviations from proportionality between use and availability are considered the tell-tale sign of preference. This principle forms the basis for most habitat selection and species distribution models fitted to use-availability or count data (e.g. MaxEnt and Resource Selection Functions). Yet, once an essential habitat type is sufficiently abundant to meet an individual's needs, increased availability of this habitat type may lead to a decrease in the use/availability ratio. Accordingly, habitat selection functions may estimate negative coefficients when habitats are superabundant, incorrectly suggesting an apparent avoidance. Furthermore, not accounting for the effects of availability on habitat use may lead to poor predictions, particularly when applied to habitats that differ considerably from those for which data have been collected. Using simulations, we show that habitat use varies non-linearly with habitat availability, even when individuals follow simple movement rules to acquire food and avoid risk. The results show that the impact of availability strongly depends on the type of habitat (e.g. whether it is essential or substitutable) and how it interacts with the distribution and availability of other habitats. We demonstrate the utility of a variety of existing and new methods that enable the influence of habitat availability to be explicitly estimated. Models that allow for non-linear effects (using B-spline smoothers) and interactions between environmental covariates defining habitats and measures of their availability were best able to capture simulated patterns of habitat use across a range of environments. An appealing aspect of some of the methods we discuss is that the relative influence of availability is not defined a priori, but directly estimated by the model. 
This feature is likely to improve model prediction, hint at the mechanism of habitat selection, and may signpost habitats that are critical for the organism's fitness. © 2013 British Ecological Society.
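The non-proportionality argument can be illustrated with a toy model of an essential resource (ours, not the paper's simulation): an animal must meet a fixed demand, its intake rate scales with availability, so residence time in the habitat falls as availability rises.

```python
def time_in_essential(avail, demand=0.2):
    """Toy model: the animal must meet a fixed demand for an essential
    resource; intake rate while in the habitat scales with availability,
    so required residence time is demand/avail, capped at the full budget."""
    return min(1.0, demand / avail)

for a in (0.1, 0.2, 0.4, 0.8):
    use = time_in_essential(a)
    print(a, round(use, 2), round(use / a, 2))   # availability, use, use/avail
```

Once availability exceeds demand, use itself declines and the use/availability ratio drops below 1 - the apparent 'avoidance' of a superabundant essential habitat that the abstract warns a selection function would misreport as a negative coefficient.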

Fieberg J., University of Minnesota | Fieberg J., Biometrics Unit | Ditmer M., University of Minnesota
Methods in Ecology and Evolution | Year: 2012

1. New technologies have made it possible to simultaneously, and remotely, collect time series of animal location data along with indicators of individuals' physiological condition. These data, along with animal movement models that incorporate individual physiological and behavioural states, promise to offer new insights into determinants of animal behaviour. Care must be taken, however, when attempting to infer causal relationships from biotelemetry data. The possibility of unmeasured confounders, responsible for driving both physiological measurements and animal movement, must be considered. Further, response values (y_t) may be predictive of future covariate values (x_{t+s}, s ≥ 1). When this occurs, the covariate process is said to be endogenous with respect to the response variable, which has implications both for choosing statistical estimation targets and for estimators of these quantities. 2. We explore models that attempt to relate x_t = log(daily movement rate) to y_t = log(average daily heart rate) using data collected from a black bear (Ursus americanus) population in Minnesota. The regression parameter for x_t was 0.19 and statistically different from 0 (P < 0.001) when daily measurements were assumed to be independent, but residuals were highly autocorrelated. Assuming an autoregressive model (AR(1)) for the residuals, however, resulted in a negative slope estimate (-0.001) that was not statistically different from 0. 3. The sensitivity of regression parameters to the assumed error structure can be explained by exploring relationships between lagged and current values of x and y and between parameters in the independence and AR(1) models. We hypothesize that an unmeasured confounder may be responsible for the behaviour of the regression parameters. In addition, measurement error associated with daily movement rates may also play a role. 4. 
Similar issues often arise in epidemiological, biostatistical and econometric applications; directed acyclic graphs, representing causal pathways, are central to understanding potential problems (and their solutions) associated with modelling time-dependent covariates. In addition, we suggest that incorporating lagged responses and lagged predictors as covariates may prove useful for diagnosing when, and explaining why, some conclusions are sensitive to model assumptions. © 2012 The Authors. Methods in Ecology and Evolution © 2012 British Ecological Society.
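Why the independence assumption flatters such regressions can be sketched with a crude variance-inflation check: two unrelated but strongly autocorrelated daily series carry far fewer independent observations than their length suggests. The series below are simulated, not the bear data, and the Bartlett-style effective-sample-size formula is a textbook approximation, not the paper's method:

```python
import random

random.seed(4)

def ar1(rho, sd, n):
    """Simulate an AR(1) series with persistence rho."""
    out = [random.gauss(0, sd)]
    for _ in range(n - 1):
        out.append(rho * out[-1] + random.gauss(0, sd))
    return out

n = 300
x, y = ar1(0.9, 0.3, n), ar1(0.9, 0.3, n)   # e.g. log movement, log heart rate

def acf1(v):
    """Sample lag-1 autocorrelation."""
    m = sum(v) / len(v)
    num = sum((v[t] - m) * (v[t - 1] - m) for t in range(1, len(v)))
    den = sum((vt - m) ** 2 for vt in v)
    return num / den

rx, ry = acf1(x), acf1(y)
# Bartlett-style effective sample size for the correlation of two AR(1) series:
n_eff = n * (1 - rx * ry) / (1 + rx * ry)
print(round(rx, 2), round(ry, 2), round(n_eff))   # n_eff is far below n = 300
```

With persistence near 0.9, the 300 daily records behave like a few dozen independent ones, so naive standard errors (and P-values such as the abstract's P < 0.001 under independence) can be badly overstated.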

Aarts G., Institute for Marine Resources and Ecosystem Studies | Aarts G., Netherlands Institute for Sea Research | Fieberg J., Biometrics Unit | Matthiopoulos J., University of St. Andrews
Methods in Ecology and Evolution | Year: 2012

The need to understand the processes shaping population distributions has resulted in a vast increase in the diversity of spatial wildlife data, leading to the development of many novel analytical techniques that are fit for purpose. One may aggregate location data into spatial units (e.g. grid cells) and model the resulting counts or presence-absences as a function of environmental covariates. Alternatively, the point data may be modelled directly, by combining the individual observations with a set of random or regular points reflecting habitat availability, a method known as a use-availability design (or, alternatively, a presence-pseudo-absence or case-control design). Although these spatial point, count and presence-absence methods are widely used, the ecological literature is not explicit about their connections and how their parameter estimates and predictions should be interpreted. The objective of this study is to recapitulate some recent statistical results and illustrate that, under certain assumptions, each method can be motivated by the same underlying spatial inhomogeneous Poisson point process (IPP) model in which the intensity function is modelled as a log-linear function of covariates. The Poisson likelihood used for count data is a discrete approximation of the IPP likelihood. Similarly, the presence-absence design will approximate the IPP likelihood, but only when spatial units (i.e. pixels) are extremely small (Electronic Journal of Statistics, 2010, 4, 1151-1201). For larger pixel sizes, presence-absence designs do not differentiate between one or multiple observations within each pixel, hence leading to information loss. Logistic regression is often used to estimate the parameters of the IPP model using point data. 
Although the response variable is defined as 0 for the availability points, these zeros do not serve as true absences as is often assumed; rather, their role is to approximate the integral in the denominator of the IPP likelihood (The Annals of Applied Statistics, 2010, 4, 1383-1402). Because of this common misconception, the estimated exponential function of the linear predictor (i.e. the resource selection function) is often assumed to be proportional to occupancy. Like IPP and count models, this function is proportional to the expected density of observations. Understanding these (dis-)similarities between different species distribution modelling techniques should improve biological interpretation of spatial models and therefore advance ecological and methodological cross-fertilization. © 2011 The Authors. Methods in Ecology and Evolution © 2011 British Ecological Society.
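The claimed equivalence between the count and point-process views can be checked numerically: over a grid, the Poisson count log-likelihood and the discretised IPP log-likelihood differ only by a constant that does not involve the coefficients, so they are maximised at the same parameter values. A small sketch with made-up cells:

```python
import math

# Hypothetical grid: (covariate value x_j, animal count n_j) per cell.
cells = [(-1.0, 2), (0.0, 1), (0.5, 0), (1.2, 3)]
area = 0.25   # cell area (hypothetical units)

def pois_ll(b0, b1):
    """Poisson count log-likelihood with mean exp(b0 + b1*x) * area."""
    ll = 0.0
    for x, n in cells:
        mu = math.exp(b0 + b1 * x) * area
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

def ipp_ll(b0, b1):
    """Discretised IPP log-likelihood: sum of log-intensity over observed
    points minus the integrated intensity over the study area."""
    ll = -sum(math.exp(b0 + b1 * x) * area for x, _ in cells)
    for x, n in cells:
        ll += n * (b0 + b1 * x)
    return ll

# The gap between the two log-likelihoods is the same at any coefficients,
# so both surfaces have the same shape (and the same maximiser).
d1 = pois_ll(0.1, 0.5) - ipp_ll(0.1, 0.5)
d2 = pois_ll(-0.3, 1.2) - ipp_ll(-0.3, 1.2)
print(round(d1 - d2, 10))   # ~0
```

Algebraically the gap is the beta-free constant sum(n_j * log(area) - log(n_j!)), which is why count models and IPP models estimate the same intensity function.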

Meng S.X., Biometrics Unit | Huang S., Biometrics Unit
Canadian Journal of Forest Research | Year: 2010

Three nonlinear mixed models with and without incorporating a function to model the serial correlation were compared with regard to their predictive abilities. Results showed that accounting for the serial correlation using the spatial power (SP(POW)) or Toeplitz (TOEP(X)) functions resulted in a large reduction in serial correlation and improved the fit of the models. The improved model fits, however, did not unanimously translate into improved model predictions when evaluated under different scenarios. In many cases, the models with the simple independent and identically distributed structure outperformed the models with the SP(POW) or TOEP(X) structure in terms of the models' predictive ability. We also examined the effect of adjusting predictions based on the prediction theorem within the nonlinear mixed modeling framework. It was shown that, in general, the adjusted predictions had lower errors than those without adjustment, but the differences were small in many cases. The adjustment with three prior measurements was better in predictions than the adjustment with only one or two prior measurements for the models with the TOEP(X) structure, but not for SP(POW). A theoretical derivation was developed to prove the insensitivity of the models with the SP(POW) structure to the number of prior measurements. The implications of accounting for serial correlation on model inference and model predictions were discussed.
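For readers unfamiliar with the SP(POW) structure: it generalises AR(1)-style decay to unequally spaced measurements by raising the correlation parameter to the power of the time gap, which is why predictions with it are insensitive to how many prior measurements are used. A minimal sketch (hypothetical remeasurement ages):

```python
def sp_pow_matrix(times, rho):
    """Spatial power, SP(POW), correlation structure: correlation between
    measurements at times t_i and t_j is rho ** |t_i - t_j|, so it decays
    smoothly with the (possibly irregular) measurement gap."""
    return [[rho ** abs(ti - tj) for tj in times] for ti in times]

R = sp_pow_matrix([0, 5, 7, 15], 0.9)    # remeasurement ages (hypothetical)
print([round(v, 3) for v in R[0]])       # [1.0, 0.59, 0.478, 0.206]
```

A Toeplitz, TOEP(X), structure would instead estimate a free correlation per lag band, which is more flexible but, as the abstract notes, makes predictions more sensitive to the number of prior measurements available.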

Yang Y., Biometrics Unit | Huang S., Biometrics Unit
Canadian Journal of Forest Research | Year: 2011

Nonlinear mixed models have become popular in forestry applications, and various methods have been proposed for fitting such models. However, it is difficult or even confusing to choose which method to use, and there is not much relevant information available, especially in the forestry context. The main objective of this study was to compare three commonly used methods for fitting nonlinear mixed models: the first-order, the first-order conditional expectation, and the adaptive Gaussian quadrature methods. Both the maximum likelihood and restricted maximum likelihood parameter estimation techniques were evaluated. Three types of data common in forestry were used for model fitting and model application. It was found that the first-order conditional expectation method provided more accurate and precise predictions for two models developed from data with more observations per subject. For one model developed on data with fewer observations per subject, the first-order method provided better model predictions. All three models fitted by the first-order method produced some biologically unrealistic predictions, and the problem was more obvious on the data with fewer observations per subject. For all three models fitted by the first-order and first-order conditional expectation methods, the maximum likelihood and restricted maximum likelihood fits and the resulting model predictions were very close.
