San Diego, CA, United States

Hyun S.-Y., University of Massachusetts Dartmouth | Maunder M.N., Quantitative Resource Assessment LLC | Rothschild B.J., University of Massachusetts Dartmouth
ICES Journal of Marine Science | Year: 2014

Many fish stock assessments use a survey index and assume a stochastic error in the index, on which a likelihood function of the associated parameters is built and optimized for parameter estimation. The purpose of this paper is to evaluate the assumption that the standard deviation of the difference in the log-transformed index is approximately equal to the coefficient of variation of the index, and to examine the homo- and heteroscedasticity of the errors. The traditional practice is to assume a common variance of the index errors over time for estimation convenience. However, if additional information is available about year-to-year variability in the errors, such as a year-to-year coefficient of variation, then we suggest that the heteroscedasticity assumption should be considered. We examined five methods that assume a multiplicative error in the survey index and two methods that assume an additive error: M1, homoscedasticity in the multiplicative error model; M2, heteroscedasticity in the multiplicative error model; M3, M2 with approximate weighting and an additional parameter for scaling variance; M4-M5, pragmatic practices; M6, homoscedasticity in the additive error model; M7, heteroscedasticity in the additive error model. M1-M2 and M6-M7 are strictly based on statistical theory, whereas M3-M5 are not. The heteroscedasticity methods M2, M3, and M7 consistently outperformed the other methods, but we select M2 as the best method: M3 requires one more parameter than M2, and M7 has problems arising from the use of the raw scale rather than the logarithmic transformation. Furthermore, the fitted survey index in M7 can be negative although its domain is positive. © 2014 International Council for the Exploration of the Sea.
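The following is a minimal sketch, not the authors' code, of the two multiplicative-error likelihoods contrasted in this abstract: a lognormal negative log-likelihood with a single shared sigma (M1-style homoscedasticity) versus year-specific sigmas derived from year-to-year CVs via sigma = sqrt(ln(1 + CV^2)), which is approximately the CV itself when the CV is small (M2-style heteroscedasticity). All index values and CVs below are hypothetical.

```python
import numpy as np

def lognormal_nll(index_obs, index_pred, sigma):
    """Negative log-likelihood (up to a constant) for a multiplicative
    (lognormal) survey-index error.

    sigma may be a scalar (homoscedastic, M1-style) or a per-year array
    (heteroscedastic, M2-style)."""
    sigma = np.broadcast_to(np.asarray(sigma, dtype=float), index_obs.shape)
    resid = np.log(index_obs) - np.log(index_pred)
    return np.sum(np.log(sigma) + 0.5 * (resid / sigma) ** 2)

def sigma_from_cv(cv):
    """Exact lognormal SD on the log scale; approximately equal to the CV
    when the CV is small."""
    return np.sqrt(np.log(1.0 + np.asarray(cv, dtype=float) ** 2))

# Hypothetical observed and model-predicted survey indices with year-specific CVs.
obs = np.array([1.20, 0.95, 1.40, 0.80])
pred = np.array([1.10, 1.00, 1.30, 0.90])
cv = np.array([0.20, 0.35, 0.25, 0.50])

nll_homo = lognormal_nll(obs, pred, sigma=0.3)             # one shared sigma (M1-style)
nll_hetero = lognormal_nll(obs, pred, sigma_from_cv(cv))   # year-specific sigmas (M2-style)
```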


The stock-recruitment relationship is one of the most uncertain processes of fish population dynamics, and is highly influential with respect to fisheries management advice. The stock-recruitment relationship has a direct impact on reference points commonly used in contemporary fisheries management. Simulation analysis has shown that the steepness of the Beverton-Holt stock-recruitment relationship is difficult to estimate for most fish stocks, which has led to the use of proxy reference points. Proxy maximum sustainable yield reference points based on spawning biomass-per-recruit, which are commonly used when the stock-recruitment relationship is uncertain, are a linear function of steepness. Risk in terms of lost yield is generally lower when steepness is underestimated than when it is overestimated, because the yield curve is flat when steepness is high (close to one: recruitment is independent of stock size), indicating that using a lower value of steepness might be appropriate. Simulation analysis based on data for summer flounder in the US mid-Atlantic indicates that steepness can be estimated from the data. Steepness is estimated to be close to one, and a high steepness is supported by estimates for related species and by life history theory. The current target (F35%) and threshold (F40%) spawning biomass-per-recruit reference points used for summer flounder imply steepness values of 0.73 and 0.66, respectively, for the Beverton-Holt stock-recruitment relationship. © 2012 Elsevier B.V.
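As a minimal sketch (not taken from the paper), the standard steepness parameterization of the Beverton-Holt stock-recruitment curve discussed here can be written in a few lines; R0 is unfished recruitment, phi0 is unfished spawning biomass per recruit, and steepness h is the fraction of R0 produced at 20% of unfished spawning biomass. The numerical values are purely illustrative.

```python
import numpy as np

def beverton_holt(ssb, h, R0, phi0):
    """Beverton-Holt recruitment in the steepness (h) parameterization.

    h is the fraction of unfished recruitment R0 produced when spawning
    biomass is at 20% of its unfished level S0 = R0 * phi0."""
    S0 = R0 * phi0
    return 4.0 * h * R0 * ssb / (S0 * (1.0 - h) + (5.0 * h - 1.0) * ssb)

# Illustrative values only: recruitment at 20% of unfished spawning biomass
# equals h * R0 by construction, and with h near one recruitment is nearly
# independent of spawning biomass, as the abstract notes.
R0, phi0 = 1000.0, 1.0
for h in (0.66, 0.73, 0.99):
    S20 = 0.2 * R0 * phi0
    print(h, beverton_holt(S20, h, R0, phi0) / R0)  # prints approximately h
```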


Maunder M.N., Quantitative Resource Assessment LLC | Deriso R.B., Inter-American Tropical Tuna Commission
Canadian Journal of Fisheries and Aquatic Sciences | Year: 2011

Multiple factors acting on different life stages influence population dynamics and complicate the assessment and management of populations. To provide appropriate management advice, the data should be used to determine which factors are important and which life stages they impact. It is also important to consider density dependence because it can modify the impact of some factors. We develop a state-space multistage life cycle model that allows density dependence and environmental factors to impact different life stages. Models are ranked using a two-covariates-at-a-time stepwise procedure based on AICc model averaging to reduce the possibility of excluding factors that are detectable in combination, but not alone. Impact analysis is used to evaluate the impact of factors on the population. The framework is illustrated by application to delta smelt (Hypomesus transpacificus), a threatened species that is potentially impacted by multiple anthropogenic factors. Our results indicate that density dependence and a few key factors impact the delta smelt population. Temperature, prey, and predators dominated the factors supported by the data and operated on different life stages. The included factors explain the recent declines in delta smelt abundance and may provide insight into the cause of the pelagic species decline in the San Francisco Estuary.
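A minimal sketch of the AICc and Akaike-weight calculations behind the model ranking and averaging described above (this is the standard small-sample AIC correction, not the authors' implementation; the candidate-model values are hypothetical).

```python
import numpy as np

def aicc(nll, k, n):
    """Small-sample corrected Akaike information criterion.

    nll: minimized negative log-likelihood, k: number of estimated
    parameters, n: effective number of observations."""
    aic = 2.0 * k + 2.0 * nll
    return aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)

def akaike_weights(aicc_values):
    """Model-averaging weights from AICc differences (smaller AICc = more support)."""
    a = np.asarray(aicc_values, dtype=float)
    delta = a - a.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical candidate models (e.g., different covariate pairs); values illustrative.
nlls = [120.3, 118.9, 121.5]
ks = [5, 7, 6]
n = 40
weights = akaike_weights([aicc(nll, k, n) for nll, k in zip(nlls, ks)])
```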


Maunder M.N., Quantitative Resource Assessment LLC | Deriso R.B., Inter-American Tropical Tuna Commission | Hanson C.H., Hanson Inc.
Fisheries Research | Year: 2015

Factors impacting the survival of individuals between two life stages have traditionally been evaluated using log-linear regression of the ratio of abundance estimates for the two stages. These analyses require simplifying assumptions that may impact the results of hypothesis tests and subsequent conclusions about the factors impacting survival. Modern statistical methods can reduce the dependence of analyses on these simplifying assumptions. State-space models and the related concept of random effects allow the modeling of both process and observation error. Nonlinear models and associated estimation techniques allow for flexibility in the system model, including density dependence, and in error structure. Population dynamics models link information from one stage to the next and over multiple time periods and automatically accommodate missing observations. We investigate the impact of observation error, density dependence, population dynamics, and data for multiple stages on hypothesis testing using data for longfin smelt in the San Francisco Bay-Delta. © 2014 The Authors.
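The sketch below illustrates only the traditional approach that the abstract contrasts with state-space models: a log-linear regression of the ratio of abundance indices at two successive life stages on a covariate. It is not the authors' analysis, and the indices and covariate values are hypothetical placeholders.

```python
import numpy as np

# Traditional approach: regress the log of the stage-2/stage-1 abundance ratio
# on covariates, which lumps process and observation error together.
# All values below are hypothetical.
n_early = np.array([500.0, 430.0, 610.0, 380.0, 290.0])   # stage-1 abundance index
n_late = np.array([60.0, 55.0, 80.0, 30.0, 25.0])         # stage-2 abundance index
flow = np.array([1.2, 0.9, 1.5, 0.7, 0.6])                # hypothetical covariate

y = np.log(n_late / n_early)                               # log survival ratio
X = np.column_stack([np.ones_like(flow), flow])            # intercept + covariate
beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # ordinary least-squares fit

# A state-space alternative would instead model process error (true survival
# variation) and observation error in each index separately, which is the
# advantage the abstract highlights.
```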


Catch-at-age (or catch-at-length) data are one of the major components of most modern statistical stock assessment methods. Catch-at-age data provide, among other things, information about gear selectivity and recruitment strength. Catch-at-age data can also have a large influence on the estimates of fishing mortality, absolute abundance, and trends in abundance. The multinomial distribution describes the theoretical sampling process that is used to collect catch-at-age data, but only under the assumption of random sampling. Sampling designs generally employed to collect fishery-related data lead to age-composition estimates that depart from the strict theoretical multinomial probability distribution. Lack of independence can, for example, be due to size- or age-specific schooling or aggregating, causing positive correlations among individuals and overdispersion. An additional cause of inadequacy of the multinomial assumption is model misspecification. Therefore, the effective sample size that should be used in an assessment model can be much smaller than the actual sample size. This can cause inappropriate weighting among data sets and negatively biased estimates of uncertainty. I use simulation analysis to evaluate five methods to estimate the effective sample size for catch-at-age data: (1) iterative multinomial likelihood; (2) normal approximation, using binomial variance; (3) lognormal likelihood with variance proportional to the inverse of the proportion; (4) Dirichlet likelihood; and (5) a multivariate normal approximation. The results show that all five methods perform similarly, but do not reduce estimation error relative to using the actual sample size unless the effective sample size is about one fifth of the actual sample size. All but (4) produced positively biased estimates of the effective sample size. If the effective sample size is not known within half an order of magnitude, I recommend using the lognormal likelihood with variance proportional to the inverse of the proportion and a regression against the actual sample size. This method is less computationally intensive than (1), more robust than (2) and (4), and produces the least biased estimates of effective sample size, except for (4). Unlike (1), it can be included in Bayesian analysis. © 2011 Elsevier B.V.
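As a minimal sketch (not the paper's code), one widely used effective-sample-size calculation in the spirit of the normal approximation with binomial variance, method (2), compares the multinomial variance implied by the fitted proportions with the observed squared residuals for one year of composition data. The observed and fitted compositions below are hypothetical.

```python
import numpy as np

def effective_sample_size(p_obs, p_fit):
    """Effective sample size for one year of composition data.

    Ratio of the binomial variance implied by the fitted proportions to the
    observed squared residuals; assumes p_obs and p_fit each sum to one."""
    p_obs = np.asarray(p_obs, dtype=float)
    p_fit = np.asarray(p_fit, dtype=float)
    return np.sum(p_fit * (1.0 - p_fit)) / np.sum((p_obs - p_fit) ** 2)

# Hypothetical observed and model-fitted age compositions (illustrative only).
p_obs = np.array([0.10, 0.30, 0.35, 0.15, 0.10])
p_fit = np.array([0.12, 0.28, 0.33, 0.17, 0.10])
n_eff = effective_sample_size(p_obs, p_fit)   # compare with the actual sample size
```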
