ReliaSoft Corporation | Tucson, AZ, United States

Guo H., ReliaSoft Corporation | Liao H., University of Tennessee at Knoxville
IEEE Transactions on Reliability | Year: 2012

Reliability Demonstration Testing (RDT) has been widely used in industry to verify whether a product has met a certain reliability requirement with a stated confidence level. To design RDTs, methods have been developed based on either the number of failures or the failure times. However, practitioners often have difficulty in determining which method to use for a specific design problem. In particular, the method based on the number of failures cannot be used when all the units are tested to failure, while the alternative based on failure times falls short in dealing with cases where no failures are expected. This paper elaborates on the two methods, and compares them from both practical and theoretical standpoints. The detailed discussions regarding the relationship between the two methods will help practitioners design RDTs, and understand when the two methods will lead to similar designs. A Weibull distribution is used in the relevant mathematical derivations, but the results can be extended to other widely used failure time distributions. Case studies are provided to demonstrate the use of the two methods in practice, and in developing equivalent RDT designs. © 2006 IEEE.
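As a concrete illustration of the design problem described above, the sketch below implements the standard cumulative-binomial approach to RDT sizing under an assumed, known Weibull shape parameter. It is a minimal sketch of the general technique, not a reproduction of the paper's derivations; the target reliability, confidence level, test duration, and shape parameter in the example are hypothetical.

```python
# A minimal sketch (not the authors' derivation) of the cumulative-binomial
# approach to RDT design under an assumed Weibull shape parameter beta.
from math import comb

def weibull_scaled_reliability(R_demo, t_demo, t_test, beta):
    """Reliability at the test duration t_test implied by a reliability
    target R_demo at the demonstration time t_demo, assuming a Weibull
    failure-time distribution with known shape beta."""
    return R_demo ** ((t_test / t_demo) ** beta)

def min_sample_size(R_demo, confidence, t_demo, t_test, beta, max_failures=0):
    """Smallest number of test units n such that observing at most
    `max_failures` failures in a test of length t_test demonstrates
    R_demo at t_demo with the stated confidence (cumulative binomial)."""
    R_T = weibull_scaled_reliability(R_demo, t_demo, t_test, beta)
    n = max_failures + 1
    while True:
        # P(observing <= max_failures failures) when reliability equals R_T
        prob = sum(comb(n, i) * (1 - R_T) ** i * R_T ** (n - i)
                   for i in range(max_failures + 1))
        if prob <= 1 - confidence:
            return n
        n += 1

# Example: demonstrate 90% reliability at 1000 h with 95% confidence,
# testing each unit for 2000 h, zero failures allowed, beta assumed to be 1.5.
print(min_sample_size(R_demo=0.90, confidence=0.95,
                      t_demo=1000, t_test=2000, beta=1.5))
```

For the zero-failure case the loop reduces to the closed form n >= ln(1 - CL) / ((T/t_d)^beta * ln R).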


Lin J., Luleå University of Technology | Lin J., Luleå Railway Research Center | Pulido J., ReliaSoft Corporation | Asplund M., Luleå University of Technology
Reliability Engineering and System Safety | Year: 2015

This paper undertakes a general reliability study using both classical and Bayesian semi-parametric degradation approaches. The goal is to illustrate how degradation data can be modelled and analysed to flexibly determine reliability and to support preventive maintenance decision making, based on a general data-driven framework. With the proposed classical approach, both accelerated life tests (ALT) and design of experiments (DOE) techniques are used to determine how each critical factor affects the prediction of performance. With the Bayesian semi-parametric approach, a piecewise constant hazard regression model is used to establish the lifetime using degradation data. Gamma frailties are included to explore the influence of unobserved covariates within the same group. Ideally, results from the classical and Bayesian approaches will complement each other. To demonstrate these approaches, this paper considers a case study of locomotive wheel-set reliability. The degradation data are prepared by considering an exponential and a power degradation path separately. The results show that both classical and Bayesian semi-parametric approaches are useful tools for analysing degradation data and can, therefore, support a company in decision making for preventive maintenance. The approach can be applied to other technical problems (e.g. other industries, other components). © 2014 Elsevier Ltd. All rights reserved.
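The paper's Bayesian piecewise constant hazard regression with gamma frailties is not reproduced here, but the classical degradation-analysis idea it builds on can be sketched briefly: fit an assumed degradation path to each unit and extrapolate to a failure threshold to obtain a pseudo failure time, which can then feed a lifetime model. The exponential path form, the wear threshold, and the measurements below are hypothetical and chosen only for illustration.

```python
# A minimal sketch, not the paper's Bayesian piecewise-constant hazard model:
# fit an assumed exponential degradation path y(t) = a*exp(b*t) to each unit
# by log-linear least squares and extrapolate to a failure threshold to get
# a pseudo failure time. The threshold and measurements below are invented.
import numpy as np

def pseudo_failure_time(times, measurements, threshold):
    """Fit log(y) = log(a) + b*t and solve y(t) = threshold for t."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(measurements, dtype=float)
    b, log_a = np.polyfit(t, np.log(y), 1)      # slope, intercept of log-linear fit
    return (np.log(threshold) - log_a) / b      # time at which the path hits threshold

# Hypothetical wear readings (mm) for one wheel-set at inspection times (days)
inspections = [100, 200, 300, 400]
wear = [1.2, 1.9, 3.1, 4.8]
print(pseudo_failure_time(inspections, wear, threshold=10.0))
```

A power path y(t) = a*t^b can be handled the same way by regressing log(y) on log(t) instead of t; the resulting pseudo failure times can then be fitted with an ordinary lifetime distribution.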


Guo H., ReliaSoft Corporation | Paynabar K., University of Michigan | Jin J., University of Michigan
IIE Transactions (Institute of Industrial Engineers) | Year: 2012

This article proposes a new method to develop multiscale monitoring control charts for an autocorrelated process that has an underlying unknown ARMA(2, 1) model structure. The Haar wavelet transform is used to obtain effective monitoring statistics by considering the process dynamic characteristics in both the time and frequency domains. Three control charts are developed on three selected levels of Haar wavelet coefficients in order to simultaneously detect changes in the process mean, the process variance, and the measurement error variance, respectively. A systematic method for automatically determining the optimal monitoring level of Haar wavelet decomposition is proposed that does not require the estimation of an ARMA model. It is shown that the proposed wavelet-based Cumulative SUM (CUSUM) chart on Haar wavelet detail coefficients is sensitive only to variance changes and robust to process mean shifts. This property makes it possible to monitor variance changes separately from mean shifts, an advantage over the traditional CUSUM chart. For mean shift detection, it is also shown that the proposed wavelet-based Exponentially Weighted Moving Average (EWMA) chart applied to the Haar wavelet scale coefficients detects small mean shifts more successfully than a direct EWMA chart. © 2012 Copyright Taylor and Francis Group, LLC.
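To make the monitoring-statistic construction concrete, the following is a minimal sketch of a single-level Haar decomposition with a one-sided CUSUM on the squared detail coefficients as a stand-in for variance monitoring. The paper's procedure for selecting the optimal decomposition level and its exact chart statistics are not reproduced; the reference value k, the decision interval h, and the simulated data are placeholders.

```python
# A minimal sketch, assuming a single-level Haar decomposition and a one-sided
# tabular CUSUM on the squared detail coefficients; the reference value k,
# threshold h, and the decomposition level are placeholders, not the paper's design.
import numpy as np

def haar_level1(x):
    """Single-level Haar transform of a series: returns (scale, detail) arrays."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                     # drop an odd trailing point
    pairs = x.reshape(-1, 2)
    scale = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return scale, detail

def cusum_variance(detail, sigma0, k=0.5, h=5.0):
    """One-sided CUSUM on squared detail coefficients, standardized by the
    in-control variance sigma0**2; returns the index of the first alarm or -1."""
    s = 0.0
    for i, d in enumerate(detail):
        z = d ** 2 / sigma0 ** 2 - 1.0           # excess over in-control variance
        s = max(0.0, s + z - k)
        if s > h:
            return i
    return -1

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 2, 200)])  # variance shift
_, detail = haar_level1(x)
print(cusum_variance(detail, sigma0=1.0))
```

Because each Haar detail coefficient is a scaled difference of adjacent observations, a sustained mean shift largely cancels out of it, which is the intuition behind the robustness to mean shifts noted above.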


Mettas A., ReliaSoft Corporation
International Journal of Performability Engineering | Year: 2010

Design for Reliability (DFR) is not a new concept, but it has begun to receive a great deal of attention in recent years. What is DFR? What are the ingredients for designing for reliability, and what is involved in implementing DFR? Should DFR be part of a Design for Six Sigma (DFSS) program, and is DFR the same as DFSS? In this paper, we try to answer these questions and, at the same time, propose a general DFR process that can be adopted and deployed, with a few modifications, across different industries in a way that fits well into the overall product development process. © RAMS Consultants.


Szidarovszky F., ReliaSoft Corporation | Luo Y., University of Michigan
Reliability Engineering and System Safety | Year: 2014

Optimal resource allocation is first found for defending possible targets against random terrorist attacks subject to a budget constraint. The mathematical model is a nonconvex optimization problem that can be transformed into a convex problem by introducing new decision variables, so standard methods can be used for its solution. Without the budget constraint, the simplified model can be solved by a very simple algorithm that requires only the solution of a single-variable monotone equation. © 2013 Elsevier Ltd.
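The abstract does not state the model, so the sketch below assumes a common defender-allocation form in which expected damage is sum_i v_i*exp(-k_i*x_i) subject to sum_i x_i <= B; under this assumption the budget-constrained problem reduces to bisection on a single monotone function of the Lagrange multiplier. It illustrates the kind of single-variable monotone equation mentioned above, not the paper's actual formulation, and the target values, effectiveness coefficients, and budget are invented.

```python
# A hypothetical illustration only: the abstract does not give the model, so this
# sketch assumes expected damage sum_i v_i*exp(-k_i*x_i) with a budget B.
# The KKT conditions give x_i(lmbda) = max(0, ln(v_i*k_i/lmbda)/k_i), and the
# budget equation sum_i x_i(lmbda) = B is monotone in lmbda, so bisection works.
import math

def allocate(values, effectiveness, budget, tol=1e-10):
    """Budget-constrained allocation for the assumed exponential-damage model."""
    def spend(lmbda):
        return [max(0.0, math.log(v * k / lmbda) / k)
                for v, k in zip(values, effectiveness)]

    lo = 1e-12                                               # tiny multiplier: spend a lot
    hi = max(v * k for v, k in zip(values, effectiveness))   # large multiplier: spend nothing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(spend(mid)) > budget:             # spending too much: multiplier must rise
            lo = mid
        else:
            hi = mid
    return spend(0.5 * (lo + hi))

# Three hypothetical targets with values and defense effectiveness coefficients.
print(allocate(values=[10.0, 5.0, 1.0], effectiveness=[0.8, 1.0, 1.2], budget=4.0))
```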
