MRC Biostatistics Unit, Cambridge, United Kingdom

Bird S.M., MRC Biostatistics Unit
Addiction Research and Theory | Year: 2010

Heroin users/injectors' risk of drugs-related death by sex and current age is weakly estimated both in individual cohorts of under 1000 clients, 5000 person-years or 50 drugs-related deaths and when using cross-sectional data. A workshop in Cambridge analysed six cohorts recruited according to a common European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) protocol from drug treatment agencies in Barcelona, Denmark, Dublin, Lisbon, Rome and Vienna in the 1990s; and, as external reference, opiate-user arrestees in France and Hepatitis C diagnosed ever-injectors in Scotland in 1993–2001, both followed by database linkage to December 2001. EMCDDA cohorts recorded approximately equal numbers of drugs-related deaths (864) and deaths from other non-HIV causes (865) during 106,152 person-years of follow-up. External cohorts contributed 376 drugs-related deaths (Scotland 195, France 181) and 418 deaths from non-HIV causes (Scotland 221, France 197) during 86,417 person-years of follow-up (Scotland 22,670, France 63,747). EMCDDA cohorts reported 707 drugs-related deaths in 81,367 person-years for males (8.7 per 1000 person-years, 95% CI: 8.1–9.4) but only 157 in 24,785 person-years for females (6.3 per 1000 person-years, 95% CI: 5.4–7.4). Except in external cohorts, relative risks by current age group were not particularly strong, and more modest in Poisson regression than in cross-sectional analyses: relative risk was 1.2 (95% CI: 1.0–1.4) for 35–44 year olds compared to 15–24 year olds, but 1.4 for males (95% CI: 1.2–1.6), and dramatically lower at 0.44 after the first year of follow-up (95% CI: 0.37–0.52). © 2010 Informa UK Ltd. All rights reserved; reproduction in whole or part not permitted.
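The relative risks quoted above come from Poisson regression of death counts on sex and current age with person-years at risk as an offset. As a minimal, hedged sketch of that style of analysis, the Python/statsmodels example below fits such a model to invented aggregated counts; none of the numbers are from the workshop cohorts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregated cohort data: drugs-related deaths and
# person-years at risk by sex and current age band. All counts are
# invented for illustration; they are not the workshop's data.
df = pd.DataFrame({
    "sex":    ["M", "M", "M", "F", "F", "F"],
    "age":    ["15-24", "25-34", "35-44"] * 2,
    "deaths": [150, 340, 217, 35, 80, 42],
    "pyears": [20000, 38000, 23367, 5500, 12000, 7285],
})

# Poisson regression with log person-years as an offset models the
# death *rate*; exponentiated coefficients are rate (relative risk)
# ratios by sex and current age group.
fit = smf.glm(
    "deaths ~ C(sex, Treatment('F')) + C(age, Treatment('15-24'))",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["pyears"]),
).fit()

print(np.exp(fit.params))      # rate ratios vs the reference levels
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```

Aggregated counts with an offset give the same estimates as an individual-level Poisson model here; the offset is what turns a count model into a rate model.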


Keogh R., London School of Hygiene and Tropical Medicine | White I., MRC Biostatistics Unit
Statistics in Medicine | Year: 2014

Exposure measurement error is a problem in many epidemiological studies, including those using biomarkers and measures of dietary intake. Measurement error typically results in biased estimates of exposure-disease associations, the severity and nature of the bias depending on the form of the error. To correct for the effects of measurement error, information additional to the main study data is required. Ideally, this is a validation sample in which the true exposure is observed. However, in many situations it is not feasible to observe the true exposure, but one or more repeated exposure measurements may be available, for example, blood pressure or dietary intake recorded at two time points. The aim of this paper is to provide a toolkit for measurement error correction using repeated measurements. We bring together methods covering classical measurement error and several departures from classical error: systematic, heteroscedastic and differential error. The correction methods considered are regression calibration, which is already widely used in the classical error setting, and moment reconstruction and multiple imputation, which are newer approaches with the ability to handle differential error. We emphasize practical application of the methods in nutritional epidemiology and other fields. We primarily consider continuous exposures in the exposure-outcome model, but we also outline methods for use when continuous exposures are categorized. The methods are illustrated using data from a study of the association between fibre intake and colorectal cancer, where fibre intake is measured using a diet diary and repeated measures are available for a subset. © 2014 The Authors.
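As a concrete, hedged illustration of the regression calibration step under classical error with a repeat measurement on a subset, the sketch below uses simulated data; the variable names, error variances and the continuous outcome are all assumptions for illustration, not the paper's fibre and colorectal cancer data.

```python
import numpy as np

rng = np.random.default_rng(2014)
n = 5000

# Synthetic data under the classical error model W_j = X + U_j.
# Every number here is invented; the outcome is continuous for
# simplicity, unlike the cancer outcome in the paper.
x  = rng.normal(20, 5, n)                 # true exposure, never observed
w1 = x + rng.normal(0, 4, n)              # main-study measurement
w2 = x + rng.normal(0, 4, n)              # repeated measurement
beta = 0.05
y = 1.0 + beta * x + rng.normal(0, 1, n)  # outcome generated from true X

subset = rng.random(n) < 0.3              # repeat available for ~30% only

# Step 1: estimate the error variance from replicate differences,
# using Var(W1 - W2) = 2 * Var(U) under classical error.
sigma_u2 = (w1 - w2)[subset].var(ddof=1) / 2

# Step 2: regression calibration replaces W with E[X | W], shrinking
# the observed measurement towards its mean by the reliability factor.
mu_w, sigma_w2 = w1.mean(), w1.var(ddof=1)
lam = (sigma_w2 - sigma_u2) / sigma_w2
x_hat = mu_w + lam * (w1 - mu_w)

# Step 3: fit the outcome model using the calibrated exposure.
naive_slope = np.polyfit(w1, y, 1)[0]     # attenuated towards zero
calib_slope = np.polyfit(x_hat, y, 1)[0]  # approximately unbiased
print(f"true {beta:.3f}  naive {naive_slope:.3f}  calibrated {calib_slope:.3f}")
```

Moment reconstruction and multiple imputation, the paper's newer alternatives, also use the outcome when reconstructing the exposure, which is what allows them to handle differential error; the calibration above deliberately does not.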


Bowden J., MRC Biostatistics Unit | Vansteelandt S., Ghent University
Statistics in Medicine | Year: 2011

'Instrumental Variable' (IV) methods provide a basis for estimating an exposure's causal effect on the risk of disease. In Mendelian randomization studies, where genetic information plays the role of the IV, IV analyses are routinely performed on case-control data, rather than prospectively collected observational data. Although it is a well-appreciated fact that ascertainment bias may invalidate such analyses, ad hoc assumptions and approximations are made to justify their use. In this paper we attempt to explain and clarify why they may fail and show how they can be adjusted for improved performance. In particular, we propose consistent estimators of the causal relative risk and odds ratio if a priori knowledge is available regarding either the population disease prevalence or the population distribution of the IV (e.g. population allele frequencies). We further show that if no such information is available, approximate estimators can be obtained under a rare disease assumption. We illustrate this with matched case-control data from the recently completed EPIC study, from which we attempt to assess the evidence for a causal relationship between C-reactive protein levels and the risk of Coronary Artery Disease. © 2010 John Wiley & Sons, Ltd.
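For context, a minimal sketch of the standard ratio (Wald) IV estimator that such Mendelian randomization analyses build on is given below, using synthetic prospectively sampled data rather than case-control data; it does not implement the paper's ascertainment-bias adjustments, which would additionally require the disease prevalence or the population distribution of the instrument.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2011)
n = 50000

# Synthetic cohort, not the EPIC data: g is a genetic instrument
# (allele count), u an unmeasured confounder, x the exposure
# (e.g. log CRP) and y a rare binary disease outcome.
g = rng.binomial(2, 0.3, n).astype(float)
u = rng.normal(size=n)
x = 0.4 * g + u + rng.normal(size=n)
lin = -4.5 + 0.3 * x + 0.8 * u            # true causal log OR for x is 0.3
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Wald (ratio) estimator: the instrument-outcome association divided
# by the instrument-exposure association. Because g is independent of
# u, confounding cancels; with a rare outcome the result approximates
# the causal log odds ratio.
beta_gx = sm.OLS(x, sm.add_constant(g)).fit().params[1]
beta_gy = sm.Logit(y, sm.add_constant(g)).fit(disp=0).params[1]
print("approximate causal log OR:", beta_gy / beta_gx)
```

Fitting this same estimator naively to case-control data, where sampling depends on y, is exactly the situation in which the paper shows ascertainment bias can invalidate the analysis.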


Grant
Agency: GTR | Branch: MRC | Program: | Phase: Intramural | Award Amount: 28.47K | Year: 2015

Clinical trials are used to test the effectiveness and safety of new treatments. In recent years they have become increasingly expensive and have high rates of failure, so methods that reduce the cost of clinical trials and improve the chances of reaching correct conclusions are important. New technology also means that new approaches to clinical trials are needed. This programme will conduct research into statistical methods for improving clinical trials, covering three main areas. The first is research into methods for incorporating the large amount of biological information that can be measured on patients: some patients may benefit more from a treatment than others, and we would like to be able to detect this when it is true. The second is to explore how clinical trials can be made more efficient by allowing new treatments to be added while the trial is under way; this will reduce costs and provide opportunities to ensure patients on a trial get the most suitable treatment for them. The third is to improve the statistical analysis of trials that use complicated ‘composite endpoints’, which measure several outcomes at once; this too reduces the cost of clinical trials. By working with doctors as well as statisticians we will make sure that this research is used in real clinical trials to benefit patients and decision-makers.


Grant
Agency: GTR | Branch: MRC | Program: | Phase: Intramural | Award Amount: 286.05K | Year: 2014

Every person is biologically unique: even individuals who are superficially similar may show differences at the genetic and biochemical level. This diversity has real implications for medicine, and helps to explain why apparently similar patients often respond very differently to the same therapies. However, we remain limited in our ability to account for such diversity in the clinic. “Precision medicine” refers to the emerging idea of using molecular measurements (such as gene sequences or protein levels) to match individual patients to the therapies from which they are most likely to benefit. Machine learning (ML) is a field that combines ideas from computer science, mathematics and statistics to find patterns in data. Our research explores how the power of statistics and ML can be exploited to enable precision medicine. Our long-term goal is to develop methods that can exploit large datasets to provide doctors with information that can help in treating patients. We work on the mathematical and computational side, inventing new ways of looking at complex data that can tease out relevant patterns, and we work closely with biomedical researchers in the UK, US and Europe.
