Collaborative for Research on Outcomes and Metrics

Anderson, United States


Tractenberg R.E.,Georgetown University | Tractenberg R.E.,Collaborative for Research on Outcomes and Metrics | Yumoto F.,Collaborative for Research on Outcomes and Metrics | Yumoto F.,University of Maryland University College | And 3 more authors.
PLoS ONE | Year: 2012

We used a Guttman model to represent responses to test items over time as an approximation of what is often referred to as "points lost" in studies of cognitive decline or interventions. To capture this meaning of "point loss" over four successive assessments, we assumed that once an item is incorrect, it cannot be correct at a later visit. If the loss of a point represents actual decline, then failure of an item to fit the Guttman model over time can be considered measurement error. This representation and definition of measurement error also permits testing the hypotheses that measurement error is constant for items in a test, and that error is independent of "true score", which are two key consequences of the definition of "measurement error" (and thereby, reliability) under Classical Test Theory. We tested the hypotheses by fitting our model to, and comparing our results from, four consecutive annual evaluations in three groups of elderly persons: a) cognitively normal (NC, N = 149); b) diagnosed with possible or probable AD (N = 78); and c) cognitively normal initially with a later diagnosis of AD (converters, N = 133). Of the 16 items that converged, error-free measurement of "cognitive loss" was observed for 10 items in NC, eight in converters, and two in AD. We found that measurement error, as we defined it, was inconsistent over time and across cognitive functioning levels, violating the theory underlying reliability and other psychometric characteristics, as well as key regression assumptions. © 2012 Tractenberg et al.
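The longitudinal Guttman constraint described above (once an item is failed, it cannot be passed at a later visit) can be checked directly on a subject-by-visit response matrix. A minimal sketch, assuming a 1 = correct / 0 = incorrect coding; the function name `guttman_errors` is illustrative, not the authors' code:

```python
import numpy as np

def guttman_errors(responses):
    """Count violations of the longitudinal Guttman pattern per subject.

    responses: array of shape (n_subjects, n_visits), 1 = item correct,
    0 = incorrect. Under the Guttman model for decline, once an item is
    failed it must stay failed, so any 0 -> 1 transition between
    consecutive visits is counted as measurement error.
    """
    responses = np.asarray(responses)
    # diff == 1 exactly when a 0 at visit t is followed by a 1 at visit t+1
    recoveries = np.diff(responses, axis=1) == 1
    return recoveries.sum(axis=1)
```

For example, the sequence 1,1,0,0 fits the model perfectly (zero errors), while 1,0,1,0 contains one recovery and hence one error under this definition.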

Tractenberg R.E.,Georgetown University | Tractenberg R.E.,Collaborative for Research on Outcomes and Metrics | Pietrzak R.H.,Yale University | Pietrzak R.H.,Collaborative for Research on Outcomes and Metrics
PLoS ONE | Year: 2011

Background/Aims: To explore different definitions of intra-individual variability (IIV) for summarizing performance on commonly utilized cognitive tests (Mini Mental State Exam; Clock Drawing Test); to compare them and their potential to differentiate clinically defined populations; and to examine their utility in predicting clinical change in individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Methods: Sample statistics were computed from ADNI cohorts with no cognitive diagnosis, a diagnosis of mild cognitive impairment (MCI), and a diagnosis of possible or probable Alzheimer's disease (AD). Nine different definitions of IIV were computed for each sample, and standardized effect sizes (Cohen's d) were computed for each of these definitions in 500 simulated replicates using scores on the Mini Mental State Exam and Clock Drawing Test. IIV was computed based on test items separately ('within test' IIV) and the two tests together ('across test' IIV). The best performing definition was then used to compute IIV for a third test, the Alzheimer's Disease Assessment Scale-Cognitive, and the simulations and effect sizes were again computed. All effect size estimates based on simulated data were compared to those computed based on the total scores in the observed data. Associations between total score and IIV summaries of the tests and the Clinical Dementia Rating were estimated to test the utility of IIV in predicting clinically meaningful changes in the cohorts over 12- and 24-month intervals. Results: Effect size estimates differed substantially depending on the definition of IIV and the test(s) on which IIV was based. IIV (coefficient of variation) summaries of the MMSE and Clock Drawing Test performed similarly to their total scores; the ADAS-Cog total performed better than its IIV summary.
Conclusion: IIV can be computed within (items) or across (totals) commonly utilized cognitive tests, and may provide a useful additional summary measure of neuropsychological test performance. © 2011 Tractenberg, Pietrzak.
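The best-performing IIV definition reported above is the coefficient of variation (SD divided by the mean), computed here across one examinee's item scores. A minimal sketch; the function name `iiv_cv` and the sample-SD (ddof=1) choice are assumptions, not taken from the paper:

```python
import numpy as np

def iiv_cv(item_scores):
    """Within-test IIV as the coefficient of variation (SD / mean).

    item_scores: 1-D sequence of one examinee's item (or subtest) scores.
    Larger values mean more scatter across items relative to the overall
    performance level; identical scores on every item give an IIV of 0.
    """
    scores = np.asarray(item_scores, dtype=float)
    mean = scores.mean()
    # Sample standard deviation (ddof=1) is an assumption of this sketch.
    return scores.std(ddof=1) / mean if mean != 0 else np.nan
```

An 'across test' IIV in the same spirit would apply the function to the vector of (suitably scaled) total scores from two or more tests rather than to the items of one test.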

Barnes L.L.,Rush University Medical Center | Yumoto F.,Collaborative for Research on Outcomes and Metrics | Capuano A.,Rush University Medical Center | Wilson R.S.,Rush University Medical Center | And 3 more authors.
Journal of the International Neuropsychological Society | Year: 2015

Older African Americans tend to perform more poorly on cognitive function tests than older Whites. One possible explanation for their poorer performance is that the tests used to assess cognition may not reflect the same construct in African Americans and Whites. Therefore, we tested measurement invariance, by race and over time, of a structured 18-test cognitive battery used in three epidemiologic cohort studies of diverse older adults. Multi-group confirmatory factor analyses were carried out with full-information maximum likelihood estimation in all models to capture as much information as was present in the observed data. Four indices of model fit were examined for each model: the comparative fit index (CFI), standardized root mean square residual (SRMR), root mean square error of approximation (RMSEA), and the model chi-square. We found that the most constrained model fit the data well (CFI = 0.950; SRMR = 0.051; RMSEA = 0.057, 90% confidence interval: 0.056, 0.059; model chi-square = 4600.68 on 862 df), supporting the characterization of this model of cognitive test scores as invariant over time and racial group. These results support the conclusion that the cognitive test battery used in the three studies is invariant across race and time and can be used to assess cognition among African Americans and Whites in longitudinal studies. Furthermore, the lower performance of African Americans on these tests is not due to bias in the tests themselves but rather likely reflects differences in social and environmental experiences over the life course. (JINS, 2016, 22, 66-75) Copyright © The International Neuropsychological Society 2015.
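The RMSEA and CFI reported above are standard functions of model and baseline chi-square statistics. A minimal sketch of the textbook formulas (the function names and the example inputs in the test are illustrative, not the study's data):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation.

    chi2: model chi-square; df: model degrees of freedom; n: sample size.
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))); values near or
    below ~0.06 are conventionally read as good fit.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: fitted model vs. baseline (independence) model.

    CFI = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 0);
    values of ~0.95 or above are conventionally read as good fit.
    """
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den if den > 0 else 1.0
```

For example, a model with chi-square 100 on 50 df in a sample of 201 gives RMSEA = sqrt(50 / 10000) ≈ 0.071.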

PubMed | Georgetown University, University of California at San Diego and Collaborative for Research on Outcomes and Metrics
Type: Journal Article | Journal: Open journal of philosophy | Year: 2015

To demonstrate challenges in the estimation of change in quality of life (QOL), data were taken from a completed clinical trial with negative results. Responses to 13 QOL items were obtained 12 months apart from 258 persons with Alzheimer's disease (AD) participating in a randomized, placebo-controlled clinical trial with two treatment arms. Two analyses to estimate whether change in QOL occurred over 12 months are described. A simple difference (later - earlier) was calculated from total scores (standard approach). A Qualified Change algorithm (novel approach) was applied to each item: differences in ratings were classified as improved, worsened, stayed poor, or stayed positive (fair, good, excellent). The strengths of evidence supporting a claim that QOL changed, derived from the two analyses, were compared by considering plausible alternative explanations for, and interpretations of, the results obtained under each approach. Under the total score approach, QOL total scores decreased, on average, in the two treatment arms (both -1.0). Totalling the subjective QOL item ratings collapses over items and suggests a potentially misleading overall level of change (or no change, as in the placebo arm). Leaving the items as the individual components of quality of life they were intended to capture, and qualifying the direction and amount of change in each, suggests that at least 17% of any group experienced change on every item, with 60% of all observed change being worsening. Summarizing QOL item ratings as a total score collapses over the face-valid, multi-dimensional components of the construct "quality of life"; Qualified Change provides more robust evidence of changes to, or enhancements of, quality of life.
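The item-level classification in the Qualified Change approach can be sketched as a simple rule on each (before, after) rating pair. The threshold separating "poor" from "positive" ratings and the higher-is-better coding are assumptions of this sketch, not the trial's actual scoring:

```python
def qualified_change(before, after, positive_threshold=2):
    """Classify change in one QOL item rating (illustrative sketch).

    Assumes higher ratings mean better quality of life and that ratings
    at or above `positive_threshold` count as positive (e.g. fair, good,
    excellent on a 0-3 scale) -- both are assumptions of this sketch.
    Returns one of: 'improved', 'worsened', 'stayed poor',
    'stayed positive'.
    """
    if after > before:
        return "improved"
    if after < before:
        return "worsened"
    return "stayed positive" if before >= positive_threshold else "stayed poor"
```

Applying this rule item by item, instead of differencing total scores, preserves the direction of change on each component of QOL, which is what allows the worsening reported above to surface even when average totals barely move.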
