Tractenberg R.E., Georgetown University | Tractenberg R.E., Collaborative for Research on Outcomes and Metrics | Yumoto F., Collaborative for Research on Outcomes and Metrics | Yumoto F., University of Maryland University College | And 3 more authors.

We used a Guttman model to represent responses to test items over time as an approximation of what is often referred to as "points lost" in studies of cognitive decline or interventions. To capture this meaning of "point loss" over four successive assessments, we assumed that once an item is incorrect, it cannot be correct at a later visit. If the loss of a point represents actual decline, then failure of an item to fit the Guttman model over time can be considered measurement error. This representation and definition of measurement error also permit testing the hypotheses that measurement error is constant for items in a test, and that error is independent of "true score", which are two key consequences of the definition of "measurement error" (and thereby, reliability) under Classical Test Theory. We tested these hypotheses by fitting our model to, and comparing our results from, four consecutive annual evaluations in three groups of elderly persons: a) cognitively normal (NC, N = 149); b) diagnosed with possible or probable AD (N = 78); and c) cognitively normal initially with a later diagnosis of AD (converters, N = 133). Of the 16 items that converged, error-free measurement of "cognitive loss" was observed for 10 items in NC, eight in converters, and two in AD. We found that measurement error, as we defined it, was inconsistent over time and across cognitive functioning levels, violating the theory underlying reliability and other psychometric characteristics, as well as key regression assumptions. © 2012 Tractenberg et al.
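
The "once incorrect, never again correct" rule means each item's 0/1 responses must be non-increasing across the four visits; any re-passed item counts as a Guttman violation (measurement error under this definition). A minimal sketch of that check in Python, using an illustrative response matrix that is an assumption, not the study's data:

```python
import numpy as np

# Hypothetical 0/1 responses to a single item: rows are subjects,
# columns are four successive annual assessments.
responses = np.array([
    [1, 1, 1, 0],  # fits the Guttman pattern: once lost, stays lost
    [1, 0, 1, 1],  # violates it: the item is re-passed after a failure
    [1, 1, 0, 0],  # fits
])

def fits_guttman(row):
    """True if the score sequence is non-increasing over visits,
    i.e. the item is never passed again after its first failure."""
    return all(a >= b for a, b in zip(row, row[1:]))

violations = [not fits_guttman(row) for row in responses]
print(violations)  # [False, True, False]
```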

Barnes L.L., Rush University Medical Center | Yumoto F., Collaborative for Research on Outcomes and Metrics | Capuano A., Rush University Medical Center | Wilson R.S., Rush University Medical Center | And 3 more authors.
Journal of the International Neuropsychological Society

Older African Americans tend to perform more poorly on cognitive function tests than older Whites. One possible explanation for their poorer performance is that the tests used to assess cognition may not reflect the same construct in African Americans and Whites. Therefore, we tested measurement invariance, by race and over time, of a structured 18-test cognitive battery used in three epidemiologic cohort studies of diverse older adults. Multi-group confirmatory factor analyses were carried out with full-information maximum likelihood estimation in all models to capture as much information as was present in the observed data. Model fit was assessed with four indices: the comparative fit index (CFI), standardized root mean square residual (SRMR), root mean square error of approximation (RMSEA), and the model χ². We found that the most constrained model fit the data well (CFI = 0.950; SRMR = 0.051; RMSEA = 0.057 (90% confidence interval: 0.056, 0.059); model χ² = 4600.68 on 862 df), supporting the characterization of this model of cognitive test scores as invariant over time and racial group. These results support the conclusion that the cognitive test battery used in the three studies is invariant across race and time and can be used to assess cognition among African Americans and Whites in longitudinal studies. Furthermore, the lower performance of African Americans on these tests is not due to bias in the tests themselves but more likely reflects differences in social and environmental experiences over the life course. (JINS, 2016, 22, 66-75) Copyright © The International Neuropsychological Society 2015.
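
The fit indices reported here have closed-form definitions in terms of the model and baseline χ² statistics: RMSEA = sqrt(max(χ² − df, 0) / (df(N − 1))) under the single-group convention, and CFI = 1 − max(χ²_m − df_m, 0) / max(χ²_b − df_b, χ²_m − df_m). A hedged sketch follows; the model χ² = 4600.68 on 862 df is from the abstract, but the sample size and baseline-model values below are placeholders, since the abstract does not report them:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (single-group convention)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index relative to the baseline (independence) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Model chi-square and df are from the abstract; n and the baseline
# chi-square/df are illustrative placeholders only.
print(round(rmsea(4600.68, 862, n=2000), 3))
print(round(cfi(4600.68, 862, chi2_base=76000.0, df_base=918), 3))
```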

Tractenberg R.E., Georgetown University | Tractenberg R.E., Collaborative for Research on Outcomes and Metrics | Pietrzak R.H., Yale University | Pietrzak R.H., Collaborative for Research on Outcomes and Metrics

Background/Aims: To explore different definitions of intra-individual variability (IIV) for summarizing performance on commonly utilized cognitive tests (Mini Mental State Exam; Clock Drawing Test); to compare these definitions and their potential to differentiate clinically defined populations; and to examine their utility in predicting clinical change in individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Methods: Sample statistics were computed from ADNI cohorts with no cognitive diagnosis, a diagnosis of mild cognitive impairment (MCI), and a diagnosis of possible or probable Alzheimer's disease (AD). Nine different definitions of IIV were computed for each sample, and standardized effect sizes (Cohen's d) were computed for each of these definitions in 500 simulated replicates using scores on the Mini Mental State Exam and Clock Drawing Test. IIV was computed based on test items separately ('within test' IIV) and on the two tests together ('across test' IIV). The best performing definition was then used to compute IIV for a third test, the Alzheimer's Disease Assessment Scale-Cognitive (ADAS-Cog), and the simulations and effect sizes were again computed. All effect size estimates based on simulated data were compared to those computed from the total scores in the observed data. Associations between total score and IIV summaries of the tests and the Clinical Dementia Rating were estimated to test the utility of IIV in predicting clinically meaningful changes in the cohorts over 12- and 24-month intervals. Results: Effect size estimates differed substantially depending on the definition of IIV and the test(s) on which IIV was based. IIV (coefficient of variation) summaries of the MMSE and Clock Drawing Test performed similarly to their total scores, whereas the ADAS-Cog total performed better than its IIV summary. Conclusion: IIV can be computed within (items) or across (totals) commonly utilized cognitive tests, and may provide a useful additional summary measure of neuropsychological test performance. © 2011 Tractenberg, Pietrzak.
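
The best performing definition here, the coefficient of variation, is the dispersion of a subject's item scores relative to their mean; Cohen's d is the standardized mean difference used to compare definitions across cohorts. A minimal sketch of both, with function names and data that are illustrative assumptions rather than the study's computations:

```python
import numpy as np

def iiv_cv(item_scores):
    """'Within test' IIV as a coefficient of variation: sample SD of one
    subject's item scores divided by their mean (mean assumed nonzero)."""
    s = np.asarray(item_scores, dtype=float)
    return s.std(ddof=1) / s.mean()

def cohens_d(a, b):
    """Standardized effect size: mean difference over the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Illustrative 0/1 item scores for one hypothetical subject.
print(iiv_cv([1, 1, 0, 1, 1, 0, 1]))
# Comparing IIV summaries between two hypothetical diagnostic groups.
print(cohens_d([0.2, 0.3, 0.25, 0.4], [0.5, 0.6, 0.55, 0.45]))
```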
