Overland Park, KS, United States

Dalton J.E., Health Outcomes Sciences | Dalton J.E., Case Western Reserve University
Statistics in Medicine | Year: 2013

Calibration in binary prediction models, that is, the agreement between model predictions and observed outcomes, is an important aspect of assessing the models' utility for characterizing risk in future data. A popular technique for assessing model calibration, first proposed by D. R. Cox in 1958, involves fitting a logistic model incorporating an intercept and a slope coefficient for the logit of the estimated probability of the outcome; good calibration is evident if these parameters do not appreciably differ from 0 and 1, respectively. However, in practice, the form of miscalibration may sometimes be more complicated. In this article, we expand the Cox calibration model to allow for more general parameterizations and derive a relative measure of miscalibration between two competing models from this more flexible model. We present an example implementation using data from the US Agency for Healthcare Research and Quality. © 2012 John Wiley & Sons, Ltd.
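The Cox calibration check described in this abstract is straightforward to sketch: regress the observed binary outcomes on the logit of the predicted probabilities and examine whether the fitted intercept and slope depart from 0 and 1. The Newton-Raphson fitter and the simulated data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cox_calibration(y, p_hat, n_iter=25):
    """Fit the Cox (1958) calibration model: logit P(y=1) = a + b * logit(p_hat).

    Good calibration corresponds to a ~ 0 and b ~ 1.
    Returns (intercept a, slope b), fit by Newton-Raphson.
    """
    x = np.log(p_hat / (1 - p_hat))            # logit of each predicted probability
    X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept + slope
    beta = np.zeros(2)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1 / (1 + np.exp(-eta))            # fitted probabilities
        w = mu * (1 - mu)
        grad = X.T @ (y - mu)                  # score vector
        hess = X.T @ (X * w[:, None])          # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Simulated example: predictions that are well calibrated by construction,
# so the fitted intercept should be near 0 and the slope near 1.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=20000)
y = (rng.uniform(size=p.size) < p).astype(float)
a, b = cox_calibration(y, p)
```

The more general parameterizations the article proposes would add columns to the design matrix; the two-parameter fit above is the classical baseline being extended.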

Mascha E.J., Health Outcomes Sciences | Imrey P.B., Cleveland Clinic
Statistics in Medicine | Year: 2010

Frequently in clinical studies a primary outcome is formulated from a vector of binary events. Several methods exist to assess treatment effects on multiple correlated binary outcomes, including comparing groups on the occurrence of at least one among the outcomes ('collapsed composite'), on the count of outcomes observed per subject, on individual outcomes adjusting for multiplicity, or with multivariate tests postulating either common or distinct effects across outcomes. We focus on a 1-df distinct effects test in which the estimated outcome-specific treatment effects from a GEE model are simply averaged, and compare it with other methods on clinical and statistical grounds. Using a flexible method to simulate multivariate binary data, we show that the relative efficiencies of the assessed tests depend in a complex way on the magnitudes and variabilities of component incidences and treatment effects, as well as correlations among component events. While other tests are easily 'driven' by high-frequency components, the average effect GEE test is not, since it averages the log odds ratios unweighted by the component frequencies. Thus, the average effect test is relatively more powerful than other tests when lower frequency components have stronger associations with a treatment or other predictor, but less powerful when higher frequency components are more strongly associated. In studies where relative effects are at least as important as absolute effects, or when lower frequency components are clinically most important, this test may be preferred. Two clinical trials are discussed and analyzed, and recommendations for practice are made. Copyright © 2010 John Wiley & Sons, Ltd.
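The core of the "average effect" idea, averaging outcome-specific log odds ratios without weighting by component frequency, can be illustrated with plain 2x2 tables. The GEE machinery is omitted here, and the continuity correction and simulated counts are assumptions made for the sketch, not the authors' procedure.

```python
import numpy as np

def log_odds_ratio(y, group):
    """Log odds ratio for one binary outcome between two groups (coded 0/1)."""
    a = np.sum((group == 1) & (y == 1)) + 0.5  # +0.5: Haldane-Anscombe correction
    b = np.sum((group == 1) & (y == 0)) + 0.5
    c = np.sum((group == 0) & (y == 1)) + 0.5
    d = np.sum((group == 0) & (y == 0)) + 0.5
    return np.log(a * d / (b * c))

def average_effect(Y, group):
    """Unweighted mean of the outcome-specific log odds ratios.

    Y is an (n, K) matrix of K correlated binary component outcomes.
    Each component contributes equally regardless of its frequency, so a
    rare-but-strongly-affected component is not drowned out by a common one.
    """
    return float(np.mean([log_odds_ratio(Y[:, k], group) for k in range(Y.shape[1])]))

# Hypothetical counts: 100 treated, 100 controls; treatment lowers the odds
# of both a low-frequency and a higher-frequency component event.
g = np.repeat([1, 0], 100)
y1 = np.concatenate([np.ones(10), np.zeros(90), np.ones(20), np.zeros(80)])
y2 = np.concatenate([np.ones(30), np.zeros(70), np.ones(50), np.zeros(50)])
Y = np.column_stack([y1, y2])
avg = average_effect(Y, g)  # negative: treatment reduces odds on average
```

In the actual test, the component log odds ratios come from a fitted GEE model and the averaged estimate is compared with its standard error; the averaging step itself is exactly this unweighted mean.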

OBJECTIVE: We aimed to evaluate variations in patient experience measures across surgical specialties and to assess the impact of further case-mix adjustment. BACKGROUND: Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) is a publicly reported survey of patients’ hospital experiences that directly influences Medicare reimbursement. METHODS: All adult surgical inpatients meeting criteria for HCAHPS sampling from 2013 to 2014 at a single academic center were identified. HCAHPS measures were analyzed according to published top-box and Star-rating methodologies and were dichotomized (“high” vs “low”). Multivariable logistic regression was used to identify independent associations of high patient scores on various HCAHPS measures with specialty, diagnosis-related group complexity, cancer diagnosis, sex, and emergency admission after adjusting for HCAHPS case-mix adjusters (education, overall health status, language, and age). RESULTS: We identified 36,551 eligible patients, of whom 30.8% (n = 11,273) completed HCAHPS. Women [odds ratio (OR) 0.78, 95% confidence interval (CI) 0.72–0.85, P < 0.001], complex cases (OR 0.90, 95% CI 0.82–0.99, P = 0.02), and emergency admissions (OR 0.67, 95% CI 0.55–0.82, P < 0.001) had lower Star scores on adjusted analysis, whereas patients with a cancer diagnosis had higher Star scores (OR 1.15, 95% CI 1.03–1.29, P = 0.01). Using general surgery as the reference, Star scores varied significantly across 12 specialties (range of ORs: 0.65 for plastic surgery to 1.29 for transplant surgery). Patient responses to individual composite scores (pain, care transition, physician, and nurse) also varied by specialty. CONCLUSIONS: HCAHPS case-mix adjustment does not include adjustment for specialty or diagnosis, which may result in artificially lower scores for centers that provide a high level of complex care. Further research is needed to ensure that HCAHPS is an unbiased comparison tool.
Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.
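The odds ratios and confidence intervals quoted above come from logistic-regression coefficients; the conversion is simple exponentiation of the coefficient and its Wald interval. The coefficient and standard error below are hypothetical values chosen only to roughly reproduce the reported female-sex OR of 0.78 (95% CI 0.72–0.85).

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald confidence interval (default 95%)."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# Illustrative inputs only: beta ~ -0.248 with SE ~ 0.042 yields an OR
# near 0.78 with a CI of roughly 0.72-0.85, like the female-sex effect above.
or_, lo, hi = odds_ratio_ci(-0.248, 0.042)
```

An OR below 1 with a CI excluding 1, as here, is what the abstract reports as a significantly lower adjusted Star score.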

OBJECTIVE: We tested the primary hypothesis that surgical site infections (SSIs) are more common in patients who had longer periods of intraoperative low blood pressure. Our secondary hypothesis was that hospitalization is prolonged in patients experiencing longer periods of critically low systolic blood pressure (SBP) and/or mean arterial pressure (MAP). BACKGROUND: Hypotension compromises local tissue perfusion, thereby reducing tissue oxygenation. Hypotension might thus be expected to promote infection, but the extent to which low blood pressure contributes remains unclear. METHODS: We considered patients who had colorectal surgery lasting at least 1 hour at the Cleveland Clinic between 2009 and 2013. The association between duration of hypotensive exposure and development of SSI was assessed with logistic regression; the association between hypotensive exposure and duration of hospitalization was assessed with Cox proportional hazards regression. RESULTS: A total of 2521 patients were eligible for analysis. There was no adjusted association between SBP hypotension …
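The exposure variable in this kind of analysis, the duration of intraoperative hypotension, amounts to summing the time a pressure series spends below a threshold. A minimal sketch, assuming evenly spaced MAP readings; the threshold and values below are hypothetical and not the study's actual exposure definition:

```python
import numpy as np

def minutes_below(pressures, threshold, interval_min=1.0):
    """Cumulative minutes a blood-pressure series spends below a threshold.

    `pressures` are readings taken every `interval_min` minutes. This is a
    simplified sketch of a hypotensive-exposure variable, not the authors'
    exact definition.
    """
    return float(np.sum(np.asarray(pressures) < threshold) * interval_min)

map_readings = [75, 68, 62, 58, 55, 61, 70, 80]       # hypothetical MAP values, mmHg
exposure = minutes_below(map_readings, threshold=65)  # minutes with MAP < 65 mmHg
```

A per-patient exposure computed this way would then enter the logistic (SSI) and Cox proportional hazards (length-of-stay) models as a covariate.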

The authors received anecdotal practice information from clinicians indicating that when warfarin was initiated in the hospital setting, it may be associated with an increased length of stay (LOS): specifically, to achieve a desired minimum international normalized ratio (INR) of 2.0 before discharge in a subset of patients for whom clinicians did not consider post-discharge follow-up optimal. Given that oral thromboprophylactic anticoagulation with warfarin is the mainstay treatment for the prevention of stroke in atrial fibrillation (AF), the authors decided to look at hospitalized patients from this population to determine if a subset of these patients experienced an increased LOS. The study design entailed a retrospective chart review of consecutive patients admitted to a large, tertiary care, academic center. Patients were included if they were admitted with a primary, secondary, or most responsible diagnosis of paroxysmal or chronic AF. Medical records were audited over an 18-month period (February 1, 2009, to July 31, 2010) to determine the average LOS and to identify patients with a documented prolonged LOS secondary to a subtherapeutic INR at the time of potential discharge. Our final study cohort of 189 patients had an average LOS of 5.2 days (SD = 5.2). However, for eight (4.2%) of these patients, discharge was delayed by an additional 2.25 days on average (SD = 1.3) for reasons solely attributed to achieving a therapeutic INR.
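The cohort figures reported above imply a small but quantifiable bed-day burden, and the arithmetic is worth making explicit:

```python
# Back-of-envelope calculation from the cohort figures reported above.
cohort = 189           # patients in the final study cohort
delayed = 8            # patients whose discharge was delayed for INR reasons
extra_days_each = 2.25 # mean additional days of stay per delayed patient

share_delayed = delayed / cohort            # fraction of admissions delayed (~4.2%)
extra_bed_days = delayed * extra_days_each  # total added bed-days for the cohort
```

So the delay affected roughly 1 in 24 admissions and added 18 bed-days over 18 months in this single-center cohort.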
