Smolinski M.S., The Global Fund | Crawley A.W., The Global Fund | Baltrusaitis K., Boston Children's Hospital Informatics Program | Chunara R., Boston Children's Hospital Informatics Program | And 5 more authors.
American Journal of Public Health

Objectives. We summarized Flu Near You (FNY) data from the 2012–2013 and 2013–2014 influenza seasons in the United States. Methods. FNY collects limited demographic information upon registration and prompts users each Monday to report symptoms of influenza-like illness (ILI) experienced during the previous week. We calculated descriptive statistics and rates of ILI for the 2012–2013 and 2013–2014 seasons. We compared raw and noise-filtered ILI rates with ILI rates from the Centers for Disease Control and Prevention ILINet surveillance system. Results. More than 61 000 participants submitted at least 1 report during the 2012–2013 season, totaling 327 773 reports. Nearly 40 000 participants submitted at least 1 report during the 2013–2014 season, totaling 336 933 reports. Rates of ILI as reported by FNY tracked closely with ILINet in both timing and magnitude. Conclusions. With increased participation, FNY has the potential to serve as a viable complement to existing outpatient, hospital-based, and laboratory surveillance systems. Although many established systems have the benefits of specificity and credibility, participatory systems offer advantages in the areas of speed, sensitivity, and scalability.
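
A minimal sketch of how a weekly participatory ILI rate of the kind described above could be computed from symptom reports. The record layout and the symptom rule (fever plus cough or sore throat) are illustrative assumptions, not FNY's exact case definition or pipeline.

```python
from collections import defaultdict

# Illustrative weekly reports: (participant_id, ISO week, reported symptoms).
# The ILI rule below mirrors a common CDC-style definition; it is an assumption.
reports = [
    ("u1", "2013-W02", {"fever", "cough"}),
    ("u2", "2013-W02", set()),
    ("u3", "2013-W02", {"sore throat"}),
    ("u1", "2013-W03", set()),
    ("u2", "2013-W03", {"fever", "sore throat"}),
]

def is_ili(symptoms):
    """Fever plus cough or sore throat (assumed ILI definition)."""
    return "fever" in symptoms and bool({"cough", "sore throat"} & symptoms)

ili_counts, total_counts = defaultdict(int), defaultdict(int)
for _, week, symptoms in reports:
    total_counts[week] += 1
    if is_ili(symptoms):
        ili_counts[week] += 1

# Weekly ILI rate: share of submitted reports meeting the ILI definition.
for week in sorted(total_counts):
    rate = ili_counts[week] / total_counts[week]
    print(f"{week}: {100 * rate:.1f}% ILI")
```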

Santillana M., Harvard University | Nguyen A.T., Harvard University | Dredze M., Johns Hopkins University | Paul M.J., University of Colorado at Boulder | And 4 more authors.
PLoS Computational Biology

We present a machine learning-based methodology capable of providing real-time (“nowcast”) and forecast estimates of influenza activity in the US by leveraging data from multiple sources, including Google searches, Twitter microblogs, nearly real-time hospital visit records, and data from a participatory surveillance system. Our main contribution consists of combining multiple influenza-like illness (ILI) activity estimates, generated independently with each data source, into a single prediction of ILI using machine learning ensemble approaches. Our methodology exploits the information in each data source and produces accurate weekly ILI predictions for up to four weeks ahead of the release of CDC’s ILI reports. We evaluate the predictive ability of our ensemble approach during the 2013–2014 (retrospective) and 2014–2015 (live) flu seasons for each of the four weekly time horizons. Our ensemble approach demonstrates several advantages: (1) our ensemble method’s predictions outperform every prediction using each data source independently, (2) our methodology can produce predictions one week ahead of GFT’s real-time estimates with comparable accuracy, and (3) our two- and three-week forecast estimates have comparable accuracy to real-time predictions using an autoregressive model. Moreover, our results show that considerable insight is gained from incorporating disparate data streams, in the form of social media and crowdsourced data, into influenza predictions at all time horizons. © 2015 Santillana et al.
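
A minimal sketch of the ensemble step described above: several independently generated ILI estimates are stacked into a single prediction. The synthetic data, the use of scikit-learn, and the plain linear-regression combiner are assumptions for illustration, not the authors' exact model or data streams.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic weekly ILI estimates produced independently from four data streams
# (e.g. searches, Twitter, hospital visits, participatory reports); values are made up.
rng = np.random.default_rng(0)
true_ili = np.abs(np.sin(np.linspace(0, 6, 52))) * 4 + 1            # "ground-truth" CDC ILI (%)
per_source = np.column_stack([
    true_ili + rng.normal(0, s, 52) for s in (0.3, 0.6, 0.4, 0.5)   # noisy per-source estimates
])

# Fit the ensemble combiner on earlier weeks, then "nowcast" the held-out recent weeks.
train, test = slice(0, 40), slice(40, 52)
ensemble = LinearRegression().fit(per_source[train], true_ili[train])
nowcast = ensemble.predict(per_source[test])

rmse = np.sqrt(np.mean((nowcast - true_ili[test]) ** 2))
print(f"ensemble RMSE on held-out weeks: {rmse:.3f}")
```

For forecast horizons beyond the nowcast, each per-source estimate would be projected ahead before being combined, as the abstract's multi-week evaluation implies.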

Mandl K.D., Boston Children's Hospital Informatics Program | Mandl K.D., Harvard University | Olson K.L., Boston Children's Hospital Informatics Program | Olson K.L., Harvard University | And 3 more authors.
Journal of General Internal Medicine

Background: There is a natural assumption that quality and efficiency are optimized when providers consistently work together and share patients. Diversity in composition and recurrence of groups that provide face-to-face care to the same patients has not previously been studied. Objective: Claims data enable identification of the constellation of providers caring for a single patient. To indirectly measure teamwork and provider collaboration, we measure recurrence of provider constellations and cohesion among providers. Design: Retrospective analysis of commercial healthcare claims from a single insurer. Participants: Patients with claims for office visits and their outpatient providers. To maximize capture of provider panels, the cohort was drawn from the four regions with the highest plan coverage. Regional outpatient provider networks were constructed with providers as nodes and number of shared patients as links. Main Measures: Measures of cohesion and stability of provider constellations derived from the networks of providers to quantify patient sharing. Results: For 10,325 providers and their 521,145 patients, there were 2,641,933 collaborative provider pairs sharing at least one patient. Fifty-four percent shared only a single patient, and 19% shared two. Of 15,449,835 unique collaborative triads, 92% shared one patient, 5% shared two, and 0.2% shared ten or more. Patient constellations had a median of four providers. Any precise constellation recurred rarely: 89% of constellations with exactly two providers shared just one patient and only 4% shared more than two; 97% of constellations with exactly three providers shared just one patient. Four percent of constellations with 2+ providers were not at all cohesive, sharing only the hub patient. In the remaining constellations, a median of 93% of provider pairs shared at least one additional patient beyond the hub patient. Conclusion: Stunning variability in the constellations of providers caring for patients may challenge underlying assumptions about the current state of teamwork in healthcare. © 2014, Society of General Internal Medicine.
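
A minimal sketch of the patient-sharing network construction described above, with providers as nodes and shared-patient counts as edge weights. The toy claims rows and the use of networkx are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations
import networkx as nx

# Toy outpatient claims: (patient_id, provider_id). Real input would be
# office-visit claims from a single insurer; these rows are made up.
claims = [
    ("p1", "drA"), ("p1", "drB"), ("p1", "drC"),
    ("p2", "drA"), ("p2", "drB"),
    ("p3", "drB"), ("p3", "drD"),
]

# Group providers by patient, then increment an edge for every provider pair
# that shares that patient; the edge weight is the number of shared patients.
patients = {}
for patient, provider in claims:
    patients.setdefault(patient, set()).add(provider)

G = nx.Graph()
for providers in patients.values():
    for a, b in combinations(sorted(providers), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Collaborative provider pairs and how many patients each pair shares.
for a, b, data in G.edges(data=True):
    print(a, b, "share", data["weight"], "patient(s)")
```

Constellation recurrence and cohesion measures like those in the abstract can then be computed over this weighted graph, for example by checking how many provider pairs within a patient's constellation share at least one additional patient.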

Pfiffner P.B., Boston Children's Hospital Informatics Program | Pfiffner P.B., Harvard University | Oh J., Wellesley College | Miller T.A., Boston Children's Hospital Informatics Program | And 3 more authors.

Background: Implementing semi-automated processes to efficiently match patients to clinical trials at the point of care requires both detailed patient data and authoritative information about open studies. Objective: To evaluate the utility of the ClinicalTrials.gov registry as a data source for semi-automated trial eligibility screening. Methods: Eligibility criteria and metadata for 437 trials open for recruitment in four different clinical domains were identified in ClinicalTrials.gov. Trials were evaluated for up-to-date recruitment status, and eligibility criteria were evaluated for obstacles to automated interpretation. Finally, phone or email outreach to coordinators at a subset of the trials was made to assess the accuracy of contact details and recruitment status. Results: 24% (104 of 437) of trials declaring an open recruitment status list a study completion date in the past, indicating out-of-date records. Substantial barriers to automated eligibility interpretation in free-form text are present in 81% to 94% of all trials. We were unable to contact coordinators at 31% (45 of 146) of the trials in the subset, either by phone or by email. Only 53% (74 of 146) would confirm that they were still recruiting patients. Conclusion: Because ClinicalTrials.gov has entries on most US and many international trials, the registry could be repurposed as a comprehensive trial-matching data source. Semi-automated point-of-care recruitment would be facilitated by matching the registry's eligibility criteria against clinical data from electronic health records, but the current entries fall short. Ultimately, improved techniques in natural language processing will facilitate semi-automated complex matching. As immediate next steps, we recommend augmenting ClinicalTrials.gov data entry forms to capture key eligibility criteria in a simple, structured format. © 2014 Pfiffner et al.
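
A minimal sketch of the staleness check described in the Results: flagging registry records that claim open recruitment while their listed completion date has already passed. The record dictionaries and field names are assumptions for illustration and do not follow the actual ClinicalTrials.gov export schema.

```python
from datetime import date

# Toy registry records; field names are illustrative assumptions.
trials = [
    {"nct_id": "NCT00000001", "status": "Recruiting", "completion_date": date(2013, 6, 30)},
    {"nct_id": "NCT00000002", "status": "Recruiting", "completion_date": date(2016, 1, 15)},
    {"nct_id": "NCT00000003", "status": "Completed",  "completion_date": date(2012, 3, 1)},
]

def is_stale(trial, today):
    """Flag a record that claims open recruitment but whose stated
    study completion date is already in the past."""
    return trial["status"] == "Recruiting" and trial["completion_date"] < today

today = date(2014, 1, 1)
stale = [t["nct_id"] for t in trials if is_stale(t, today)]
print(f"{len(stale)} of {len(trials)} nominally open trials look out of date: {stale}")
```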
