News Article | October 25, 2016
Hillary Clinton is heading for a landslide victory over Donald Trump. But wait. Trump is pulling ahead and could take the White House. No, Clinton has a clear lead and is gaining ground. Nearly every day, a new poll comes out touting a different result, leaving voters wondering what to believe.

The results of recent elections give even more reason for scepticism. In 2013, the Liberal Party in British Columbia, Canada, confounded expectations by winning the provincial election. The following year, polls overestimated support for Democrats in the US congressional elections. And this year, some pollsters underestimated Britons’ support for leaving the European Union in the Brexit referendum. These blunders have led some political commentators to say that polls are headed for the graveyard. “It’s harder and harder to find people willing to pay for any polls, given their poor performance this year and last year. They’re heavily discredited in the UK,” says Stephen Fisher, a political sociologist at the University of Oxford.

As the US presidential election approaches, pollsters are scrambling to improve their methods and avoid another embarrassing mistake. Their job is getting harder. Until as recently as ten years ago, polling organizations could tap into public opinion simply by calling people at home. But large segments of the population in developed countries have given up their landlines for mobile phones, which makes them more difficult for pollsters to reach because people often do not answer calls from unfamiliar numbers.

So the pollsters are fighting back. They are fine-tuning their methods for reaching people on mobile phones, using statistical tools to correct for biases and turning to online surveys. The increasing number of polls has prompted the formation of poll aggregators, such as FiveThirtyEight, RealClearPolitics and The Huffington Post, which combine and average the results to develop more nuanced forecasts.
“Polling’s going through a series of transitions. It’s more difficult to do now,” says Cliff Zukin, a political scientist at Rutgers University in New Brunswick, New Jersey. “The paradigm we’ve used since the 1960s has broken down and we’re evolving a new one to replace it — but we’re not there yet.”

The ingredients of an accurate poll are fairly simple, but they can be hard to find, and everyone uses a different recipe to pull them together. Start by recruiting a large group of people — preferably more than 1,000. The sample should be split evenly between women and men, and it should reflect the population’s mix in terms of race, education, income and geographical distribution, to represent these groups’ different views and voting behaviours. Once the data are in hand, pollsters analyse the gaps in their sample and weight the results to account for groups that are under-represented. “Polling is an art, but it’s largely a scientific endeavour,” says Michael Link, president and chief executive of the polling firm Abt SRBI in New York City and former president of the American Association for Public Opinion Research.

It’s also a process that is conducted behind closed doors. Polls are run by a mix of companies and academic groups, but they are generally commissioned by news organizations and political groups. As a result, pollsters rarely share the details of their techniques. “There’s a lot of people who make a living doing this, and whose reputations are set on it,” says Jill Darling, survey director at the University of Southern California’s Center for Economic and Social Research in Los Angeles.

The data-gathering part of polling used to be relatively easy in developed countries. Pollsters simply called people at home — at first by hand, and later with automatic diallers in the United States. But landlines are quickly going the way of the telegraph (see ‘The line on voters’).
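The weighting step described above can be sketched in a few lines of Python. All of the shares below are invented, illustrative numbers, not data from any real poll: each group's weight is simply its population share divided by its sample share, so under-represented groups count more toward the final estimate.

```python
# Minimal sketch of post-stratification weighting (illustrative numbers only).
# Weight per group = population share / sample share.

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # assumed population mix
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}   # who actually responded
support_in_group = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}   # candidate support by group

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# The unweighted estimate over-counts the 55+ group; weighting corrects for that.
unweighted = sum(sample_share[g] * support_in_group[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support_in_group[g] for g in sample_share)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

Real pollsters weight on many variables at once (race, education, income, region), but the principle is the same: scale each respondent so the sample mirrors the population.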
In 2008, more than eight in every ten US households had landlines; by 2015, that number had dropped to five in ten, and it continues to decline. In the United Kingdom, more people have landlines, but the fraction is dropping: as of this year, 53% of landline owners say that they never or rarely use them.

The mobile revolution has hit pollsters hard in the United States because federal regulations require that mobile phones be called manually. And people often do not answer calls to their mobiles when an unfamiliar number pops up. In 1997, pollsters could get a response rate of 36%, but that has dropped to just 10% or less now. As a result, pollsters are struggling to reach enough people, and costs are going up: each mobile-phone interview costs about twice as much as a landline one. There is also a ‘non-response bias’, because the people who respond to pollsters’ calls do not always reflect a representative sample, says Frederick Conrad, head of the Program in Survey Methodology at the University of Michigan in Ann Arbor.

Despite the expense and difficulty of calling people, this method still produces the most accurate results, says Courtney Kennedy, director of survey research at the Pew Research Center in Washington DC. US pollsters now call mobile phones for more than half of their samples, and that fraction will probably rise as more and more people ditch their landlines.

Pollsters are also grappling with another major problem — predicting who will vote. That is likely to be unusually difficult in the United States this year because many voters aren’t enamoured of the leading candidates, who have historically low approval ratings. US national elections typically have turnouts of 40–55%, lower than in most other developed countries, according to the Organisation for Economic Co-operation and Development. In the United Kingdom, by contrast, 60–70% of the eligible population usually votes.
Richer, older, better-educated people, and those who voted in the previous election, are more likely to vote, but this varies with each election. Pollsters typically base their estimates of turnout on a proprietary mix of factors: respondents' voting history, whether they are registered with a political party, their engagement with politics, whether they say they are planning to vote, as well as demographic and socioeconomic factors. “‘Likely voter’ modelling is notoriously the secret-sauce aspect of polling,” says Kennedy. It is also one of the most difficult parts of accurate polling.

In the 2014 mid-term US election, most pollsters overestimated how well Democratic candidates would do. Turnout was just 36%, the lowest in 70 years, which disproportionately depressed votes for Democrats. In the 2015 UK general election, most major pollsters, including ICM Unlimited and YouGov, underestimated the turnout of older, Conservative Party voters, according to an inquiry published in March by the British Polling Council and Market Research Society1.

The inquiry also found that pollsters have systematic biases in their samples: they tend to include too many Labour supporters at the expense of Conservative ones. The pollsters had applied weighting and adjustment procedures to the raw data, but these did not mitigate the bias. Another source of error identified in the report is “herding” — when pollsters consciously or unconsciously adjust their polls so that their results seem similar to those released earlier, causing the polls to converge.

The bias in favour of left-leaning parties is not unique to the United Kingdom. The inquiry analysed more than 30,000 polls from 45 countries and found a similar, although smaller, bias. The report did not explain why, but some pollsters in the United States and Britain attribute the trend to inaccurate predictions of who will turn up to vote.
In the case of the United Kingdom, the panel recommended that pollsters work to obtain more representative samples and investigate better ways to weight them. Pollsters are also trying to improve their accuracy by changing how they model likely voters. In the past, they treated their sample in a binary fashion, determining how many would turn out on election day and how many would stay at home. Now they tend to assign each respondent a probability of voting.

More transparency could help. Pollsters in the United Kingdom share their methodologies with the British Polling Council, a practice that aided the recent inquiry and has led to fruitful debates about ways to improve accuracy, says Fisher, who participated in the inquiry.

Even if polling organizations manage to collect a representative sample, they cannot always trust the responses that people give them. One of the starkest examples in the United States came in the 1982 election for California’s governor. Los Angeles Mayor Tom Bradley, an African American, was consistently leading in the polls but lost the election by a narrow margin. Afterwards, pollsters suggested that the discrepancy arose because some voters did not want to admit that they would not support an African American candidate. This is now known as the ‘Bradley effect’. A variation on this is the ‘shy Tory effect’, named after Conservative-leaning voters in the United Kingdom who hide their views or misreport their intentions to pollsters.

That makes some experts wonder whether a ‘shy Trump’ effect might come into play in the forthcoming US election, in which a fraction of voters may be embarrassed about or reluctant to admit their support for Trump or opposition to Clinton. But most leading pollsters doubt that this will be a significant factor, because polls before the Republican primary elections gauged support for Trump accurately and he has performed similarly in online polls and in those that use live interviews.
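The shift from a binary likely-voter screen to per-respondent turnout probabilities can be illustrated with a toy calculation. Everything below is hypothetical: the turnout probabilities and support values stand in for whatever proprietary model a pollster actually uses.

```python
# Sketch of probabilistic likely-voter weighting (all numbers hypothetical).
# Each respondent is (supports_candidate_A, estimated_probability_of_voting).

respondents = [
    (1, 0.9),  # voted last election, says "certain to vote"
    (0, 0.8),
    (1, 0.3),  # no voting history, low political engagement
    (0, 0.6),
    (1, 0.7),
]

# Old binary screen: respondents above a cutoff count fully, the rest not at all.
cutoff = 0.5
likely = [(s, p) for s, p in respondents if p >= cutoff]
binary_estimate = sum(s for s, _ in likely) / len(likely)

# Probabilistic model: every respondent contributes, weighted by turnout probability.
prob_estimate = sum(s * p for s, p in respondents) / sum(p for _, p in respondents)

print(f"binary screen: {binary_estimate:.2f}, probabilistic: {prob_estimate:.2f}")
```

The two approaches can disagree noticeably: here the binary screen throws away the low-engagement supporter entirely, while the probabilistic model keeps a fraction of that vote, which is exactly the kind of difference that matters in low-turnout elections.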
Advanced technology may allow pollsters to get a better read on voters’ true feelings. Online polls, for instance, allow people to respond at their convenience and state their intentions without fear of judgement from a live interviewer. They also make it easy to collect thousands of responses quickly and at lower cost: about US$30,000 for a 12-minute survey, as opposed to more than US$70,000 for a similar telephone one, says Chris Jackson, vice-president at Ipsos Public Affairs, a global market-research and polling firm in Washington DC.

But online polls have challenges, too. They typically recruit by advertising on popular websites, so people choose whether to participate, which can build bias into the samples. Pollsters do not know exactly who is missing from the poll, and it is harder to estimate the reliability of the final numbers.

Some pollsters have begun experimenting with polls conducted through text messages. As with online polls, people can respond whenever they want and avoid talking to a person. Michael Schober, a psychologist at The New School for Social Research in New York City, and his colleagues tested the differences between live and text interviews2. “The lack of time pressure and social pressure of texting leads people to disclose more information and be more honest,” he says.

Another approach is to assemble a panel of people to survey repeatedly. The most prominent is the University of Southern California Dornsife/Los Angeles Times Presidential Election tracking poll, which launched in July. These pollsters randomly selected people on the basis of information from the US Postal Service and contacted them by mail, recruiting 3,000 people to participate each week in online surveys. Unlike other polls, they need not continually recruit new respondents, and their response rate is at least 15%, higher than for telephone polls.
The pollsters have enough data to know the demographics of their sample very well and can have confidence in their trends, says Darling, who leads the survey. However, if the sample turns out to be biased, then every poll drawn from it will share that bias. This may be the case with this year’s poll, which leans slightly towards Trump, according to the aggregator FiveThirtyEight.

To reduce the risk of bias, researchers are experimenting with a new type of poll. Andrew Gelman, a statistician and political scientist at Columbia University in New York City, and his colleagues collected a very large pool of respondents and divided them into tens of thousands of demographic categories. The researchers tested this extreme categorization method on polling data from the 2012 US presidential election, showing that it produced accurate forecasts of state-level results by using highly tuned weights to correct for the non-representative sample3. This sophisticated method takes much more time and requires more detailed data than are usually gathered, but it could be a glimpse of the future.

‘Big data’ are where more accurate results will come from, says Joe Twyman, head of political and social research for Europe, the Middle East and Africa at YouGov. “It will be about linking a respondent’s voting data with Internet usage, other survey data, and demographic information, creating a much richer picture of that person, which will allow for more accurate granulations of predictions,” he says. Pollsters would use this information to assess who is likely to vote and to analyse the survey results — for example, by determining which issues most concern different voters.

The low cost of Internet polling has triggered a surge in the number of polls of varying quality, making it hard for journalists, policymakers and others to separate the wheat from the chaff.
Poll aggregators attempt to weight polls on the basis of their past reliability, but that does not guarantee future success, especially if low-quality and short-lived polling outfits are included in the mix.

Contrary to bold claims of the death of polls, practitioners say that polling is merely going through a transition. But pollsters do recognize that some of the barriers may prove insurmountable. As election seasons lengthen and people find more reasons to survey public opinion, the number of polls will continue to rise. Pollsters recognize that they can only ask so much of people, says Gelman. “There’s a non-renewable resource of public trust.”
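The reliability weighting that aggregators apply can be sketched as a weighted average. The polls, sample sizes and reliability scores below are all invented; real aggregators such as FiveThirtyEight use far richer rating systems, but the arithmetic at the core is of this form.

```python
# Sketch of a poll aggregator's weighted average (all inputs are made up).
# Each poll contributes in proportion to its sample size times a reliability score.

polls = [
    # (candidate_lead_pct, sample_size, reliability_score in [0, 1])
    (4.0, 1200, 0.9),   # established pollster with a strong track record
    (1.0,  800, 0.5),   # newer online panel
    (6.0,  500, 0.2),   # short-lived outfit with no track record
]

total_weight = sum(n * r for _, n, r in polls)
aggregate = sum(lead * n * r for lead, n, r in polls) / total_weight

print(f"aggregate lead: {aggregate:.2f} points")
```

Note how the outlier poll showing a 6-point lead barely moves the aggregate, because its low reliability score shrinks its weight; that is the sense in which aggregation smooths out noisy individual polls without guaranteeing accuracy.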
Newes-Adeyi G.,Abt Associates Inc. |
Greece J.,Abt Associates Inc. |
Bozeman S.,Abt Associates Inc. |
Walker D.K.,Abt Associates Inc. |
And 2 more authors.
Vaccine | Year: 2012
Objectives: We conducted a pilot study of the Integrated Vaccine Surveillance System (IVSS), a novel active surveillance system for monitoring influenza vaccine adverse events that could be used in mass vaccination settings. Methods: We recruited 605 adult vaccinees from a convenience sample of 12 influenza vaccine clinics conducted by public health departments of two U.S. metropolitan regions. Vaccinees provided daily reports on adverse reactions following immunization (AEFI) using an interactive voice response system (IVR) or the internet for 14 consecutive days following immunization. Follow-up with nonrespondents was conducted through computer-assisted telephone interviewing (CATI). Data on vaccinee reports were available real-time through a dedicated secure website. Results: 90% (545) of vaccinees made at least one daily report and 49% (299) reported consecutively for the full 14-day period. 58% (315) used internet, 20% (110) IVR, 6% (31) CATI, and 16% (89) used a combination for daily reports. Of the 545 reporters, 339 (62%) reported one or more AEFI, for a total of 594 AEFIs reported. The majority (505 or 85%) of these AEFIs were mild symptoms. Conclusions: It is feasible to develop a system to obtain real-time data on vaccine adverse events. Vaccinees are willing to provide daily reports for a considerable time post-vaccination. Offering multiple modes of reporting encourages high response rates. Study findings on AEFIs showed that the IVSS was able to exhibit the emerging safety profile of the 2008 seasonal influenza vaccine. © 2011 Elsevier Ltd.
Hernandez-Trujillo H.S.,Children's Hospital of Philadelphia |
Chapel H.,University of Oxford |
Lo Re III V.,University of Pennsylvania |
Notarangelo L.D.,Boston Children's Hospital |
And 8 more authors.
Clinical and Experimental Immunology | Year: 2012
Primary immunodeficiency diseases (PIDs) comprise a heterogeneous group of rare disorders. This study was devised in order to compare management of these diseases in the northern hemisphere, given the variability of practice among clinicians in North America. The members of two international societies for clinical immunologists were asked about their management protocols in relation to their PID practice. An anonymous internet questionnaire, used previously for a survey of the American Academy of Allergy, Asthma and Immunology (AAAAI), was offered to all full members of the European Society for Immunodeficiency (ESID). The replies were analysed in three groups, according to the proportion of PID patients in the practice of each respondent; this resulted in two groups from North America and one from Europe. The 123 responses from ESID members (23·7%) were, in the majority, very similar to those of AAAAI respondents, with >10% of their practice devoted to primary immunodeficiency. There were major differences between the responses of these two groups and those of the general AAAAI respondents whose clinical practice was composed of <10% of PID patients. These differences included the routine use of intravenous immunoglobulin therapy (IVIg) for particular types of PIDs, initial levels of IVIg doses, dosing intervals, routine use of prophylactic antibiotics, perceptions of the usefulness of subcutaneous immunoglobulin therapy (SCIg) and of the risk to patients' health of policies adopted by health-care funders. Differences in practice were identified and are discussed in terms of methods of health-care provision, which suggest future studies for ensuring continuation of appropriate levels of immunoglobulin replacement therapies. © 2012 The Authors. Clinical and Experimental Immunology © 2012 British Society for Immunology.
Acierno R.,Medical University of South Carolina |
Acierno R.,Ralph H. Johnson Veterans Affairs Medical Center |
Acierno R.,National Crime Victims Research and Treatment Center |
Hernandez M.A.,Simon Bolivar University of Venezuela |
And 5 more authors.
American Journal of Public Health | Year: 2010
Objectives. We estimated prevalence and assessed correlates of emotional, physical, sexual, and financial mistreatment and potential neglect (defined as an identified need for assistance that no one was actively addressing) of adults aged 60 years or older in a randomly selected national sample. Methods. We compiled a representative sample by random digit dialing across geographic strata. We used computer-assisted telephone interviewing to standardize collection of demographic, risk factor, and mistreatment data. We subjected prevalence estimates and mistreatment correlates to logistic regression. Results. We analyzed data from 5777 respondents. One-year prevalence was 4.6% for emotional abuse, 1.6% for physical abuse, 0.6% for sexual abuse, 5.1% for potential neglect, and 5.2% for current financial abuse by a family member. One in 10 respondents reported emotional, physical, or sexual mistreatment or potential neglect in the past year. The most consistent correlates of mistreatment across abuse types were low social support and previous traumatic event exposure. Conclusions. Our data showed that abuse of the elderly is prevalent. Addressing low social support with preventive interventions could have significant public health implications.
Clagett B.,University of Pennsylvania |
Nathanson K.L.,University of Pennsylvania |
Ciosek S.L.,University of Pennsylvania |
McDermoth M.,University of Pennsylvania |
And 5 more authors.
American Journal of Epidemiology | Year: 2013
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS. © 2013 The Author. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved.
Bollen K.A.,University of North Carolina at Chapel Hill |
Kolenikov S.,Abt SRBI |
Bauldry S.,University of Alabama at Birmingham
Psychometrika | Year: 2014
The common maximum likelihood (ML) estimator for structural equation models (SEMs) has optimal asymptotic properties under ideal conditions (e.g., correct structure, no excess kurtosis, etc.) that are rarely met in practice. This paper proposes model-implied instrumental variable - generalized method of moments (MIIV-GMM) estimators for latent variable SEMs that are more robust than ML to violations of both the model structure and distributional assumptions. Under less demanding assumptions, the MIIV-GMM estimators are consistent, asymptotically unbiased, asymptotically normal, and have an asymptotic covariance matrix. They are "distribution-free," robust to heteroscedasticity, and have overidentification goodness-of-fit J-tests with asymptotic chi-square distributions. In addition, MIIV-GMM estimators are "scalable" in that they can estimate and test the full model or any subset of equations, and hence allow better pinpointing of those parts of the model that fit and do not fit the data. An empirical example illustrates MIIV-GMM estimators. Two simulation studies explore their finite sample properties and find that they perform well across a range of sample sizes. © 2013 The Psychometric Society.
Wood R.A.,Johns Hopkins University |
Camargo Jr. C.A.,Massachusetts General Hospital |
Lieberman P.,University of Memphis |
Sampson H.A.,Mount Sinai School of Medicine |
And 7 more authors.
Journal of Allergy and Clinical Immunology | Year: 2014
Background Although anaphylaxis is recognized as an important life-threatening condition, data are limited regarding its prevalence and characteristics in the general population. Objective We sought to estimate the lifetime prevalence and overall characteristics of anaphylaxis. Methods Two nationwide, cross-sectional random-digit-dial surveys were conducted. The public survey included unselected adults, whereas the patient survey captured information from household members reporting a prior reaction to medications, foods, insect stings, or latex and idiopathic reactions in the previous 10 years. In both surveys standardized questionnaires queried anaphylaxis symptoms, treatments, knowledge, and behaviors. Results The public survey included 1,000 adults, of whom 7.7% (95% CI, 5.7% to 9.7%) reported a prior anaphylactic reaction. Using increasingly stringent criteria, we estimate that 5.1% (95% CI, 3.4% to 6.8%) and 1.6% (95% CI, 0.8% to 2.4%) had probable and very likely anaphylaxis, respectively. The patient survey included 1,059 respondents, of whom 344 reported a history of anaphylaxis. The most common triggers reported were medications (34%), foods (31%), and insect stings (20%). Forty-two percent sought treatment within 15 minutes of onset, 34% went to the hospital, 27% self-treated with antihistamines, 10% called 911, 11% self-administered epinephrine, and 6.4% received no treatment. Although most respondents with anaphylaxis reported 2 or more prior episodes (19% reporting ≥5 episodes), 52% had never received a self-injectable epinephrine prescription, and 60% did not currently have epinephrine available. Conclusions The prevalence of anaphylaxis in the general population is at least 1.6% and probably higher. Patients do not appear adequately equipped to deal with future episodes, indicating the need for public health initiatives to improve anaphylaxis recognition and treatment. © 2013 American Academy of Allergy, Asthma & Immunology.
Hebert P.L.,University of Washington |
Hebert P.L.,Puget Sound Medical Center |
Sisk J.E.,Mount Sinai School of Medicine |
Tuzzio L.,Group Health Research Institute |
And 6 more authors.
Journal of General Internal Medicine | Year: 2012
Background: Treated but uncontrolled hypertension is highly prevalent in African American and Hispanic communities. Objective: To test the effectiveness on blood pressure of home blood pressure monitors alone or in combination with follow-up by a nurse manager. Design: Randomized controlled effectiveness trial. Patients: Four hundred and sixteen African American or Hispanic patients with a history of uncontrolled hypertension. Patients with blood pressure of at least 150/95 mm Hg (or 140/85 mm Hg for patients with diabetes or renal disease) at enrollment were recruited from one community clinic and four hospital outpatient clinics in East and Central Harlem, New York City. Intervention: Patients were randomized to receive usual care or a home blood pressure monitor plus one in-person counseling session and 9 months of telephone follow-up with a registered nurse. During the trial, the home monitor alone arm was added. Main Measures: Change in systolic and diastolic blood pressure at 9 and 18 months. Key Results: Changes from baseline to 9 months in systolic blood pressure relative to usual care were -7.0 mm Hg (95% confidence interval [CI], -13.4 to -0.6) in the nurse management plus home blood pressure monitor arm, and +1.1 mm Hg (95% CI, -5.5 to 7.8) in the home blood pressure monitor only arm. No statistically significant differences in systolic blood pressure were observed among treatment arms at 18 months. No statistically significant improvements in diastolic blood pressure were found across treatment arms at 9 or 18 months. Changes in prescribing practices did not explain the decrease in blood pressure in the nurse management arm. Conclusions: A nurse management intervention combining an in-person visit, periodic phone calls, and home blood pressure monitoring over 9 months was associated with a statistically significant reduction in systolic, but not diastolic, blood pressure compared to usual care in a high-risk population.
Home blood pressure monitoring alone was no more effective than usual care. © 2011 Society of General Internal Medicine.
Ruggiero K.J.,Medical University of South Carolina |
Ruggiero K.J.,Ralph H. Johnson Veterans Affairs Medical Center |
Gros K.,Medical University of South Carolina |
McCauley J.L.,Medical University of South Carolina |
And 5 more authors.
Disaster Medicine and Public Health Preparedness | Year: 2012
Objective: To examine the mental health effects of Hurricane Ike, the third costliest hurricane in US history, which devastated the upper Texas coast in September 2008. Method: Structured telephone interviews assessing immediate effects of Hurricane Ike (damage, loss, displacement) and mental health diagnoses were administered via random digit-dial methods to a household probability sample of 255 Hurricane Ike-affected adults in Galveston and Chambers counties. Results: Three-fourths of respondents evacuated the area because of Hurricane Ike and nearly 40% were displaced for at least one week. Postdisaster mental health prevalence estimates were 5.9% for posttraumatic stress disorder, 4.5% for major depressive episode, and 9.3% for generalized anxiety disorder. Bivariate analyses suggested that peritraumatic indicators of hurricane exposure severity-such as lack of adequate clean clothing, electricity, food, money, transportation, or water for at least one week-were most consistently associated with mental health problems. Conclusions: The significant contribution of factors such as loss of housing, financial means, clothing, food, and water to the development and/or maintenance of negative mental health consequences highlights the importance of systemic postdisaster intervention resources targeted to meet basic needs in the postdisaster period. © 2012 American Medical Association.
Ruggiero K.J.,Medical University of South Carolina |
Ruggiero K.J.,Ralph H. Johnson Veterans Affairs Medical Center |
Resnick H.S.,Medical University of South Carolina |
Paul L.A.,Medical University of South Carolina |
And 6 more authors.
Contemporary Clinical Trials | Year: 2012
Disasters occur with high frequency throughout the world and increase risk for development of mental health problems in affected populations. Research focused on the development and evaluation of secondary prevention interventions addressing post-disaster mental health has high potential public-health impact. Toward this end, internet-based interventions (IBIs) are particularly attractive in that they: (1) offer a low-cost means of delivering standardized, targeted, personalized intervention content to a broad audience; and (2) are easily integrated within a stepped care approach to screening and service delivery. We describe a unique study design intended to evaluate an IBI with a disaster-affected population-based sample. Description and rationale are provided for sampling selection and procedures, selection of assessment measures and methods, design of the intervention, and statistical evaluation of critical outcomes. Unique features of this intervention include the use of a population-based sample, telephone and internet-based assessments, and development of a highly individualized web-based intervention. Challenges related to the development and large-scale evaluation of IBIs targeting post-disaster mental health problems, as well as implications for future research and practice are discussed. © 2011 Elsevier Inc.