Rockville, Maryland, United States

Agency: NSF | Branch: Standard Grant | Award Amount: 601.92K | Year: 2013

This project will examine mobile devices, specifically smartphones and tablet computers, as vehicles for survey data collection. The appeal of these devices for survey researchers is clear: because they are lightweight and relatively inexpensive, they make it easier to collect data using such existing survey modes as computer-assisted personal interviewing. The research will examine three issues raised by the use of such devices.

First, the input methods that these devices permit (such as touchscreen interfaces) are relatively unfamiliar to many users and may create response problems. Although these interfaces are sometimes used on laptops, tablets and smartphones require them, making usability concerns more central.

Second, the screens on tablets and smartphones are considerably smaller than those on laptop or desktop computers. Experiments on web surveys demonstrate the importance of visual prominence: any information that respondents need should be immediately visible to them without their having to perform any action (such as a mouse click) to make it visible. Even the need for an eye movement may effectively render information invisible. Because of the small screens on mobile devices, it may be much harder to make all of the potentially useful information visible to respondents than it is with a laptop or desktop computer.

The final issue is the perceived privacy of data collected on these devices. Respondents are willing to reveal sensitive information about themselves when a computer administers the questions, and web surveys seem to retain the advantages of earlier methods of computerized self-administration. But it is unclear whether respondents will display the same level of candor when the survey is administered over the Internet on a tablet computer or a smartphone.

Two realistic field experiments and a usability study will examine these issues. Both experiments will be conducted in a single, face-to-face survey.
The first experiment will compare laptop computers with tablets and smartphones and will examine the effects of both screen size and input method on breakoffs, missing data, completion times, and indicators of the quality of the responses. The second experiment will compare the same three data collection platforms as vehicles for collecting sensitive information. The experiment will ask respondents to assess the sensitivity of the questions, because item sensitivity may vary as a function of the device used to collect the data.

Surveys are a central tool for social scientists and policymakers, and survey research is a multi-billion-dollar industry in the United States alone. Any set of technological advances, such as the widespread adoption of smartphones and tablet computers, is likely to have a major impact on how surveys are done. Although mobile devices will be widely used for surveys regardless of whether this research is done, the work will produce practical guidelines for using such devices to collect survey data and will alert survey researchers to some of the potential pitfalls of these devices.

Agency: NSF | Branch: Continuing Grant | Award Amount: 71.27K | Year: 2011

Every request to take part in a survey is framed in some way. This project consists of a set of experiments that investigate how the presentation of the survey request affects nonresponse and measurement error. The experiments are guided by a theory of survey participation (leverage-saliency theory) that claims that people decide whether to take part in a survey based on whatever aspects of the survey are made salient in the presentation of the survey request and on how they evaluate those features. Two initial experiments randomly vary the description of the topic and sponsor of the survey, with hypothesized effects both on nonresponse propensities and on reporting. The third experiment examines survey design features that can mediate or reduce the error-producing influences of the survey topic and sponsor. Thus, the project experimentally tests mechanisms producing nonresponse bias and measurement errors and, once these effects have been documented, provides guidance to the survey practitioner about how to reduce their impact.

While the research is theoretically motivated and features experimental control, there are important practical implications of the work for the federal statistical agencies and the larger survey community. Sometimes estimates of key social indicators (e.g., the prevalence of rape or the frequency of defensive use of handguns) vary widely across surveys. The effects explored in this project may help explain these discrepancies. In addition, this work will a) help agencies conducting surveys anticipate when different sponsors may obtain different results, b) provide evidence about potentially harmful effects on nonresponse error and measurement error of emphasizing a single purpose of a survey, and c) produce evidence regarding design features that can reduce the effects of the presentation of the survey on nonresponse and measurement error.

Agency: NSF | Branch: Standard Grant | Phase: PROGRAM EVALUATION | Award Amount: 1.14M | Year: 2013

This research study examines the extent to which, and the ways in which, the first three cohorts of MSP Partnership projects have sustained the accomplishments made during their respective grant periods. The study addresses two interrelated research questions: 1) What strategies did the initial cohorts of MSPs use to sustain and nurture their outcomes beyond their NSF award? and 2) What were the mediators that either facilitated or hindered projects' efforts to sustain these outcomes? Guided by a logic model that lays out a proposed theory of sustainability, the study uses a mixed-methods approach to investigate factors influencing those changes at both the school district and Institution of Higher Education (IHE) levels. The study is conducted in two phases: Phase 1 involves document review, discussions with current and former NSF project officers, and interviews with PIs and Co-PIs to provide a broad-brush examination of project sustainability. Phase 2 includes case studies in a sample of projects to provide a fuller assessment of facilitators and hindrances. The specific approach for each case study will depend on the type of practice being sustained and the partners and participants involved in the practice.

Agency: NSF | Branch: Standard Grant | Phase: METHOD, MEASURE & STATS | Award Amount: 273.27K | Year: 2011

This research examines three forms of survey measurement error and investigates the relations among them. The first form of measurement error affects questions designed to identify members of the population eligible for a given survey (for example, persons over 65 years old). Several studies find that members of the eligible population are underreported in screening interviews. Although no survey perfectly covers its target population, surveys aimed at specific subpopulations seem especially prone to undercoverage of that particular population. The second form of measurement error involves filter questions. These are questions that, depending on how they are answered, either lead to additional follow-up questions or allow respondents to skip out of the follow-up items. Many survey researchers believe that respondents are likely to give false answers to the filter questions in order to avoid the follow-up questions. As a result, many surveys ask the filter questions at the beginning of the questionnaire and administer the follow-up questions later on, rather than interleaving the filter and follow-up questions. The final form of measurement error involves conditioning, or time-in-sample, effects. Over the last forty years, many survey researchers have suggested that respondents in ongoing panel surveys report fewer relevant events across waves of the panel survey and across time periods in a diary survey.

What the three phenomena appear to have in common is underreporting motivated by the desire to reduce the effort needed to complete the questionnaire. But it is not clear whether these forms of error result from something the interviewers do, something the respondents do, or both. The proposed studies use both new experiments and analyses of existing data to try to pinpoint the locus of these effects (interviewers versus respondents) and to explore the effectiveness of different methods for reducing these errors. The project will contribute to the improvement of various national statistics that are derived from survey items affected by these problems. The project also will further the training of graduate students and contribute to the professional training of survey researchers at both institutions. The research is supported by the Methodology, Measurement, and Statistics Program and a consortium of federal statistical agencies as part of a joint activity to support research on survey and statistical methodology.

Agency: NSF | Branch: Contract | Phase: RESEARCH & DEVELOPMENT STATIST | Award Amount: 3.19M | Year: 2011
