Questionnaire Survey Research: What Works (Second Edition)

By Linda A. Suskie
Assistant to the President for Special Projects, Millersville University

Association for Institutional Research
Resources for Institutional Research, Number Six
Second Edition

©1992 Association for Institutional Research
©1996 (Second Edition) Association for Institutional Research
114 Stone Building, Florida State University, Tallahassee, Florida 32306-3038

All Rights Reserved. No portion of this book may be reproduced by any process, stored in a retrieval system, or transmitted in any form, or by any means, without the express written permission of the publisher. Printed in the United States.

To order additional copies, contact: AIR, 114 Stone Building, Florida State University, Tallahassee FL 32306-3038. Tel: 904/644-4470. Fax: 904/644-8824. E-Mail: [email protected]. Home Page: www.fsu.edu/~air/home.htm

ISBN 1-882393-05-8

Table of Contents

Preface ... vi

Chapter 1. Planning the Survey
  What is an ethical survey?
  Why are you doing a survey? ... 3
  What have others done on this topic? ... 5
  Are exit surveys worthwhile? ... 8
  Should you use a published questionnaire or design your own? ... 9
  Should you hire a contractor to do the survey or do it yourself? ... 11
  Whom will you survey? ... 12
  Do you need a sample or a census? ... 12
  How many people should you survey? ... 13
  How will you select your sample? ... 15
  What survey design should you use? ... 18
  Should the survey be anonymous? ... 18
  How much will it cost? ... 20
  Should your survey be reviewed by an institutional review board? ... 21
  When will you get it all done? ... 21
  How can you deal with institutional politics? ... 22

Chapter 2. Formats for Survey Questions ... 24
  Yes/no questions ... 24
  Multiple-choice questions ... 26
  Checklists ... 29
  Rankings ... 30
  Likert rating scales ... 33
  Semantic differential rating scales ... 36
  Ecosystem rating scales ... 37
  Other rating scales ... 39
  Open-ended questions ... 41

Chapter 3. Developing the Survey Instrument ... 44
  What goes into a good questionnaire item? ... 44
  Should you be concerned about the validity and reliability of your questionnaire? ... 51
  What is reliability? ... 52
  What kinds of reliability or consistency can you assess? ... 54
  How is reliability established? ... 55
  What is validity? ... 56
  How is validity established? ... 57
  How should the items be ordered in the questionnaire? ... 60
  What else goes into a questionnaire? ... 61
  How long should a questionnaire be? ... 63
  What should you keep in mind as you have the questionnaire typed and duplicated? ... 64
  What software support is available for preparing questionnaires? ... 65

Chapter 4. Conducting the Survey ... 69
  What is an acceptable response rate? ... 69
  How can you maximize your response rate? ... 70
  What is a questionnaire "package"? ... 73
  What should go into a cover letter? ... 73
  Do you need a return envelope? ... 77
  How should the questionnaire package be assembled? ... 78
  Why does the questionnaire need a pilot test? ... 78
  What is an advance mailing and is it worthwhile? ... 79
  When should the questionnaire package be mailed? ... 80
  What should be done as the completed questionnaires come back? ... 80
  Are follow-up mailings worth the time and expense? ... 80
  What form should follow-ups take? ... 81
  When should follow-ups be mailed? ... 82
  Can you infer anything about non-respondents? ... 82
  Why not administer the survey in person? ... 83
  How should you administer an in-person survey? ... 83
  Why not conduct a telephone survey? ... 84
  How should you conduct a telephone survey? ... 85
  Why not use focus groups? ... 86
  How should you conduct a focus group? ... 88
  What tasks can be delegated to someone else? ... 89
  What can a personal computer do? ... 90

Chapter 5. Processing the Survey Results ... 91
  What should you look for as you edit responses? ... 91
  What is coding? ... 92
  How should responses be coded? ... 92
  What general coding principles should be followed? ... 94
  How are coded data entered into the computer? ... 95
  What are coding sheets and how are they prepared? ... 96
  How can data entry instructions be put directly on a questionnaire? ... 97
  What is a code book? ... 99

Chapter 6. Analyzing the Survey Data ... 100
  Do you simply want to describe your group? ... 100
  How well do your sample results correspond to the views of the entire student body? ... 101
  How is the error margin calculated? ... 102
  Do you want to try to explain why your group responds or behaves in a certain way? ... 103
  Do you want to see if your group is different from an established norm or a theorized value? Or do you want to see if your sample is truly representative of your population? ... 104
  Do you want to see if two or more subgroups are different from each other? ... 105
  Do you want to see if two or more responses from your group are different? Or do you want to see if your group has changed over a period of time? ... 105
  Do you want to look for interrelationships among your group's responses? ... 106
  Do you want to study the relationship of several responses to one particular response or factor? ... 106
  Do you simply want to examine the relationship among responses to two or more questions? ... 107
  How should census data be analyzed? ... 108
  How should you check if your respondents represent the group from which they're taken? ... 109
  What else should you consider as you choose your analyses? ... 110
  How should the results of a statistical analysis be reported? ... 111
  What else should go into the data analysis computer program? ... 112
  What software is available to analyze the data? ... 113
  How should the data be stored? ... 114

Chapter 7. Reporting the Survey Results ... 115
  Who will read and use your findings? ... 115
  In what form do your readers need the findings? ... 116
  What are the components of a survey research report? ... 118
  How else can you keep your report interesting, readable, and useful? ... 124
  How can you make your tables and graphs effective? ... 130
  When should you use a graph rather than a table? ... 134
  How can you make an oral presentation interesting and effective? ... 135

Postscript: Reacting To Other Surveys ... 137

Appendices ... 140
  Appendix 1. Association for Institutional Research Code of Ethics ... 140
  Appendix 2. Sample time line for a survey research project ... 145
  Appendix 3. Examples of questionnaires using ecosystems ... 146
  Appendix 4. Other examples of questionnaire surveys

Recommended publications

Statistical Inference: How Reliable Is a Survey?

Math 203, Fall 2008: Statistical Inference: How reliable is a survey? Consider a survey with a single question, to which respondents are asked to give an answer of yes or no. Suppose you pick a random sample of n people, and you find that the proportion that answered yes is p̂. Question: How close is p̂ to the actual proportion p of people in the whole population who would have answered yes? In order for there to be a reliable answer to this question, the sample size, n, must be big enough so that the sample distribution is close to a bell-shaped curve (i.e., close to a normal distribution). But even if n is big enough that the distribution is close to a normal distribution, usually you need to make n even bigger in order to make sure your margin of error is reasonably small. Thus the first thing to do is to be sure n is big enough for the sample distribution to be close to normal. The industry standard for being close enough is for n to be big enough so that

    n(1 − p)/p > 9   and   np/(1 − p) > 9

both hold. When p is about 50%, n can be as small as 10, but when p gets close to 0 or close to 1, the sample size n needs to get bigger. If p is 1% or 99%, then n must be at least 892, for example. (Note also that n here depends on p but not on the size of the whole population.) See Figures 1 and 2 showing frequency histograms for the number of yes respondents if p = 1% when the sample size n is 10 versus 1000 (this data was obtained by running a computer simulation taking 10000 samples).
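The two normality conditions and the resulting minimum sample sizes can be checked mechanically. The sketch below is not from the handout; the function names are mine, and exact rational arithmetic (`fractions.Fraction`) is used so that comparisons right at the boundary (e.g. n = 891 at p = 1%) are not spoiled by floating-point rounding.

```python
import math
from fractions import Fraction

def normal_ok(n, p):
    """Rule of thumb from the text: the sampling distribution of p-hat is
    close enough to normal when n(1-p)/p > 9 and np/(1-p) > 9 both hold."""
    p = Fraction(p).limit_denominator()  # exact arithmetic at the boundary
    return n * (1 - p) / p > 9 and n * p / (1 - p) > 9

def min_sample_size(p):
    """Smallest n satisfying both conditions."""
    p = Fraction(p).limit_denominator()
    bound = 9 * max((1 - p) / p, p / (1 - p))  # n must strictly exceed this
    return math.floor(bound) + 1
```

This reproduces the figures quoted in the text: min_sample_size(0.5) is 10, and min_sample_size(0.01), like min_sample_size(0.99), is 892.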

Those Missing Values in Questionnaires

Those Missing Values in Questionnaires
John R. Gerlach, Maxim Group, Plymouth Meeting, PA
Cindy Garra, IMS HEALTH, Plymouth Meeting, PA

Abstract

Questionnaires are notorious for containing responses that later become missing values during processing and analyses. Contrary to a non-response that results in a typical, bona fide, missing value, questions might allow several alternative responses, necessarily outside the usual range of appropriate responses. Whether a question represents continuous or categorical data, a good questionnaire offers meaningful alternatives, such as: "Refused to Answer" and, of course, the quintessential "Don't Know." Traditionally, these alternative responses have numeric values such as 97, 998, or 9999 and, therefore, pose problems when trying to distinguish them from normal responses, especially when multiple missing values exist. This paper discusses missing values in SAS and techniques that facilitate the process of dealing with multi-valued, meaningful missing values often found in questionnaires.

These alternative values are not bona fide SAS missing values; consequently, a SAS procedure, expectedly, will include these alternative values, thereby producing bogus results. Typically, you can re-code the original data, so that the missing values become ordinary missing values, thus allowing SAS to process only appropriate values. Of course, there's a loss of information since the original missing values did not constitute a non-response. Also, such pre-processing might include a myriad of IF THEN / ELSE statements, which can be very tedious and time-consuming to write, and difficult to maintain. Thus, given a questionnaire that has over a hundred variables with varying levels of missing values, the task of re-coding these variables becomes very time consuming at best. Even worse, the permanent re-coding of alternative responses to ordinary missing numeric values in SAS precludes categorical analysis that requires the inclusion of these values.
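The paper's techniques are SAS-specific (SAS supports special missing values such as .A through .Z, which keep the categories distinguishable). Purely as a language-neutral illustration of the underlying idea of keeping "Refused" and "Don't Know" separate rather than collapsing them into one blank, here is a sketch in Python; the sentinel codes come from the abstract, but the labels and function are mine, not the authors'.

```python
# Sentinel codes from the abstract; the labels are illustrative.
SENTINELS = {97: "REFUSED", 998: "DONT_KNOW", 9999: "NOT_APPLICABLE"}

def recode(responses, sentinels=SENTINELS):
    """Split raw answers into analyzable values and labelled missing
    categories, so the reason for each missing value is preserved."""
    valid, reasons = [], {}
    for r in responses:
        if r in sentinels:
            label = sentinels[r]
            reasons[label] = reasons.get(label, 0) + 1
        else:
            valid.append(r)
    return valid, reasons
```

A downstream analysis can then compute statistics on `valid` while still reporting how many respondents refused versus didn't know.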

10 Questions Opinion Polls

10 questions you might have on opinion polls

1. What is an opinion poll?
An opinion poll is a survey carried out to measure views on a certain topic within a specific group of people. For example, the topic may relate to who Kenyans support in the presidential race, in which case the group of people interviewed will be registered voters.

2. How are interviewees for an opinion poll selected?
The group of people interviewed for an opinion poll is called a sample. As the name suggests, a sample is a group of people that represents the total population whose opinion is being surveyed. In a scientific opinion poll, everyone has an equal chance of being interviewed.

3. So how come I have never been interviewed for an opinion poll?
You have the same chance of being polled as anyone else living in Kenya. However, the chances of this are very small, estimated at about 1 in 14,000. This is because there are approximately 14 million registered voters in Kenya and, for practical and cost reasons, usually only between 1,000 and 2,000 people are interviewed for each survey carried out.

4. How can such a small group be representative of the entire population?
In order to ensure that the sample/survey group is representative of the population, the surveyors must ensure that the group reflects the characteristics of the whole. For instance, to get a general idea of who might win the Kenyan presidential election, only the views of registered voters in Kenya will be surveyed, as these are the people who will be able to influence the election.
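The "1 in 14,000" figure in question 3 is simply the ratio of sample size to population, which can be sketched in one line (the numbers are those given in the text):

```python
def selection_chance(sample_size, population):
    """Probability that any one member of the population ends up in a
    simple random sample of the given size."""
    return sample_size / population

# With roughly 14 million registered voters and a 1,000-person sample,
# the chance of being interviewed is about 1 in 14,000.
```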

SAMPLING DESIGN & WEIGHTING in the Original

Appendix A: Sampling Design & Weighting

In the original National Science Foundation grant, support was given for a modified probability sample. Samples for the 1972 through 1974 surveys followed this design. This modified probability design, described below, introduces the quota element at the block level. The NSF renewal grant, awarded for the 1975-1977 surveys, provided funds for a full probability sample design, a design which is acknowledged to be superior. Thus, having the wherewithal to shift to a full probability sample with predesignated respondents, the 1975 and 1976 studies were conducted with a transitional sample design, viz., one-half full probability and one-half block quota. The sample was divided into two parts for several reasons: 1) to provide data for possibly interesting methodological comparisons; and 2) on the chance that there were differences over time, so that it would be possible to assign these differences either to shifts in sample design or to changes in response patterns. For example, if the percentage of respondents who indicated that they were "very happy" increased by 10 percent between 1974 and 1976, it would be possible to determine whether it was due to changes in sample design or an actual increase in happiness. There is considerable controversy and ambiguity about the merits of these two samples. Textbook tests of significance assume full rather than modified probability samples, and simple random rather than clustered random samples. In general, the question of what to do with a mixture of samples is no more easily solved than the question of what to do with the "pure" types.
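The "very happy" example is a comparison of two sample percentages. A textbook two-proportion z test looks like the sketch below; note that this is exactly the kind of test the passage cautions about, since it assumes simple random rather than clustered samples and so would overstate precision for these designs.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent sample
    proportions, using the pooled estimate for the standard error.
    Caveat: assumes simple random samples (see the text's warning)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```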

Handbook of Recommended Practices for Questionnaire Development and Testing in the European Statistical System

Handbook of Recommended Practices for Questionnaire Development and Testing in the European Statistical System

Release year: 2006
Version 1

Authors: G. Brancato, S. Macchia, M. Murgia, M. Signore, G. Simeoni (Italian National Institute of Statistics, ISTAT); K. Blanke, T. Körner, A. Nimmergut (Federal Statistical Office Germany, FSO); P. Lima, R. Paulino (National Statistical Institute of Portugal, INE); J.H.P. Hoffmeyer-Zlotnik (German Center for Survey Research and Methodology, ZUMA)

Acknowledgements: We are grateful to the experts from the network countries who supported us in all relevant stages of the work: Anja Ahola, Dirkjan Beukenhorst, Trine Dale, Gustav Haraldsen. We also thank all colleagues from European and overseas NSIs who helped us in understanding the current practices and in the review of the draft version of the handbook.

Executive Summary

Questionnaires constitute the basis of every survey-based statistical measurement. They are by far the most important measurement instruments statisticians use to grasp the phenomena to be measured. Errors due to an insufficient questionnaire can hardly be compensated for at later stages of the data collection process. Therefore, having systematic questionnaire design and testing procedures in place is vital for data quality, particularly for minimising measurement error. Against this background, the Directors General of the members of the European Statistical System (ESS) stressed the importance of questionnaire design and testing in the European Statistics Code of Practice, endorsed in February 2005.
Principle 8 of the Code states that “appropriate statistical procedures, implemented from data collection to data validation, must underpin quality statistics.” One of the indicators referring to this principle requires that “questionnaires are systematically tested prior to the data collection.” Taking the Code of Practice as a starting point, this Recommended Practice Manual aims at further specifying the requirements of the Code of Practice.

Meta-Analysis of the Relationships Between Different Leadership Practices and Organizational, Teaming, Leader, and Employee Outcomes*

Journal of International Education and Leadership Volume 8 Issue 2 Fall 2018 http://www.jielusa.org/ ISSN: 2161-7252 Meta-Analysis of the Relationships Between Different Leadership Practices and Organizational, Teaming, Leader, and Employee Outcomes* Carl J. Dunst Orelena Hawks Puckett Institute Mary Beth Bruder University of Connecticut Health Center Deborah W. Hamby, Robin Howse, and Helen Wilkie Orelena Hawks Puckett Institute * This study was supported, in part, by funding from the U.S. Department of Education, Office of Special Education Programs (No. 325B120004) for the Early Childhood Personnel Center, University of Connecticut Health Center. The contents and opinions expressed, however, are those of the authors and do not necessarily reflect the policy or official position of either the Department or Office and no endorsement should be inferred or implied. The authors report no conflicts of interest. The meta-analysis described in this paper evaluated the relationships between 11 types of leadership practices and 7 organizational, teaming, leader, and employee outcomes. A main focus of analysis was whether the leadership practices were differentially related to the study outcomes. Studies were eligible for inclusion if the correlations between leadership subscale measures (rather than global measures of leadership) and outcomes of interest were reported. The random effects weighted average correlations between the independent and dependent measures were used as the sizes of effects for evaluating the influences of the leadership practices on the outcome measures. One hundred and twelve studies met the inclusion criteria and included 39,433 participants. The studies were conducted in 31 countries in different kinds of programs, organizations, companies, and businesses.
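The meta-analysis aggregates correlations across studies using random effects weighted averages. As a simplified sketch of how correlations are averaged with study-size weights (not the authors' exact procedure; a full random-effects model would also add an estimated between-study variance component, tau², to each weight), a common approach is the Fisher z transform with inverse-variance weights:

```python
import math

def weighted_mean_correlation(studies):
    """studies: iterable of (r, n) pairs. Transforms each correlation r to
    Fisher's z = atanh(r), weights by n - 3 (the inverse of z's approximate
    sampling variance), then back-transforms the weighted mean with tanh."""
    num = den = 0.0
    for r, n in studies:
        w = n - 3
        num += w * math.atanh(r)
        den += w
    return math.tanh(num / den)
```

With identical correlations the average is unchanged; with differing ones, larger studies pull the average toward their estimates.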

Summary of Human Subjects Protection Issues Related to Large Sample Surveys

Summary of Human Subjects Protection Issues Related to Large Sample Surveys
U.S. Department of Justice, Bureau of Justice Statistics
Joan E. Sieber
June 2001, NCJ 187692

U.S. Department of Justice, Office of Justice Programs; John Ashcroft, Attorney General. Bureau of Justice Statistics; Lawrence A. Greenfeld, Acting Director. Report of work performed under a BJS purchase order to Joan E. Sieber, Department of Psychology, California State University at Hayward, Hayward, California 94542, (510) 538-5424, e-mail [email protected]. The author acknowledges the assistance of Caroline Wolf Harlow, BJS Statistician and project monitor. Ellen Goldberg edited the document. Contents of this report do not necessarily reflect the views or policies of the Bureau of Justice Statistics or the Department of Justice. This report and others from the Bureau of Justice Statistics are available through the Internet: http://www.ojp.usdoj.gov/bjs

Table of Contents
1. Introduction ... 2
   Limitations of the Common Rule with respect to survey research ... 2
2. Risks and benefits of participation in sample surveys ... 5
   Standard risk issues, researcher responses, and IRB requirements ... 5
   Long-term consequences ... 6
   Background issues ... 6
3. Procedures to protect privacy and maintain confidentiality ... 9
   Standard issues and problems ... 9
   Confidentiality assurances and their consequences ... 21
   Emerging issues of privacy and confidentiality ... 22
4. Other procedures for minimizing risks and promoting benefits ... 23
   Identifying and minimizing risks ... 23
   Identifying and maximizing possible benefits ... 26
5. Procedures for responding to requests for help or assistance ... 28
   Standard procedures ... 28
   Background considerations ... 28
   A specific recommendation: An experiment within the survey ... 32
6.

Survey Experiments

IU Workshop in Methods – 2019
Survey Experiments: Testing Causality in Diverse Samples
Trenton D. Mize, Department of Sociology & Advanced Methodologies (AMAP), Purdue University

Contents
INTRODUCTION ... 8
  Overview ... 8
  What is a survey experiment? ... 9
  What is an experiment? ... 10
  Independent and dependent variables ... 11
  Experimental conditions ... 12
WHY CONDUCT A SURVEY EXPERIMENT? ... 13
  Internal, external, and construct validity ...

Evaluating Survey Questions Question

Chase H. Harrison, Ph.D., Program on Survey Research, Harvard University

What respondents do to answer a question:
- Comprehend the question
- Retrieve information from memory
- Summarize information
- Report an answer

Problems in answering survey questions:
- Failure to comprehend: if respondents don't understand the question, they cannot answer it; if different respondents understand the question differently, they end up answering different questions. There can also be problems with the researcher putting more emphasis on the subject than on the respondent.
- Failure to recall: questions assume respondents have information; if respondents never learned something, they cannot provide information about it.
- Problems summarizing: if respondents are thinking about a lot of things, they can summarize inconsistently; if the way the respondent remembers something doesn't readily correspond to the question, they may be inconsistent.
- Problems reporting answers: confusing or vague answer formats lead to variability; interactions with interviewers or technology can lead to problems (sensitive or embarrassing responses).

Evaluating survey questions:
- Early stage: focus groups to understand topics or dimensions of measures
- Pre-test stage: cognitive interviews to understand question meaning; pre-test under typical field ...

Focus groups:
- Qualitative research tool
- Used to develop ideas for questionnaires
- Used to understand the scope of issues

MRS Guidance on How to Read Opinion Polls

MRS guidance on how to read opinion polls
June 2016
www.mrs.org.uk

MRS has produced this Guidance Note to help individuals evaluate, understand and interpret opinion polls. This guidance is primarily for non-researchers who commission and/or use opinion polls. Researchers can use this guidance to support their understanding of the reporting rules contained within the MRS Code of Conduct.

Opinion Polls – The Essential Points

What is an opinion poll?
An opinion poll is a survey of public opinion obtained by questioning a representative sample of individuals selected from a clearly defined target audience or population. For example, it may be a survey of c. 1,000 UK adults aged 16 years and over. When conducted appropriately, opinion polls can add value to the national debate on topics of interest, including voting intentions. Typically, individuals or organisations commission a research organisation to undertake an opinion poll. The results of an opinion poll are either kept for private use or published.

What is sampling?
Opinion polls are carried out among a sub-set of a given target audience or population, and this sub-set is called a sample. Whilst the number included in a sample may differ, opinion poll samples are typically between c. 1,000 and 2,000 participants. When a sample is selected from a given target audience or population, the possibility of a sampling error is introduced. This is because the demographic profile of the sub-sample selected may not be identical to the profile of the target audience/population.
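For samples of the sizes mentioned (c. 1,000 to 2,000), sampling error can be approximated with the standard formula for a proportion from a simple random sample. This is a simplification: real polls use quotas, weighting and clustering, which change the effective precision, so treat the sketch below as illustrative only.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple
    random sample: z * sqrt(p(1-p)/n). Worst case is at p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 1,000 gives a worst-case margin of roughly 3.1 percentage points;
# n = 2,000 tightens that to roughly 2.2 points.
```

Note that quadrupling the sample only halves the margin, which is why poll sizes cluster in this range.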

The Evidence from World Values Survey Data

Munich Personal RePEc Archive
The return of religious Antisemitism? The evidence from World Values Survey data
Tausch, Arno
Innsbruck University and Corvinus University
17 November 2018
Online at https://mpra.ub.uni-muenchen.de/90093/
MPRA Paper No. 90093, posted 18 Nov 2018 03:28 UTC

Abstract
1) Background: This paper addresses the return of religious Antisemitism by a multivariate analysis of global opinion data from 28 countries. 2) Methods: For lack of any available alternative, we used the World Values Survey (WVS) Antisemitism study item: rejection of Jewish neighbors. It is closely correlated with the recent ADL-100 Index of Antisemitism for more than 100 countries. To test the combined effects of religion and background variables like gender, age, education, income and life satisfaction on Antisemitism, we applied the full range of multivariate analysis, including promax factor analysis and multiple OLS regression. 3) Results: Although religion as such still seems to be connected with the phenomenon of Antisemitism, intervening variables such as restrictive attitudes on gender and the religion-state relationship play an important role. Western Evangelical and Oriental Christianity, Islam, Hinduism and Buddhism perform badly on this account, and there is also a clear global North-South divide for these phenomena. 4) Conclusions: Challenging patriarchal gender ideologies and fundamentalist conceptions of the relationship between religion and state, which are important drivers of Antisemitism, will be an important task in the future. Multiculturalism must be aware of prejudice, patriarchy and religious fundamentalism in the global South.
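The multiple OLS regression mentioned in the methods is normally run in a statistics package. As a minimal, self-contained illustration of what OLS computes (a generic sketch, not the authors' code, and assuming a non-singular, well-conditioned design matrix):

```python
def ols(X, y):
    """Ordinary least squares coefficients via the normal equations
    (X'X) b = X'y, solved by Gauss-Jordan elimination (no pivoting, so
    X'X must be non-singular). Include a leading 1.0 in every row of X
    to fit an intercept."""
    k = len(X[0])
    # Build the augmented matrix [X'X | X'y].
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(k)]
    for i in range(k):
        piv = A[i][i]
        A[i] = [a / piv for a in A[i]]          # scale pivot row to 1
        for r in range(k):
            if r != i:                          # eliminate column i elsewhere
                A[r] = [a - A[r][i] * b for a, b in zip(A[r], A[i])]
    return [A[i][k] for i in range(k)]
```

For example, fitting y = 1 + 2x to the points (0, 1), (1, 3), (2, 5) recovers an intercept of 1 and a slope of 2.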

Development and Factor Analysis of a Questionnaire to Measure Patient Satisfaction with Injected and Inhaled Insulin for Type 1 Diabetes

Epidemiology/Health Services/Psychosocial Research
ORIGINAL ARTICLE

Development and Factor Analysis of a Questionnaire to Measure Patient Satisfaction With Injected and Inhaled Insulin for Type 1 Diabetes

Joseph C. Cappelleri, PhD, MPH; Robert A. Gerber, PharmD, MA; Ione A. Kourides, MD; Robert A. Gelfand, MD

OBJECTIVE — To develop a self-administered questionnaire to address alternative delivery routes of insulin and to investigate aspects of patient satisfaction that may be useful for subsequent assessment and comparison of an inhaled insulin regimen and a subcutaneous insulin regimen.

RESEARCH DESIGN AND METHODS — Attributes of patient treatment satisfaction with both inhaled and injected insulin therapy were derived from five qualitative research studies to arrive at a 15-item questionnaire. Each item was analyzed on a five-point Likert scale so that higher item scores indicated a more favorable attitude. There were 69 subjects with type 1 diabetes previously taking injected insulin therapy who were enrolled in a phase II clinical trial. Their baseline responses on the questionnaire were evaluated and subjected to an exploratory [factor analysis].

From the introduction: ... a system that permits noninvasive delivery of rapid-acting insulin was developed (Inhale Therapeutics Systems, San Carlos, CA) that offers an effective and well-tolerated alternative to preprandial insulin injections in type 1 diabetes (3). Inhaled insulin, intended for use in a preprandial therapeutic regimen, is the first practical alternative to injections for therapeutic administration of insulin. However, measures of treatment satisfaction in diabetes have not directly examined delivery routes for insulin other than by injection (e.g., syringe, pen, or pump) and were developed at a time when only injectable forms of insulin were readily available in clinical practice (4–7).
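A multi-item Likert instrument like this 15-item scale is usually summarized by a total score, and its internal consistency is commonly checked with Cronbach's alpha, a standard companion to the exploratory factor analysis the abstract mentions. The sketch below and its names are mine, not from the article.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Internal-consistency reliability of a multi-item scale.
    items: one list per questionnaire item, each holding the numeric
    Likert responses of the same respondents in the same order.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items)
                          / pvariance(totals))
```

Perfectly parallel items yield an alpha of 1.0, while items that add no shared variance pull alpha toward 0.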