
Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

Samples

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size you need depends on the size of the population and on how precise you want your results to be. You can use an online sample size calculator to work out how many responses you need.
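If you prefer to script the calculation yourself, a common approach is Cochran's formula with a finite population correction. Below is a minimal Python sketch; the 95% confidence level, 5% margin of error, and population figures are illustrative assumptions, not fixed rules.

    import math

    def sample_size(population, z=1.96, margin_of_error=0.05, proportion=0.5):
        """Estimate the number of responses needed for a given population size."""
        # Cochran's formula for a very large population
        n0 = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
        # Finite population correction for smaller populations
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    print(sample_size(50_000))  # about 382 responses
    print(sample_size(500))     # about 218 responses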

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, the two should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First, you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.
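As a minimal sketch of this processing step, the following Python snippet uses the pandas library to drop incomplete or invalid responses; the file name and column name are hypothetical.

    import pandas as pd

    df = pd.read_csv("responses.csv")   # hypothetical export of raw responses

    df = df.dropna()                    # remove responses with unanswered questions

    # Remove responses where a 1-5 rating item was filled in incorrectly
    df = df[df["q1_rating"].between(1, 5)]

    print(len(df), "valid responses retained")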

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.
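Coding is normally done by hand or with qualitative-analysis software, but a toy keyword-matching sketch shows the idea; the themes and keywords here are invented for illustration.

    # Assign each open-ended answer to one or more hypothetical themes.
    themes = {
        "cost": ["price", "expensive", "cheap"],
        "service": ["staff", "helpful", "rude"],
    }

    def code_response(text):
        text = text.lower()
        matches = [theme for theme, words in themes.items()
                   if any(word in text for word in words)]
        return matches or ["other"]

    print(code_response("The staff were helpful but prices are too expensive"))
    # -> ['cost', 'service']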

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
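The same analyses can also be scripted. As one hedged example, this snippet uses pandas and SciPy to cross-tabulate two closed-ended items and run a chi-square test of independence; the tiny data set is made up purely for illustration.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical answers to two closed-ended questions
    df = pd.DataFrame({
        "owns_computer": ["yes", "yes", "no", "no", "yes", "no", "yes", "no"],
        "avoids_writing": ["no", "no", "yes", "yes", "no", "yes", "yes", "no"],
    })

    table = pd.crosstab(df["owns_computer"], df["avoids_writing"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")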

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
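To make the scoring concrete, here is a small Python sketch that combines several Likert items into an overall scale score, reverse-coding a negatively worded item first; the item names and responses are hypothetical.

    # Responses on a 5-point scale (1 = strongly disagree, 5 = strongly agree)
    responses = {"enjoy_writing": 4, "writing_is_fun": 5, "avoid_writing": 2}
    reverse_coded = {"avoid_writing"}   # negatively worded item

    score = sum(6 - value if item in reverse_coded else value
                for item, value in responses.items())
    print(score)  # 4 + 5 + (6 - 2) = 13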

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.



Conducting Survey Research

Surveys represent one of the most common types of quantitative, social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person being surveyed, an online questionnaire, a face-to-face interview, or a telephone interview. Using surveys, it is possible to collect data from large or small populations (sometimes referred to as the universe of a study).

Different types of surveys are actually composed of several research techniques, developed by a variety of disciplines. For instance, interviewing began as a tool primarily for psychologists and anthropologists, while sampling got its start in the field of agricultural economics (Angus and Katona, 1953, p. 15).

Survey research does not belong to any one field and it can be employed by almost any discipline. According to Angus and Katona, "It is this capacity for wide application and broad coverage which gives the survey technique its great usefulness..." (p. 16).

Types of Surveys

Surveys come in a wide range of forms and can be distributed using a variety of media.

  • Mail surveys
  • Group administered questionnaires
  • Drop-off surveys
  • Oral surveys
  • Electronic surveys


Example Survey

General Instructions: We are interested in your writing and computing experiences and attitudes. Please take a few minutes to complete this survey. In general, when you are presented with a scale next to a question, please put an X over the number that best corresponds to your answer. For example, if you strongly agreed with the following question, you might put an X through the number 5. If you agreed moderately, you might put an X through number 4, if you neither agreed nor disagreed, you might put an X through number 3.

Example Question (1 = Strongly Disagree, 5 = Strongly Agree):

I like to read magazines like TIME or Newsweek.    1    2    3    4    5

As is the case with all of the information we are collecting for our study, we will keep all the information you provide to us completely confidential. Your teacher will not be made aware of any of your responses. Thanks for your help.

Your Name: ___________________________________________________________

Your Instructor's Name: __________________________________________________

 

Expectations about Writing (1 = Very Little, 5 = Very Much):

1. In general, how much writing do you think will be required in your classes at CSU?
2. How much writing do you think you will be required to do after you graduate?
3. How important do you think writing will be to your career?

 

Grades (circle one: A, B, C, D, F):

4. In this class, I expect to receive a grade of . . .
5. In previous writing classes, I have usually received a grade of . . .

 

Attitudes about Writing (1 = Strongly Disagree, 5 = Strongly Agree):

6. Good writers are born, not made.
7. I avoid writing.
8. Some people have said, "Writing can be learned but it can't be taught." Do you believe it can be learned?
9. Do you believe writing can be taught?
10. Practice is the most important part of being a good writer.
11. I am able to express myself clearly in my writing.
12. Writing is a lot of fun.
13. Good teachers can help me become a better writer.
14. Talent is the most important part of being a good writer.
15. Anyone with at least average intelligence can learn to be a good writer.
16. I am no good at writing.
17. I enjoy writing.
18. Discussing my writing with others is an enjoyable experience.
19. Compared to other students, I am a good writer.
20. Teachers who have read my writing think I am a good writer.
21. Other students who have read my writing think I am a good writer.
22. My writing is easy to understand.

 

Experiences in Previous Writing Classes (1 = Strongly Disagree, 5 = Strongly Agree):

23. On some of my past writing assignments, I have been required to submit rough drafts of my papers.
24. I've taken some courses that focused primarily on spelling, grammar, and punctuation.
25. In previous writing classes, I've had to revise my papers.
26. Some of my former writing teachers were more interested in my ideas than in my spelling, punctuation, and grammar.
27. In some of my former writing classes, I've commented on other students' papers.
28. In some of my former writing classes, I spent a lot of time working in groups.
29. Some of my former teachers acted as though the most important part of writing was spelling, punctuation, and grammar.

Please indicate the TIMES PER MONTH or HOURS PER WEEK you engage in the following activities (circle one: 0, 1, 2, 3, 4+):

Writing Activities: How many TIMES PER MONTH do you ...

30. Write in your journal
31. Write poetry on your own
32. Write letters to friends or family
33. Write fiction
34. Write papers for class
35. Write for publication

Reading Activities: How many HOURS PER WEEK do you ...

36. Read the newspaper
37. Read fiction for pleasure
38. Read magazines
39. Read for class

 

Attitudes about Computers (1 = Strongly Disagree, 5 = Strongly Agree):

40. The challenge of learning about computers is exciting.
41. I am confident that I can learn computer skills.
42. Anyone can learn to use a computer if they are patient and motivated.
43. Learning to operate computers is like learning any new skill: the more you practice, the better you become.
44. I feel apprehensive about working with computers.
45. I have difficulty in understanding the technical aspects of computers.
46. It scares me to think that I could cause the computer to destroy a large amount of information by hitting the wrong key.
47. You have to be a genius to understand all the special commands used by most computer programs.
48. If given the opportunity, I would like to learn about and use computers.
49. I have avoided computers because they are unfamiliar and somewhat intimidating to me.
50. I feel computers are necessary tools in both educational and work settings.

51. I own my own computer. (No / Yes)
52. I don't own my own computer, but I regularly use my parents' or a friend's computer. (No / Yes)

Written Surveys

Imagine that you are interested in exploring the attitudes college students have about writing. Since it would be impossible to interview every student on campus, choosing the mail-out survey as your method would enable you to choose a large sample of college students. You might choose to limit your research to your own college or university, or you might extend your survey to several different institutions. If your research question demands it, the mail survey allows you to sample a very broad group of subjects at small cost.

Strengths and Weaknesses of Mail Surveys

Cost: Mail surveys are low in cost compared to other methods of surveying. This type of survey can cost up to 50% less than a telephone survey, and almost 75% less than a face-to-face survey (Bourque and Fielder, p. 9). Mail surveys are also substantially less expensive than drop-off and group-administered surveys.

Convenience: Since many of these types of surveys are conducted through a mail-in process, the participants are able to work on the surveys at their leisure.

Bias: Because the mail survey does not allow for personal contact between the researcher and the respondent, there is little chance that personal bias based on first impressions will alter the responses to the survey. This is an advantage because an unlikeable interviewer can unfavorably affect survey results. However, the lack of personal contact can be a disadvantage as well.

Sampling: It is possible to reach a greater population and have a larger universe (sample of respondents) with this type of survey because it does not require personal contact between the researcher and the respondents.

Low Response Rate: One of the biggest drawbacks of written surveys, especially the mail-in, self-administered kind, is the low response rate. Compared to a telephone survey or a face-to-face survey, the mail-in written survey has a response rate of just over 20%.

Ability of Respondent to Answer Survey: Another problem with self-administered surveys is threefold: they make assumptions about the physical ability, the literacy level, and the language ability of the respondents. Because most surveys draw their participants from a random sample, it is impossible to control for such variables. Some members of a survey group may have a primary language different from that of the survey. They may also be illiterate or have a low reading level and therefore might not be able to answer the questions accurately. Along the same lines, people with conditions that make reading difficult, such as dyslexia, visual impairment, or age-related vision loss, may not be able to complete the survey.

Imagine that you are interested in finding out how instructors who teach composition in computer classrooms at your university feel about the advantages of teaching in a computer classroom over a traditional classroom. You have a very specific population in mind, and so a mail-out survey would probably not be your best option. You might try an oral survey, but if you are doing this research alone this might be too time consuming. The group administered questionnaire would allow you to collect your survey results in one sitting and would ensure a very high response rate (higher than if you stuck a survey into each instructor's mailbox). Your challenge would be to get everyone together. Perhaps your department holds monthly technology support meetings that most of your chosen sample would attend. Your challenge at this point would be to get permission to use part of the meeting time to administer the survey, or to convince the instructors to stay and fill it out after the meeting. Despite the challenges, this type of survey might be the most efficient for your specific purposes.

Strengths and Weaknesses of Group Administered Questionnaires

Rate of Response: This second type of written survey is generally administered to a sample of respondents in a group setting, guaranteeing a high response rate.

Specificity: This type of written survey can be very versatile, allowing for a spectrum of open and closed ended types of questions and can serve a variety of specific purposes, particularly if you are trying to survey a very specific group of people.

Weaknesses of Group Administered Questionnaires

Sampling: This method is limited to small samples, so it is not the best choice for surveys that would benefit from a large sample. It is most useful in cases that call for very specific information from specific groups.

Scheduling: Since this method requires a group of respondents to answer the survey together, this method requires a slot of time that is convenient for all respondents.

Imagine that you would like to find out about how the dorm dwellers at your university feel about the lack of availability of vegetarian cuisine in their dorm dining halls. You have prepared a questionnaire that requires quite a few long answers, and since you suspect that the students in the dorms may not have the motivation to take the time to respond, you might want a chance to tell them about your research, the benefits that might come from their responses, and to answer their questions about your survey. To ensure the highest response rate, you would probably pick a time of the day when you are sure that the majority of the dorm residents are home, and then work your way from door to door. If you don't have time to interview the number of students you need in your sample, but you don't trust the response rate of mail surveys, the drop-off survey might be the best option for you.

Strengths and Weaknesses of Drop-off Surveys

Convenience: Like the mail survey, the drop-off survey allows the respondents to answer the survey at their own convenience.

Response Rates: The response rates for the drop-off survey are better than the mail survey because it allows the interviewer to make personal contact with the respondent, to explain the importance of the survey, and to answer any questions or concerns the respondent might have.

Time: Because of the personal contact this method requires, this method takes considerably more time than the mail survey.

Sampling: Because of the time it takes to make personal contact with the respondents, the universe of this kind of survey will be considerably smaller than the mail survey pool of respondents.

Response: The response rate for this type of survey, although considerably better than the mail survey, is still not as high as the response rate you will achieve with an oral survey.

Oral surveys are considered more personal forms of survey than the written or electronic methods. Oral surveys are generally used to get thorough opinions and impressions from the respondents.

Oral surveys can be administered in several different ways. For instance, in a group interview, as opposed to a group administered written survey, each respondent is not given an instrument (an individual questionnaire). Instead, the respondents work in groups to answer the questions together while one person takes notes for the whole group. Another more familiar form of oral survey is the phone survey. Phone surveys can be used to get short one word answers (yes/no), as well as longer answers.

Strengths and Weaknesses of Oral Surveys

Personal Contact: Oral surveys conducted either on the telephone or in person give the interviewer the ability to answer questions from the participant. If the participant, for example, does not understand a question or needs further explanation on a particular issue, it is possible to converse with the participant. According to Glastonbury and MacKean, "interviewing offers the flexibility to react to the respondent's situation, probe for more detail, seek more reflective replies and ask questions which are complex or personally intrusive" (p. 228).

Response Rate: Although obtaining a certain number of respondents who are willing to take the time to do an interview is difficult, the researcher has more control over the response rate in oral survey research than with other types of survey research. As opposed to mail surveys where the researcher must wait to see how many respondents actually answer and send back the survey, a researcher using oral surveys can, if the time and money are available, interview respondents until the required sample has been achieved.

Cost: The most obvious disadvantage of face-to-face and telephone surveys is the cost. It takes time to collect enough data for a complete survey, and time translates into payroll costs and sometimes payment for the participants.

Bias: Using face-to-face interview for your survey may also introduce bias, from either the interviewer or the interviewee.

Types of Questions Possible: Certain types of questions are not convenient for this type of survey, particularly for phone surveys where the respondent does not have a chance to look at the questionnaire. For instance, if you want to offer the respondent a choice of 5 different answers, it will be very difficult for respondents to remember all of the choices, as well as the question, without a visual reminder. This problem requires the researcher to take special care in constructing questions to be read aloud.

Attitude: Anyone who has ever been interrupted during dinner by a phone interviewer is aware of the negative feelings many people have about answering a phone survey. Upon receiving these calls, many potential respondents will simply hang up.

With the growth of the Internet (and in particular the World Wide Web) and the expanded use of electronic mail for business communication, the electronic survey is becoming a more widely used survey method. Electronic surveys can take many forms. They can be distributed as electronic mail messages sent to potential respondents. They can be posted as World Wide Web forms on the Internet. And they can be distributed via publicly available computers in high-traffic areas such as libraries and shopping malls. In many cases, electronic surveys are placed on laptops and respondents fill out a survey on a laptop computer rather than on paper.

Strengths and Weaknesses of Electronic Surveys

Cost-savings: It is less expensive to send questionnaires online than to pay for postage or for interviewers.

Ease of Editing/Analysis: It is easier to make changes to the questionnaire, and to copy and sort data.

Faster Transmission Time: Questionnaires can be delivered to recipients in seconds, rather than in days as with traditional mail.

Easy Use of Preletters: You can send advance invitations and receive responses within a very short time, giving you early estimates of participation levels.

Higher Response Rate: Research shows that response rates on private networks are higher with electronic surveys than with paper surveys or interviews.

More Candid Responses: Research shows that respondents may answer more honestly with electronic surveys than with paper surveys or interviews.

Potentially Quicker Response Time with Wider Magnitude of Coverage: Due to the speed of online networks, participants can answer in minutes or hours, and coverage can be global.

Sample Demographic Limitations: The population and sample are limited to those with access to a computer and an online network.

Lower Levels of Confidentiality: Due to the open nature of most online networks, it is difficult to guarantee anonymity and confidentiality.

Layout and Presentation Issues: Constructing the format of a computer questionnaire can be more difficult the first few times, due to a researcher's lack of experience.

Additional Orientation/Instructions: More instruction and orientation to the computer online systems may be necessary for respondents to complete the questionnaire.

Potential Technical Problems with Hardware and Software: As most of us (perhaps all of us) know all too well, computers have a much greater likelihood of "glitches" than oral or written forms of communication.

Response Rate: Even though research shows that e-mail response rates are higher, Oppermann (1995) warns that most of these studies found response rates higher only during the first few days; thereafter, the rates were not significantly higher.

Designing Surveys

Initial planning of the survey design and survey questions is extremely important in conducting survey research. Once surveying has begun, it is difficult or impossible to adjust the basic research questions under consideration or the tool used to address them since the instrument must remain stable in order to standardize the data set. This section provides information needed to construct an instrument that will satisfy basic validity and reliability issues. It also offers information about the important decisions you need to make concerning the types of questions you are going to use, as well as the content, wording, order and format of your survey questionnaire.

Overall Design Issues

Four key issues should be considered when designing a survey or questionnaire: respondent attitude, the nature of the items (or questions) on the survey, the cost of conducting the survey, and the suitability of the survey to your research questions.

Respondent attitude: When developing your survey instrument, it is important to try to put yourself into your target population's shoes. Think about how you might react when approached by a pollster while out shopping or when receiving a phone call from a pollster while you are sitting down to dinner. Think about how easy it is to throw away a response survey that you've received in the mail. When developing your instrument, it is important to choose the method you think will work for your research, but also one in which you have confidence. Ask yourself what kind of survey you, as a respondent, would be most apt to answer.

Nature of questions: It is important to consider the relationship between the medium that you use and the questions that you ask. For instance, certain types of questions are difficult to answer over the telephone. Think of the problems you would have in attempting to record responses to closed-ended Likert scale questions over the telephone, especially if a scale of more than five points is used. Responses to open-ended questions would also be difficult to record and report in telephone interviews.

Cost: Along with decisions about the nature of the questions you ask, expense issues also enter into your decision making when planning a survey. The population under consideration, the geographic distribution of this sample population, and the type of questionnaire used all affect costs.

Ability of instrument to meet needs of research question: Finally, there needs to be a logical link between your survey instrument and your research questions. If it is important to get a large number of responses from a broad sample of the population, you obviously will not choose to do a drop-off written survey or an in-person oral survey. Because of the size of the needed sample, you will need to choose a survey instrument that meets this need, such as a phone or mail survey. If you are interested in getting thorough information that might need a large amount of interaction between the interviewer and respondent, you will probably pick an in-person oral survey with a smaller sample of respondents. Your questions, then, will need to reflect both your research goals and your choice of medium.

Creating Questionnaire Questions

Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.

Open-ended vs. Closed-ended Questions

All researchers must make two basic decisions when designing a survey: 1) whether they are going to employ an oral, written, or electronic method, and 2) whether they are going to use questions that are open-ended or closed-ended.

Closed-Ended Questions: Closed-ended questions limit respondents' answers to the survey. The participants are allowed to choose from either a pre-existing set of dichotomous answers, such as yes/no, true/false, or multiple choice with an option for "other" to be filled in, or ranking scale response options. The most common of the ranking scale questions is called the Likert scale question. This kind of question asks the respondents to look at a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then "rank" this statement according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").
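Because every closed-ended answer falls into a fixed category, tallying the results is mechanical. Here is a short Python sketch with invented responses to the statement above:

    from collections import Counter

    # Hypothetical Likert responses to the statement in the paragraph above
    answers = ["strongly agree", "somewhat agree", "no opinion",
               "somewhat agree", "strongly disagree", "somewhat agree"]

    for option, count in Counter(answers).most_common():
        print(f"{option}: {count}")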

Open-Ended Questions: Open-ended questions do not give respondents answers to choose from, but rather are phrased so that the respondents are encouraged to explain their answers and reactions to the question with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to find information on the same topic as asked above (the future of elementary education), but would like to find out what respondents would come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic, but would still not like to limit the participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."

Note: Keep in mind that you do not have to use closed-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use closed-ended questions at the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed up."

Rating scales: ask respondents to rate something (an idea, concept, individual, program, product, etc.) on a closed-ended scale, usually a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and respondents are asked to indicate the degree to which they agree or disagree.

Ranking scales: ask respondents to rank a set of ideas or things, etc. For example, a researcher can provide respondents with a list of ice cream flavors, and then ask them to rank these flavors in order of which they like best, with the rank of "one" representing their favorite. These are more difficult to use than rating scales. They will take more time, and they cannot easily be used for phone surveys since they often require visual aids. However, since ranking scales are more difficult, they may actually increase appropriate effort from respondents.

Magnitude estimation scales: ask respondents to provide numeric estimation of answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, these scales have been found to help increase variance explanations over ordinal scaling.

Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.

Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.

Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or when they are not expected to have an articulate opinion regarding an issue.

Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.

Wording of Questions

The wording of survey questions is a tricky endeavor. It is difficult to develop shared meanings or definitions between researchers and respondents, and among respondents themselves.

In The Practice of Social Research, Keith Crew, a professor of Sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage which is closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).

This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.

To write effective questions, researchers need to keep in mind these four important techniques: directness, simplicity, specificity, and discreteness.

  • Questions should be written in a straightforward, direct language that is not caught up in complex rhetoric or syntax, or in a discipline's slang or lingo. Questions should be specifically tailored for a group of respondents.
  • Questions should be kept short and simple. Respondents should not be expected to learn new, complex information in order to answer questions.
  • Specific questions are for the most part better than general ones. Research shows that the more general a question is the wider the range of interpretation among respondents. To keep specific questions brief, researchers can sometimes use longer introductions that make the context, background, and purpose of the survey clear so that this information is not necessary to include in the actual questions.
  • Avoid questions that are overly personal or direct, especially when dealing with sensitive issues.

Content of Questions

When considering the content of your questionnaire, the most important consideration is obviously whether the content of the questions will elicit the kinds of answers necessary to address your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also consider the following questions as you are creating your initial questionnaire:

  • Does your choice of open-ended or closed-ended questions lead to the types of answers you would like to get from your respondents?
  • Is every question in your survey integral to your intent? Superfluous questions that have already been addressed or are not relevant to your study will waste the time of both the respondents and the researcher.
  • Does one topic warrant more than one question?
  • Do you give enough prior information/context for each set of questions? Sometimes lead-in questions are useful to help the respondent become familiar and comfortable with the topic.
  • Are the questions both general enough (standardized and relevant to your entire sample) and specific enough (avoiding vague generalizations and ambiguity)?
  • Is each question as succinct as it can be without leaving out essential information?
  • Finally, and most importantly, try to put yourself in your respondents' shoes. Write a survey that you would be willing to answer yourself, and be polite, courteous, and sensitive. Thank the respondent for participating both at the beginning and the end of the survey.

Order of Questions

Although there are no general rules for ordering survey questions, pretesting can help determine whether your ordering is effective. When setting up a questionnaire, researchers can also ask themselves a few questions:

  • Which topics should start the survey off, and which should wait until the end of the survey?
  • What kind of preparation do my respondents need for each question?
  • Do the questions move logically from one to the next, and do the topics lead up to each other?

The following general guidelines for ordering survey questions can address these questions:

  • Use warm-up questions. Easier questions will ease the respondent into the survey and will set the tone and the topic of the survey.
  • Sensitive questions should not appear at the beginning of the survey. Try to put the responder at ease before addressing uncomfortable issues. You may also prepare the reader for these sensitive questions with some sort of written preface.
  • Consider transition questions that make logical links.
  • Try not to mix topics. Topics can easily be placed into "sets" of questions.
  • Try not to put the most important questions last. Respondents may become bored or tired before they get to the end of the survey.
  • Be careful with contingency questions ("If you answered yes to the previous question . . . etc.").
  • If you are using a combination of open-ended and closed-ended questions, try not to start your survey with open-ended questions. Respondents will be more likely to answer the survey if they are allowed the ease of closed-ended questions first.

Borrowing Questions

Before developing a survey questionnaire, Converse and Presser (1986) recommend that researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. This will not only give you some ideas on how to develop your questionnaire, but you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, you will need to take care to only use questions that are relevant to your study, and you will usually have to develop some questions on your own.

Advantages of Closed-Ended Questions

  • Closed-ended questions are more easily analyzed. Every answer can be given a number or value so that a statistical interpretation can be assessed. Closed-ended questions are also better suited for computer analysis. If open-ended questions are analyzed quantitatively, the qualitative information is reduced to coding and answers tend to lose some of their initial meaning. Because of the simplicity of closed-ended questions, this kind of loss is not a problem.
  • Closed-ended questions can be more specific, thus more likely to communicate similar meanings. Because open-ended questions allow respondents to use their own words, it is difficult to compare the meanings of the responses.
  • In large-scale surveys, closed-ended questions take less time from the interviewer, the participant, and the researcher, and so are a less expensive survey method. The response rate is higher with surveys that use closed-ended questions than with those that use open-ended questions.

Advantages of Open-Ended Questions

  • Open-ended questions allow respondents to include more information, including feelings, attitudes and understanding of the subject. This allows researchers to better access the respondents' true feelings on an issue. Closed-ended questions, because of the simplicity and limit of the answers, may not offer the respondents choices that actually reflect their real feelings. Closed-ended questions also do not allow the respondent to explain that they do not understand the question or do not have an opinion on the issue.
  • Open-ended questions cut down on two types of response error: respondents are not likely to forget the answers they have to choose from if they are given the chance to respond freely, and open-ended questions simply do not allow respondents to skip reading the questions and just "fill in" the survey with all the same answers (such as ticking the "no" box on every question).
  • Because they allow for obtaining extra information from the respondent, such as demographic information (current employment, age, gender, etc.), surveys that use open-ended questions can be used more readily for secondary analysis by other researchers than can surveys that do not provide contextual information about the survey population.

Potential Problems with Survey Questions

While designing questions for a survey, researchers should be aware of a few problems and how to avoid them:

"Everyone has an opinion": It is incorrect to assume that each respondent has an opinion regarding every question. Therefore, you might offer a "no opinion" option to avoid this assumption. Filters can also be created. For example, researchers can ask respondents if they have any thoughts on an issue, to which they have the option to say "no."

Agree and disagree statements: According to Converse and Presser (1986), these statements suffer from "acquiescence", the tendency of respondents to agree despite question content (p. 35). Researchers can avoid this problem by converting these statements into forced-choice questions.

Response order bias: this occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate. Typically, the respondent chooses the last or first response option. This problem might occur if researchers use long lists and/or rating scales.

Response set: this problem can occur when using a close-ended question format with response options like yes/no or agree/disagree. Sometimes respondents do not consider each question and just answer no or disagree to all questions.

Telescoping: occurs when respondents report that an event took place more recently than it actually did. To avoid this problem, Frey and Mertens (1995) say researchers can use "aided recall", that is, a reference point or landmark, or a list of events or behaviors (p. 101).

Forward telescoping: occurs when respondents include events that actually happened before the established time frame. This results in overreporting. According to Converse and Presser (1986), researchers can use "bounded recall" to avoid this problem (p. 21). In bounded recall, researchers re-interview respondents several months after an initial interview and ask about events that have happened since then. This technique, however, requires more resources. Converse and Presser also note that simply narrowing the reference points used has been shown to reduce this problem.

Fatigue effect: happens when respondents grow bored or tired during the interview. To avoid this problem, Frey and Mertens (1995) say researchers can use transitions, vary questions and response options, and put easy-to-answer questions at the end of the questionnaire.

Types of Questions to Avoid

  • Double-barreled questions: force respondents to make two decisions in one. For example, a question like "Do you think women and children should be given the first available flu shots?" does not allow the respondent to choose between women and children.
  • Double negative questions: for example, "Please tell me whether or not you agree or disagree with this statement: Graduate teaching assistants should not be required to help students outside of class." Respondents may confuse the meaning of the disagree option.
  • Hypothetical questions: are typically too difficult for respondents since they require more scrutiny. For example, "If there were a cure for cancer, would you still support euthanasia?"
  • Ambiguous questions: respondents might not understand the question.
  • Biased questions: for example, "Don't you think that suffering terminal cancer patients should be allowed to be released from their pain?" Researchers should never try to make one response option look more suitable than another.
  • Questions with long lists: these questions may tire respondents, or respondents may lose track of the question.

Pretesting the Questionnaire

Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording and order.

There are two types of survey pretests: participating and undeclared .

  • Participating pretests dictate that you tell respondents that the pretest is a practice run; rather than asking the respondents to simply fill out the questionnaire, participating pretests usually involve an interview setting where respondents are asked to explain reactions to question form, wording and order. This kind of pretest will help you determine whether the questionnaire is understandable.
  • When conducting an undeclared pretest , you do not tell respondents that it is a pretest. The survey is given just as you intend to conduct it for real. This type of pretest allows you to check your choice of analysis and the standardization of your survey. According to Converse and Presser (1986), if researchers have the resources to do more than one pretest, it might be best to use a participatory pretest first, then an undeclared test.

General Applications of Pretesting:

Whether you use a participating or an undeclared pretest, pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if they have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.

Pretesting for reliability and validity:

Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered by respondents the same way each time. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with their answers in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Both convergent validity and divergent validity can be checked by first comparing answers to another question measuring the same concept, and then comparing the answer to the participant's response to a question that asks for the exact opposite.

For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
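As a rough sketch of the reliability check described above, the Python snippet below correlates the answers a small panel of respondents gave to the same question in two pretests; the numbers are invented, and a high correlation would suggest the question is being answered consistently.

    from scipy.stats import pearsonr

    # Hypothetical answers from seven respondents to the same 1-5 item,
    # collected in two separate pretests
    pretest_1 = [4, 2, 5, 3, 4, 1, 5]
    pretest_2 = [4, 3, 5, 3, 4, 2, 5]

    r, p = pearsonr(pretest_1, pretest_2)
    print(f"test-retest correlation: r = {r:.2f}")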

Conducting Surveys

Once you have constructed a questionnaire, you'll need to make a plan that outlines how and to whom you will administer it. There are a number of options available in order to find a relevant sample group amongst your survey population. In addition, there are various considerations involved with administering the survey itself.

Administering a Survey

This section attempts to answer the question: "How do I go about getting my questionnaire answered?"

For all types of surveys, some basic practicalities need to be considered before the surveying begins. For instance, you need to find the most convenient time to carry out the data collection (particularly important in interview surveying and group-administered surveys) and estimate how long the data collection is likely to take. Finally, you need to make practical arrangements for administering the survey. Pretesting your survey will help you determine the time it takes to administer, process, and analyze your survey, and will also help you work out some of the bugs.

Administering Written Surveys

Written surveys can be handled in several different ways. A research worker can deliver the questionnaires to the homes of the sample respondents, explain the study, and then pick the questionnaires up at a later date (or, alternatively, ask the respondents to mail the surveys back when completed). Another option is mailing questionnaires directly to homes and having researchers pick up and check the questionnaires for completeness in person. This method has been shown to produce higher response rates than straightforward mail surveys, although it tends to take more time and money to administer.

It is important to put yourself into the role of respondent when deciding how to administer your survey. Most of us have received and thrown away a mail survey, and so it may be useful to think back to the reasons you had for not filling it out and returning it. Here are some ideas for boosting your response rate:

  • Include in each questionnaire a letter of introduction and explanation, and a self-addressed, stamped envelope for returning the questionnaire.
  • Oftentimes, when it fits the study's budget, the envelope might also include a monetary "reward" (usually a dollar to five dollars) as an incentive to fill out the survey.
  • Another method for saving the respondent time is to create a self-mailing questionnaire that requires no envelope but folds easily so that the return address appears on the outside. The easier you make the process of completing and returning the survey, the better your survey results will be.
  • Follow-up mailings are an important part of administering mail surveys. Nonrespondents can be sent letters of additional encouragement to participate. Even better, a new copy of the survey can be sent to nonrespondents. Methodological literature suggests that three follow-up mailings are adequate, and that two to three weeks should be allowed between each mailing.

Administering Oral Surveys

Face-To-Face Surveys

Conducting oral surveys often requires a staff of interviewers; to control this variable as much as possible, the presentation and preparation of the interviewers are important considerations.

  • In any face-to-face interview, the appearance of the interviewer is important. Since the success of any survey relies on the interest of the participants to respond to the survey, the interviewer should take care to dress and act in such a way that would not offend the general sample population.
  • Of equal importance is the preparedness of the interviewer. The interviewer should be well acquainted with the questions, and have ample practice administering the survey with mock interviews. If several interviewers will be used, they should be trained as a group to ensure standardization and control. Interviewers also need to carry a letter of identification/authentication to present at in-person surveys.

When actually administering the survey, you need to make decisions about how much of the participants' responses to record, how much the interviewer will need to "probe" for responses, and how much the interviewer will need to account for context (the respondent's age, race, gender, reaction to the study, and so on). If you are administering a survey of closed-ended questions, these may not be considerations. When recording more open-ended responses, however, the researcher needs to decide beforehand on each of these factors:

  • Whether the interview should be recorded word for word or whether the interviewer should record general impressions and opinions depends on the purpose of the study. For the sake of precision, however, the former approach is preferred: more information is always better than less when it comes to analyzing the results.
  • Sometimes respondents will respond to a question with an inappropriate answer; this can happen with both open- and closed-question surveys. Even if you give the participant structured choices like "I agree" or "I disagree," they might respond "I think that is true," which might require the interviewer to probe for an appropriate answer. In an open-question survey, this probing becomes more challenging. The interviewer might come prepared with a set of potential probes to use if the respondent does not elaborate enough or strays from the subject. The nature of these probes, however, needs to be determined by the researcher rather than ad-libbed by the interviewers, and should be carefully controlled so that they do not lead the respondent to change answers.

Phone Surveys

Phone surveys involve all of the preparation of face-to-face surveys, but encounter new problems because of their reputation. It is much easier to hang up on a phone surveyor than to slam the door in someone's face, so the sheer number of calls needed to complete a survey can be baffling. Computer innovation has tempered this problem somewhat by allowing quick, random-digit dialing and by letting interviewers type answers directly into programs that automatically set up the data for analysis. Systems like CATI (computer-assisted telephone interviewing) have made phone surveys a more cost- and time-effective method, and therefore a popular one, although respondents are becoming more and more reluctant to answer phone surveys because of the increase in telemarketing.

Before conducting a survey, you must choose a relevant survey population. Unless that population is very small, it is usually impossible to survey all of it. Therefore, researchers usually survey a sample drawn from an actual list of the relevant population, called a sampling frame. With a carefully selected sample, researchers can make estimations or generalizations about an entire population's opinions, attitudes, or beliefs on a particular topic.

Sampling Procedures and Methods

There are two different types of sampling procedures: probability and nonprobability. Probability sampling methods ensure that each person in the population has a known, nonzero chance of being selected, whereas nonprobability methods target specific individuals. Nonprobability sampling methods include the following:

  • Purposive samples: to purposely select individuals to survey.
  • Volunteer subjects: to ask for volunteers to survey.
  • Haphazard sampling: to survey individuals who can be easily reached.
  • Quota sampling: to select individuals based on a set quota. For example, if a census indicates that more than half of the population is female, then the sample will be adjusted accordingly.

Clearly, there can be an inherent bias in nonprobability methods. Therefore, according to Weisberg, Krosnick, and Bowen (1989), it is not surprising that most survey researchers prefer probability sampling methods. Some commonly used probability sampling methods for surveys are listed below (a short code sketch follows the list):

  • Simple random sample: a sample is drawn randomly from a list of individuals in a population.
  • Systematic selection procedure sample: a variant of the simple random sample in which a random starting point is chosen and then every kth individual on the list is selected.
  • Stratified sample: dividing up the population into smaller groups, and randomly sampling from each group.
  • Cluster sample: dividing up a population into smaller groups, and then only sampling from one of the groups. Cluster sampling, according to Lee, Forthofer, and Lorimor (1989), "is considered a more practical approach to surveys because it samples by groups or clusters of elements rather than by individual elements" (p. 12). It also reduces interview costs. However, Weisberg et al. (1989) note that accuracy declines when using this sampling method.
  • Multistage sampling: first, sampling a set of geographic areas. Then, sampling a subset of areas within those areas, and so on.
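
A minimal sketch of how these selection procedures might be implemented, assuming a hypothetical sampling frame of 1,000 people (the strata and clusters are invented for illustration):

```python
# A minimal sketch of four common probability sampling methods.
import random

frame = [f"person_{i}" for i in range(1000)]  # hypothetical sampling frame

# Simple random sample: draw 100 members at random from the whole frame.
simple = random.sample(frame, 100)

# Systematic selection: random starting point, then every kth member.
k = len(frame) // 100
start = random.randrange(k)
systematic = frame[start::k]

# Stratified sample: divide the frame into groups, then sample within each.
strata = {"urban": frame[:600], "rural": frame[600:]}
stratified = [m for group in strata.values() for m in random.sample(group, 50)]

# Cluster sample: divide the frame into clusters, then sample whole clusters.
clusters = [frame[i:i + 100] for i in range(0, len(frame), 100)]
cluster_sample = random.choice(clusters)  # every member of one chosen cluster
```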

Sampling and Nonsampling Errors

Directly related to sample size are the concepts of sampling and nonsampling errors. According to Fox and Tracy (1986), surveys are subject to both sampling errors and nonsampling errors.

A sampling error arises from the fact that samples inevitably differ from their populations, so survey sample results should be seen only as estimations. Weisberg et al. (1989) note that sampling errors cannot be calculated for nonprobability samples, but they can be determined for probability samples. To determine sampling error, look first at the sample size, then at the sampling fraction (the percentage of the population that is being surveyed): the more people surveyed, the smaller the error. This error can also be reduced, according to Fox and Tracy (1986), by increasing the representativeness of the sample.

There are also two different kinds of nonsampling error: random and nonrandom errors. Fox and Tracy (1986) said random errors decrease the reliability of measurements; these errors can be reduced through repeated measurements. Nonrandom errors result from bias in survey data, which is connected to response and nonresponse bias.

Confidence Level and Interval

Any statement of sampling error must contain two essential components: the confidence level and the confidence interval. These two components are used together to express the accuracy of the sample's statistics in terms of the level of confidence that the statistics fall within a specified interval from the true population parameter. For example, a researcher may be "95 percent confident" that the sample statistic (that 50 percent favor candidate X) is within plus or minus 5 percentage points of the population parameter. In other words, the researcher is 95 percent confident that between 45 and 55 percent of the total population favor candidate X.
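
As a worked example of the candidate X figures above (using the standard large-sample approximation for a proportion, not a formula from the cited sources; the sample size is hypothetical):

```python
# A minimal sketch of the 95% confidence interval for a proportion:
# p +/- z * sqrt(p * (1 - p) / n), with z = 1.96 at the 95% level.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.50, 384  # 50% favor candidate X; 384 is a hypothetical sample size
moe = margin_of_error(p, n)
print(f"95% CI: {p - moe:.2f} to {p + moe:.2f}")  # roughly 0.45 to 0.55
```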

Lauer and Asher (1988) provide a table that gives the confidence interval limits for percentages based upon sample size (p. 58):

[Table: Sample Size and Confidence Interval Limits. 95% confidence intervals based on a population incidence of 50% and a large population relative to sample size; table values not reproduced here.]

Confidence Limits and Sample Size

When selecting a sample size, consider that surveying a higher number of individuals from a target group yields tighter confidence limits, while a lower number yields a looser range. The confidence limits may need to be corrected if, according to Lauer and Asher (1988), "the sample size starts to approach the population size" or if "the variable under scrutiny is known to have a much [original emphasis] smaller or larger occurrence than 50% in the whole population" (p. 59). For smaller populations, Singleton (1988) said the standard error or confidence interval should be multiplied by a correction factor equal to sqrt(1 - f), where f is the sampling fraction, or proportion of the population included in the sample.
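
A minimal sketch of that correction factor, with invented numbers:

```python
# Finite population correction: multiply the margin of error by
# sqrt(1 - f), where f = n / N is the sampling fraction.
import math

def corrected_margin(p: float, n: int, N: int, z: float = 1.96) -> float:
    moe = z * math.sqrt(p * (1 - p) / n)   # uncorrected margin of error
    f = n / N                              # sampling fraction
    return moe * math.sqrt(1 - f)          # corrected margin

# E.g., surveying 200 people out of a population of only 500:
print(f"{corrected_margin(0.5, 200, 500):.3f}")  # tighter than the uncorrected 0.069
```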

Lauer and Asher (1988) give a table of correction factors for confidence limits where sample size is an important part of population size (p. 60) and also a table of correction factors for where the percentage incidence of the parameter in the population is not 50% (p. 61).

Tables for Calculating Confidence Limits vs. Sample Size

[Table: Correction Factors for Confidence Limits When Sample Size (n) Is an Important Part of Population Size (N >= 100); for n over 70% of N, take all of N. From Lauer and Asher (1988, p. 60); values not reproduced here.]

[Table: Correction Factors for Rare and Common Percentage of Variables. From Lauer and Asher (1988, p. 61); values not reproduced here.]

Analyzing Survey Results

After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages. How you conduct these steps will depend on the scope of your study, your own capabilities, and the audience to whom you wish to direct the work.

Processing the Results

It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do an unlimited amount of analysis (p. 243).

Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, "The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables [regarding those produced by social science computer software]" (p. 100). Of course, editing may not always be necessary, for example if you are doing a qualitative analysis of open-ended questions, or if the survey is part of a larger project and gets distributed to other agencies for analysis. However, editing could be as simple as checking the information input into the computer.

All of this information should be used to test for statistical significance. See our guide on Statistics for more on this topic.

Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day and the cumulative number, or percentage. As each completed questionnaire is returned, it should be opened, scanned, and assigned an identification number.
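
A minimal sketch of such a response rate log (the mailing size and daily figures are invented):

```python
# Track daily and cumulative returns for a hypothetical mail-out of 500.
mailed_out = 500
daily_returns = [0, 12, 31, 45, 38, 22, 15, 9]  # day 0 = mailing day

total = 0
for day, returned in enumerate(daily_returns):
    total += returned  # cumulative number returned so far
    print(f"day {day}: +{returned}, cumulative {total} "
          f"({total / mailed_out:.0%} response rate)")
```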

Analyzing the Results

Before actually beginning the survey, the researcher should know how they want to analyze the data. As stated in the Processing the Results section, if you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because many different formulas are needed to properly analyze survey research and obtain statistical significance. Since computer programs have made the process of analyzing data vastly easier, it is sensible to choose this route. Be sure to pick your program before you design your survey: some programs require the data to be laid out in different ways.
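
A minimal sketch of what a code book might look like in practice (the item names and codes are hypothetical):

```python
# A code book maps each coded survey value to its meaning; establishing it
# before data collection keeps data entry and analysis consistent.
codebook = {
    "q1_agreement": {1: "strongly disagree", 2: "disagree", 3: "neutral",
                     4: "agree", 5: "strongly agree"},
    "q2_voted":     {0: "no", 1: "yes", 9: "no answer"},
}

response = {"q1_agreement": 4, "q2_voted": 9}  # one respondent's coded answers
decoded = {item: codebook[item][value] for item, value in response.items()}
print(decoded)  # {'q1_agreement': 'agree', 'q2_voted': 'no answer'}
```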

After the survey is conducted and the data collected, the results must be assembled in some usable format that allows comparison within the survey group, between groups, or both. The results can be analyzed in a number of ways. A t-test may be used to determine whether the scores of two groups differ on a single variable--whether writing ability differs among students in two classrooms, for instance. A matched t-test can be applied to determine whether the scores of the same participants differ under different conditions or over time. An ANOVA can be applied if the study compares multiple groups on one or more variables. Correlation measurements can also be constructed to compare the results of two interacting variables within the data set.
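
Each of these analyses corresponds to a standard statistical routine; a minimal sketch using the scipy library and hypothetical writing scores:

```python
from scipy import stats

class_a = [72, 85, 78, 90, 66, 81]  # hypothetical scores, classroom A
class_b = [68, 74, 71, 83, 62, 70]  # hypothetical scores, classroom B
class_c = [88, 92, 79, 95, 84, 90]  # hypothetical scores, classroom C

# T-test: do two groups differ on a single variable?
t, p = stats.ttest_ind(class_a, class_b)

# Matched t-test: do the same participants differ over time?
before = [70, 75, 80, 65, 90, 85]
after  = [74, 77, 86, 70, 91, 88]
t_m, p_m = stats.ttest_rel(before, after)

# ANOVA: do more than two groups differ?
f, p_f = stats.f_oneway(class_a, class_b, class_c)

# Correlation: how strongly do two variables move together?
r, p_r = stats.pearsonr(before, after)

print(f"t={t:.2f} (p={p:.3f}); paired t={t_m:.2f}; F={f:.2f}; r={r:.2f}")
```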

Secondary Analysis

Secondary analysis of survey data is an accepted methodology which applies previously collected survey data to new research questions. This methodology is particularly useful to researchers who do not have the time or money to conduct an extensive survey, but may be looking at questions for which some large survey has already collected relevant data. A number of books and chapters have been written about this methodology, some of which are listed in the annotated bibliography under "Secondary Analysis."

Advantages and Disadvantages of Using Secondary Analysis

Advantages:

  • Considerably cheaper and faster than doing original studies
  • You can benefit from the work of some of the top scholars in your field, which for the most part ensures quality data.
  • If you have limited funds and time, other surveys may have the advantage of samples drawn from larger populations.
  • How much you use previously collected data is flexible; you might only extract a few figures from a table, you might use the data in a subsidiary role in your research, or even in a central role.
  • A network of data archives in which survey data files are collected and distributed is readily available, making research for secondary analysis easily accessible.

Disadvantages

  • Since many surveys deal with national populations, if you are interested in studying a well-defined minority subgroup you will have a difficult time finding relevant data.
  • Secondary analysis can be used in irresponsible ways. If variables aren't exactly those you want, data can be manipulated and transformed in a way that might lessen the validity of the original research.
  • Much research, particularly of large samples, can involve large data files and difficult statistical packages.

Data-entry Packages Available for Survey Data Analysis

SNAP: Offers simple survey analysis and can help with the survey from start to finish, including the design of questions and questionnaires.

SPSS: The Statistical Package for the Social Sciences; can cope with most kinds of data.

SAS: A flexible general purpose statistical analysis system.

MINITAB: A very easy-to-use and fairly limited general purpose package for "beginners."

STATGRAPHS: General interactive statistical package with good graphics but not very flexible.

Reporting Survey Results

The final stage of the survey is to report your results. There is no established format for reporting a survey's results. The report may follow a pattern similar to formal experimental write-ups, the analysis may show up in pitches to advertising agencies (as with Arbitron data), or it may be presented in departmental meetings to aid curriculum arguments. A formal report might contain contextual information, a literature review, a presentation of the research question under investigation, information on survey participants, a section explaining how the survey was conducted, the survey instrument itself, a presentation of the quantified results, and a discussion of the results.

You can choose to represent your data graphically for easier interpretation by others outside your research project. You can use, for example, bar graphs, histograms, frequency polygons, pie charts, and contingency tables.

Commentary on Survey Research

In this section, we present several commentaries on survey research.

Strengths and Weaknesses of Surveys

  • Surveys are relatively inexpensive (especially self-administered surveys).
  • Surveys are useful in describing the characteristics of a large population. No other method of observation can provide this general capability.
  • They can be administered from remote locations using mail, email or telephone.
  • Consequently, very large samples are feasible, making the results statistically significant even when analyzing multiple variables.
  • Many questions can be asked about a given topic giving considerable flexibility to the analysis.
  • There is flexibility at the creation phase in deciding how the questions will be administered: as face-to-face interviews, by telephone, as group-administered written or oral surveys, or by electronic means.
  • Standardized questions make measurement more precise by enforcing uniform definitions upon the participants.
  • Standardization ensures that similar data can be collected from groups then interpreted comparatively (between-group study).
  • Usually, high reliability is easy to obtain: by presenting all subjects with a standardized stimulus, observer subjectivity is largely eliminated.

Weaknesses:

  • A methodology relying on standardization forces the researcher to develop questions general enough to be minimally appropriate for all respondents, possibly missing what is most appropriate to many respondents.
  • Surveys are inflexible in that they require the initial study design (the tool and administration of the tool) to remain unchanged throughout the data collection.
  • The researcher must ensure that a large number of the selected sample will reply.
  • It may be hard for participants to recall information or to tell the truth about a controversial question.
  • As opposed to direct observation, survey research (excluding some interview approaches) can seldom deal with "context."

Reliability and Validity

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity: since people's real feelings are hard to capture in such dichotomies as "agree/disagree," "support/oppose," or "like/dislike," these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, and content can also significantly reduce the subjects' own unreliability.

Ethical Considerations of Using Electronic Surveys

Because electronic mail is rapidly becoming such a large part of our communications system, this survey method deserves special attention. In particular, there are four basic ethical issues researchers should consider if they choose to use email surveys.

Sample Representativeness: Researchers who conduct surveys have an ethical obligation to use population samples that are inclusive of race, gender, and educational and income levels. If you choose to administer your survey by e-mail, this presents a serious problem: individuals who have access to personal computers, modems, and the Internet are not necessarily representative of the population. It is therefore suggested that researchers not use an e-mail survey when a more inclusive research method is available. If you do choose an e-mail survey because of its other advantages, consider including in your survey write-up a reminder of the limitations on sample representativeness that come with this method.

Data Analysis: Even though e-mail surveys tend to have greater response rates, researchers still do not necessarily know exactly who has responded. For example, some e-mail accounts are screened by an unintended viewer before they reach the intended viewer. This issue challenges the external validity of the study. According to Goree and Marszalek (1995), because of this challenge, "researchers should avoid using inferential analysis for electronic surveys" (p. 78).

Confidentiality versus Anonymity: An electronic response is never truly anonymous, since researchers know the respondents' e-mail addresses. According to Goree and Marszalek (1995), researchers are ethically required to guard the confidentiality of their respondents and to assure respondents that they will do so.

Responsible Quotation: It is considered acceptable for researchers to correct typographical or grammatical errors before quoting respondents since respondents do not have the ability to edit their responses. According to Goree and Marszalek (1995), researchers are also faced with the problem of "casual language" use common to electronic communication (p. 78). Casual language responses may be difficult to report within the formal language used in journal articles.

Response Rate Issues

Nonresponse and response rates are becoming increasingly important issues in survey research. According to Weisberg, Krosnick and Bowen (1989), in the 1950s it was not unusual for survey researchers to obtain response rates of 90 percent. Now, however, people are not as trusting of interviewers, and response rates are much lower--typically 70 percent or less. Today, even when survey researchers obtain high response rates, they still have to deal with many potential respondent problems.

Nonresponse Issues

Nonresponse Errors

Nonresponse is usually considered a source of bias in a survey, aptly called nonresponse bias. Nonresponse bias is a problem for almost every survey, as it arises from the fact that there are usually differences between the ideal sample pool of respondents and the sample that actually responds to a survey. According to Fox and Tracy (1986), "when these differences are related to criterion measures, the results may be misleading or even erroneous" (p. 9). For example, a response rate of only 40 or 50 percent creates problems of bias, since the results may reflect an inordinate percentage of a particular demographic portion of the sample. Variance estimates and confidence intervals also become greater as the sample size is reduced, and it becomes more difficult to construct confidence limits.

Nonresponse bias usually cannot be avoided, and so it inevitably affects most survey research by introducing error into statistical measurements. Researchers must therefore account for nonresponse either during the planning of their survey or during the analysis of their survey results. If you create a larger sample during the planning stage, confidence limits can then be based on the actual number of responses received.

Household-Level Determinants of Nonresponse

According to Couper and Groves (1996), reductions in nonresponse and its errors should be based on a theory of survey participation. This theory argues that a person's decision to participate in a survey generally occurs during the first moments of interaction with an interviewer or the survey text. According to Couper and Groves, four types of influences affect a potential respondent's decision of whether to cooperate in a survey: two factors that the researcher cannot control, the respondent's social environment and immediate household, and two factors that the researcher can control, the survey design and the interviewer.

To minimize nonresponse, Couper and Groves suggest that researchers manipulate the two factors they can control--the survey design and the interviewer.

Response Issues

Not only do survey researchers have to be concerned about nonresponse rate errors, but they also have to be concerned about the following potential response rate errors:

  • Response bias occurs when respondents deliberately falsify their responses. This error greatly jeopardizes the validity of a survey's measurements.
  • Response order bias occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate.
  • Response set bias occurs when respondents do not consider each question and just answer all the questions with the same response. For example, they answer "disagree" or "no" to all questions.

These response errors can seriously distort a survey's results. Unfortunately, according to Fox and Tracy (1986), response bias is difficult to eliminate; even if the same respondent is questioned repeatedly, he or she may continue to falsify responses. Response order bias and response set errors, however, can be reduced through careful development of the survey questionnaire.
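
Response set bias, for instance, can be screened for mechanically. A minimal sketch that flags "straight-lining" (the respondent IDs and 1-to-5 answers are invented):

```python
# Flag respondents whose answers show no variation across items, a
# common symptom of response set bias.
responses = {
    "r001": [4, 4, 4, 4, 4, 4],
    "r002": [2, 4, 3, 5, 1, 4],
    "r003": [1, 1, 1, 1, 1, 1],
}

flagged = [rid for rid, answers in responses.items() if len(set(answers)) == 1]
print(flagged)  # ['r001', 'r003'] warrant a closer look before analysis
```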

Satisficing

Related to the issue of response errors, especially response order bias and response bias, is the issue of satisficing. According to Krosnick, Narayan, and Smith (1996), satisficing is the notion that certain survey response patterns occur as respondents "shortcut the cognitive processes necessary for generating optimal answers" (p. 29). This theoretical perspective arises from the belief that most respondents are not highly motivated to answer a survey's questions, as reflected in declining response rates in recent years. Since many people are reluctant to be interviewed, it is presumptuous to assume that respondents will devote much effort to answering a survey.

The theoretical notion of satisficing can be further understood by considering what respondents must do to provide optimal answers. According to Krosnick et al. (1996), "respondents must carefully interpret the meaning of each question, search their memories extensively for all relevant information, integrate that information carefully into summary judgments, and respond in ways that convey those judgments' meanings as clearly and precisely as possible" (p. 31). Therefore, satisficing occurs when one or more of these cognitive steps is compromised.

Satisficing takes two forms: weak and strong. Weak satisficing occurs when respondents go through all of the cognitive steps necessary to provide optimal answers but are not as thorough in their cognitive processing. For example, respondents may answer a question with the first response that seems acceptable instead of generating an optimal answer. Strong satisficing, on the other hand, occurs when respondents omit the steps of judgment and retrieval altogether.

Even though they believe that not enough is known yet to offer suggestions on how to increase optimal respondent answers, Krosnick et al. (1996) argue that satisficing can be reduced by maximizing "respondent motivation" and by "minimizing task difficulty" in the survey questionnaire (p. 43).

Annotated Bibliography

General Survey Information:

Allan, Graham, & Skinner, Chris (Eds.) (1991). Handbook for Research Students in the Social Sciences. London: The Falmer Press.

This book is an excellent resource for anyone studying in the social sciences. It is not only well-written, but it is clear and concise with pertinent research information.

Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey (2nd). Burr Ridge, IL: Irwin.

Provides thorough, effective survey research guidelines and strategies for sponsors, information seekers, and researchers. In a very accessible, but comprehensive, format, this handbook includes checklists and guidelists within the text, bringing together all the different techniques and principles, skills and activities to do a "really effective survey."

Babbie, E.R. (1973). Survey research methods . Belmont, CA: Wadsworth.

A comprehensive overview of survey methods. Solid basic textbook on the subject.

Babbie, E.R. (1995). The practice of social research (7th). Belmont, CA: Wadsworth.

The reference of choice for many social science courses. An excellent overview of question construction, sampling, and survey methodology. Includes a fairly detailed critique of an example questionnaire. Also includes a good overview of statistics related to sampling.

Belson, W.A. (1986). Validity in survey research. Brookfield, VT: Gower.

Emphasis on construction of survey instrument to account for validity.

Bourque, Linda B., & Fiedler, Eve P. (1995). How to Conduct Self-Administered and Mail Surveys. Thousand Oaks, CA: Sage.

Contains current information on both self-administered and mail surveys. It is a great resource if you want to design your own survey; there are step-by-step methods for conducting these two types of surveys.

Bradburn, N.M., & Sudman, S. (1979). Improving interview method and questionnaire design . San Francisco: Jossey-Bass Publishers.

A good overview of polling. Includes setting up questionnaires and survey techniques.

Bradburn, N. M., & Sudman, S. (1988). Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.

These veteran survey researchers answer questions about survey research that are commonly asked by the general public.

Campbell, Angus A., & Katona, George. (1953). The Sample Survey: A Technique for Social Science Research. In Newcomb, Theodore M. (Ed.), Research Methods in the Behavioral Sciences (pp. 14-55). New York: The Dryden Press.

Includes information on all aspects of social science research. Some chapters in this book are outdated.

Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire . Newbury Park, CA: Sage.

A very helpful little publication that addresses the key issues in question construction.

Dillman, D.A. (1978). Mail and telephone surveys: The total design method . New York: John Wiley & Sons.

An overview of conducting telephone surveys.

Frey, James H., & Oishi, Sabine Mertens. (1995). How To Conduct Interviews By Telephone and In Person. Thousand Oaks, CA: Sage.

This book has a step-by-step breakdown of how to conduct and design telephone and in person interview surveys.

Fowler, Floyd J., Jr. (1993). Survey Research Methods (2nd). Newbury Park, CA: Sage.

An overview of survey research methods.

Fowler, F. J. Jr., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error . Newbury Park, CA: Sage.

Another aspect of validity/reliability--interviewer error.

Fox, J. & Tracy, P. (1986). Randomized Response: A Method for Sensitive Surveys . Beverly Hills, CA: Sage.

Authors provide a good discussion of response issues and methods of random response, especially for surveys with sensitive questions.

Frey, J. H. (1989). Survey research by telephone (2nd). Newbury Park, CA: Sage.

General overview to telephone polling.

Glock, Charles (ed.) (1967). Survey Research in the Social Sciences. New York: Russell Sage Foundation.

Although fairly outdated, this collection of essays is useful in illustrating the somewhat different ways in which different disciplines regard and use survey research.

Hoinville, G. & Jowell, R. (1978). Survey research practice . London: Heinemann.

Practical overview of the methods and procedures of survey research, particularly discussing problems which may arise.

Hyman, H. H. (1972). Secondary Analysis of Sample Surveys. New York: John Wiley & Sons.

This source is particularly useful for anyone attempting to do secondary analysis. It offers a comprehensive overview of this research method, and couches it within the broader context of social scientific research.

Hyman, H. H. (1955). Survey design and analysis: Principles, cases, and procedures . Glencoe, IL: Free Press.

According to Babbie, an oldie but goodie--a classic.

Jones, R. (1985). Research methods in the social and behavioral sciences . Sunderland, MA: Sinauer.

General introduction to methodology. Helpful section on survey research, especially the discussion on sampling.

Kalton, G. (1983). Compensating for missing survey data . Ann Arbor, MI: Survey Research Center, Institute for Social Research, the University of Michigan.

Addresses a problem often encountered in survey methodology.

Kish, L. (1965). Survey sampling . New York: John Wiley & Sons.

Classic text on sampling theories and procedures.

Lake, C.C., & Harper, P. C. (1987). Public opinion polling: A handbook for public interest and citizen advocacy groups . Washington, D.C.: Island Press.

Clearly written easy to read and follow guide for planning, conducting and analyzing public surveys. Presents material in a step-by-step fashion, including checklists, potential pitfalls and real-world examples and samples.

Lauer, J.M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford UP.

Excellent overview of a number of research methodologies applicable to composition studies. Includes a chapter on "Sampling and Surveys" and appendices on basic statistical methods and considerations.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Applied Social Research: Tool for the Human Services (2nd). Fort Worth, TX: Holt.

A good basic general research textbook which also includes sections on minority issues when doing research and the analysis of "available" or secondary data.

Rea, L. M., & Parker, R. A. (1992). Designing and conducting survey research: A comprehensive guide . San Francisco: Jossey-Bass.

Written for the social and behavioral sciences, public administration, and management.

Rossi, P.H., Wright, J.D., & Anderson, A.B. (eds.) (1983). Handbook of survey research . New York: Academic Press.

Handbook of quantitative studies in social relations.

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.

A how-to book written for the social sciences.

Sayer, Andrew. (1992). Methods in Social Science: A Realist Approach. London and New York: Routledge.

Gives a different perspective on social science research.

Schuldt, Barbara A., & Totten, Jeff W. (1994, Winter). Electronic Mail vs. Mail Survey Response Rates. Marketing Research, 6, 36-39.

An article with specific information for electronic and mail surveys. Mainly a technical resource.

Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys . New York: Academic Press.

Detailed analysis of research question wording and question order effects on respondents.

Schwarz, N., & Sudman, S. (1996). Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass.

Authors provide a summary of the latest research methods used for analyzing interpretive cognitive and communication processes in answering survey questions.

Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Explores the survey as a "social conversation" to investigate what answers mean in relation to how people understand the world and communicate.

Simon, J. (1969). Basic research methods in social science: The art of empirical investigation. New York: Random House.

An excellent discussion of survey analysis. The definitions and descriptions begin from a fairly understandable (simple) starting point, then the discussion unfolds to cover some fairly complex interpretive strategies.

Singleton, R. Jr., et al. (1988). Approaches to social research. New York: Oxford UP.

Has a very accessible chapter on sampling as well as a chapter on survey research.

Smith, Robert B. (Ed.) (1982). A Handbook of Social Science Methods, Volume 3. New York: Praeger.

There is a series of handbooks, each one with specific topics in social science research. A good technical resource, yet slightly dated.

Lee, E. S., Forthofer, R. N., & Lorimor, R. J. (1989). Analyzing complex survey data. Newbury Park, CA: Sage Publications.

Details on the statistical analysis of survey data.

Singer, E., & Presser, S., eds. (1989). Survey research methods: A reader . Chicago: U of Chicago P.

The essays in this volume originally appeared in various issues of Public Opinion Quarterly.

Survey Research Center (1983). Interviewer's manual . Ann Arbor, MI: University of Michigan Press.

Very practical, step-by-step guide to conducting a survey and interview with lots of examples to illustrate the process.

Pearson, R. W., & Boruch, R. F. (Eds.) (1986). Survey Research Designs: Towards a Better Understanding of Their Costs and Benefits. Berlin: Springer-Verlag.

Explains, in a technical fashion, the financial aspects of research design. Somewhat of a cost-analysis book.

Weisberg, H. F., Krosnick, J. A., & Bowen, B. D. (1989). An introduction to survey research and data analysis. Glenview, IL: Scott Foresman.

A good discussion of basic analysis and statistics, particularly what statistical applications are appropriate for particular kinds of data.

Anderson, B., Puur, A., Silver, B., Soova, H., & Voormann, R. (1994). Use of a lottery as an incentive for survey participation: a pilot survey in Estonia. International Journal of Public Opinion Research, 6 , 64-71.

Looks at return results in a study that offers incentives, and recommends incentive use to increase response rates.

Bare, J. (1994). Truth about daily fluctuations in 1992 pre-election polls. Newspaper Research Journal, 15, 73-81.

Comparison of variations between daily poll results of the major polls used during the 1992 American Presidential race.

Chi, S. (1993). Computer knowledge, interests, attitudes, and uses among faculty in two teachers' universities in China. DAI-A, 54/12 , 4412-4623.

Survey indicating a strong link between subject area and computer usage.

Cowans, J. (1994). Wielding the people: Opinion polls and the problem of legitimacy in France since 1944. DAI-A, 54/12 , 4556-5027.

Study looks at how the advent of opinion polling has affected the legitimacy of French governments since World War II.

Crewe, I. (1993). A nation of liars? Opinion polls and the 1992 election. Journal of the Market Research Society, 35 , 341-359.

Poses possible reasons the British polls were so wrong in predicting the outcomes of the 1992 national elections.

Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the teaching of English , 9 (3), 242-249.

Discussion of basics in question development and data analysis. Also includes some sample questions.

Daniell, S. (1993). Graduate teaching assistants' attitudes toward and responses to academic dishonesty. DAI-A,54/06, 2065- 2257.

Study explores the ethical and academic responses to cheating, using a large survey tool.

Mittal, B. (1994). Public assessment of TV advertising: Faint praise and harsh criticism. Journal of Advertising Research, 34, 35-53.

Results of a survey of Southern U.S. television viewers' perceptions of television advertisements.

Palmquist, M., & Young, R.E. (1992). Is writing a gift? The impact on students who believe it is. Reading empirical research studies: The rhetoric of research . Hayes et al. eds. Hillsdale NJ: Erlbaum.

This chapter presents results of a study of student beliefs about writing. Includes sample questions and data analysis.

Serow, R. C., & Bitting, P. F. (1995). National service as educational reform: A survey of student attitudes. Journal of research and development in education , 28 (2), 87-90.

This study assessed college students' attitude toward a national service program.

Stouffer, Samuel. (1955). Communism, Conformity, and Civil Liberties. New York: John Wiley & Sons.

This is a famous old survey worth examining. It examined the impact of McCarthyism on the attitudes of both the general public and community leaders, asking whether the repression of the early 1950s affected support for civil liberties.

Wanta, W. & Hu, Y. (1993). The agenda-setting effects of international news coverage: An examination of differing news frames. International Journal of Public Opinion Research, 5, 250-264.

Discusses results of Gallup polls on important problems in relation to the news coverage of international news.

Worcester, R. (1992). The performance of the political opinion polls in the 1992 British general election. Marketing and Research Today, 20, 256-263.

A critique of the use of polls in an attempt to predict voter actions.

Yamada, S., & Synodinos, N. (1994). Public opinion surveys in Japan. International Journal of Public Opinion Research, 6, 118-138.

Explores trends in opinion poll usage, response rates, and refusals in Japanese polls from 1975 to 1990.

Criticism/Critique/Evaluation:

Bangura, A. K. (1992). The limitations of survey research methods in assessing the problem of minority student retention in higher education . San Francisco: Mellen Research UP.

Case study done at a Maryland university addressing an aspect of validity involving intercultural factors.

Bateson, N. (1984). Data construction in social surveys. London: Allen & Unwin.

Tackles the theory of the method (but not the methods of the method) of data construction. Deals with the validity of the data by validating the process of data construction.

Braverman, M. (1996). Sources of Survey Error: Implications for Evaluation Studies. New Directions for Evaluation: Advances in Survey Research ,70, 17-28.

Looks at how evaluations using surveys can benefit from using survey design methods that reduce various survey errors.

Brehm, J. (1994). Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research, 6 , 45-63.

Considers whether incentives or the original contact letter lead to increased response rates.

Bulmer, M. (1977). Social-survey research. In M. Bulmer (ed.), Sociological research methods: An introduction . London: Macmillan.

The section includes discussions of pros and cons of survey research findings, inferences and interpreting relationships found in social-survey analysis.

Couper, M., & Groves, R. (1996). Household-Level Determinants of Survey Nonresponse. New Directions for Evaluation: Advances in Survey Research, 70, 63-80.

Authors discuss their theory of survey participation. They believe that decisions to participate are based on two occurrences: interactions with the interviewer, and the sociodemographic characteristics of respondents.

Couto, R. (1987). Participatory research: Methodology and critique. Clinical Sociology Review, 5 , 83-90.

Criticism of survey research. Addresses knowledge/power/change issues through the critique.

Dillman, D., Sangster, R., Tarnai, J., & Rockwood, T. (1996) Understanding Differences in People's Answers to Telephone and Mail Surveys. New Directions for Evaluation: Advances in Survey Research , 70, 45-62.

Explores the issue of differences in respondents' answers in telephone and mail surveys, which can affect a survey's results.

Esaiasson, P. & Granberg, D. (1993). Hidden negativism: Evaluation of Swedish parties and their leaders under different survey methods. International Journal of Public Opinion Research, 5, 265-277.

Compares varying results of mailed questionnaires vs. telephone and personal interviews. Findings indicate methodology affected results.

Guastello, S. & Rieke, M. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.

Looks at the use of honesty, or integrity, testing to predict theft by employees, questioning further use of the tests due to extremely low validity. Social and legal implications are also considered.

Hamilton, R. (1991). Work and leisure: On the reporting of poll results. Public Opinion Quarterly, 55 , 347-356.

Looks at methodology changes that affected reports of results in the Harris poll on American Leisure.

Juster, F. & Stanford, F. (1991). Comment on work and leisure: On reporting of poll results. Public Opinion Quarterly, 55 , 357-359.

Rebuttal of the Hamilton essay, cited above. The rebuttal is based upon statistical interpretation methods used in the cited survey.

Krosnick, J., Narayan, S., & Smith, W. (1996). Satisficing in Surveys: Initial Evidence. New Directions in Evaluation: Advances in Survey Research , 70, 29-44.

Authors discuss "satisficing," a cognitive approach to survey response, which they believe helps researchers understand how survey respondents arrive at their answers.

Lindsey, J.K. (1973). Inferences from sociological survey data: A unified approach . San Francisco: Jossey-Bass.

Examines the statistical analysis of survey data.

Morgan, F. (1990). Judicial standards for survey research: An update and guidelines. Journal of Marketing, 54 , 59-70.

Looks at legal use of survey information as defined and limited in recent cases. Excellent definitions.

Pottick, K. (1990). Testing the underclass concept by surveying attitudes and behavior. Journal of Sociology and Social Welfare, 17, 117-125.

Review of definitional tests constructed to define "underclass."

Rohme, N. (1992). The state of the art of public opinion polling worldwide. Marketing and Research Today, 20, 264-271.

A quick review of the use of polling in several countries, concluding that the use of polling is on the rise worldwide.

Sabatelli, R. (1988). Measurement issues in marital research: A review and critique of contemporary survey instruments. Journal of Marriage and the Family, 55 , 891-915.

Examines issues of methodology.

Schriesheim, C. A., & DeNisi, A. S. (1980). Item Presentation as an Influence on Questionnaire Validity: A Field Experiment. Educational and Psychological Measurement, 40(1), 175-82.

Two types of questionnaire formats measuring leadership variables were examined: one with items measuring the same dimensions grouped together and the second with items measuring the same dimensions distributed randomly. The random condition showed superior validity.

Smith, T. (1990). "A critique of the Kinsey Institute/Roper organization national sex knowledge survey." Public Opinion Quarterly, Vol. 55 , 449-457.

Questions validity of the survey based upon question selection and response interpretations. A rejoinder follows, defending the poll.

Smith, Tom W. (1990). "The First Straw? A Study of the Origins of Election Polls," Public Opinion Quarterly, Vol. 54 (Spring: 21-36).

This article offers a look at the early history of American political polling, with special attention to media reactions to the polls. This is an interesting source for anyone interested in the ethical issues surrounding polling and survey.

Sniderman, P. (1986). Reflections on American racism. Journal of Social Issues, 42 , 173-187.

Rebuttal of critique of racism research. Addresses issues of bias and motive attribution.

Stanfield, J. H. II, & Dennis, R. M., eds (1993). Race and Ethnicity in Research Methods . Newbury Park, CA: Sage.

The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted.

Stapel, J. (1993). Public opinion polling: Some perspectives in response to 'critical perspectives.' International Journal of Public Opinion Research, 5, 193-194.

Discussion of the moral power of polling results.

Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of their validity . San Diego: Academic Press.

Reviews and analyzes data from studies that have, through the use of external criteria, assessed the validity of individuals' responses to questions concerning personal characteristics and behavior in a wide variety of areas.

Williams, R. M., Jr. (1989). "The American Soldier: An Assessment, Several Wars Later." Public Opinion Quarterly. Vol. 53 (Summer: 155-174).

One of the classic studies in the history of survey research is reviewed by one of its authors.

Secondary Analysis:

Jolliffe, F.R. (1986). Survey Design and Analysis. Chichester: Ellis Horwood.

Information about survey design as well as secondary analysis of surveys.

Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data . Beverly Hills, CA: Sage.

Discussion of how to use previously collected survey data to answer a new research question.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Analysis of available data. In Applied Social Research: Tool for the Human Services (2nd ed., pp. 202-230). Fort Worth, TX: Holt.

Gives some existing sources for statistical data as well as discussing ways in which to use it.

Rubin, A. (1988). Secondary analyses. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. (3rd ed., pp. 323-341). Itasca, IL: Peacock.

Chapter discusses inductive and deductive processes in relation to research designs using secondary data. It also discusses methodological issues and presents a case example.

Dale, A., Arber, S., & Procter, M. (1988). Doing Secondary Analysis . London: Unwin Hyman.

A whole book about how to do secondary analysis.

Electronic Surveys:

Carr, H. H. (1991). Is using computer-based questionnaires better than using paper? Journal of Systems Management, September, 19, 37.

Reference from Thach.

Dunnington, Richard A. (1993). New methods and technologies in the organizational survey process. American Behavioral Scientist , 36 (4), 512-30.

Asserts that three decades of technological advancements in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential.

Goree, C. & Marszalek, J. (1995). Electronic Surveys: Ethical Issues for Researchers. The College Student Affairs Journal , 15 (1), 75-79.

Explores how the use of electronic surveys challenges existing ethical standards of survey research, and argues that researchers need to be aware of these new ethical issues.

Hsu, J. (1995). The Development of Electronic Surveys: A Computer Language-Based Method. The Electronic Library , 13 (3), 195-201.

Discusses the need for a markup language method to properly support the creation of survey questionnaires.

Kiesler, S. & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50 , 402-13.

Oppermann, M. (1995). E-Mail Surveys--Potentials and Pitfalls. Marketing Research, 7(3), 29-33.

A discussion of the advantages and disadvantages of using E-Mail surveys.

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-69.

Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology & Marketing, 5 (2), 117-137.

Thach, Liz. (1995). Using electronic mail to conduct survey research. Educational Technology, 35, 27-31.

A review of the literature on the topic of survey research via electronic mail concentrating on the key issues in design, implementation, and response using this medium.

Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.

Further Investigation

Berg, David N., & Smith, Kenwyn K. (Eds.) (1988). The Self in Social Inquiry: Researching Methods. Newbury Park, CA: Sage.

Addresses ethical issues about the role of the researcher in social science research.

Barribeau, Paul, Bonnie Butler, Jeff Corney, Megan Doney, Jennifer Gault, Jane Gordon, Randy Fetzer, Allyson Klein, Cathy Ackerson Rogers, Irene F. Stein, Carroll Steiner, Heather Urschel, Theresa Waggoner, & Mike Palmquist. (2005). Survey Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=68

How to create a good research survey (step-by-step guide)

Surveys are one of the most widely used research methods today. Technology has made it easier to reach participants, and it has likewise simplified the processing of the data that surveys collect.

Although preparing a research survey is easy, it helps to keep some crucial points in mind. If you are about to prepare a survey for research, such as market research or academic research, this section provides useful tips and examples to get you started.

What is a research survey?

A research survey is a survey created to collect data for market, academic, social, political, or psychological research and delivered to participants online, on paper, or by phone. The most effective of these delivery methods is, of course, the online survey.

  • Market research : Surveys used by businesses and organizations to analyze customer experience and behavior, known as market research surveys .
  • Academic research : Surveys prepared for scientific research that requires people's participation.
  • Social research : Surveys used by researchers to learn about social groups, for example so that products and services can be designed to serve various needs.
  • Political research : Surveys measuring public opinion about policies and parties.
  • Psychological research : Surveys researching people's character traits, preferences, and behaviors.


How do I create my own survey for research?

Many businesses, organizations, and researchers need the quantitative data that surveys provide. Online surveys are one of the most common and inexpensive ways to collect it, and with the growing number of survey sites, creating one is within everyone's reach. Broadly, though, there are two main types of surveys, including offline methods:

  • A research questionnaire , created by the researcher, asks participants a set of questions and can be distributed by mail, online, or in person.
  • A research interview , in which the researcher asks participants a set of questions by phone or in person and records the answers.

1 - Decide the aim of the research and make survey questions

Before preparing a survey, you should lay the groundwork. Frame your research topic and questions so that you can build a usable database once the survey results are measured. For example, suppose you are doing market research . In that case, it is more useful to ask questions about your customers' requests and problems, or to extract data about the supply-demand ratio for a product.

2 - Determine the target audience

In fact, this step is intertwined with the first. While determining the method and scope of your survey, you are also determining your target audience. For example, if you intend to survey older people with low digital literacy, a face-to-face survey may give you better results.

3 - Create your research survey

In recent years, the use of surveys has grown considerably, especially in market research. As a result, businesses, organizations, and researchers have started working with survey applications and sites as their budgets allow. forms.app is one of these survey creation sites; it stands out with over 350 templates , a calculator you can use in quizzes and quote forms, and various customization and editing options.

1  - Choose a survey tool that offers free online form creation, such as forms.app.

2  - Begin your survey quickly by finding survey templates related to your research, or start from scratch.

3  - Write the questions you have prepared in the form fields on the edit page .

4  - Customize your form as you wish on the design page .

5  - Lastly, share your form on any platform or website.

4 - Preview and test your survey

Just as you should check every step of your research, don't forget to check your survey after you create it. In forms.app, for example, you can click the eye icon at the top right of the edit page to see how your survey will look when published, on desktop, tablet, and mobile separately. You can then open the form by clicking open . After filling out your form as a test, you can see the results in the Results tab . If you are worried about mistakes in your survey, you can check our article about common mistakes to avoid .


5 tips for successful survey research

Doing research is a long and arduous task, so it is inevitable that some parts of your research will need to be checked again and again. These tips give you the keys to conducting survey research well.

1  - Be clear when preparing your questions : If you want respondents to understand and answer your questions, prepare them accordingly. Clarity and restraint should be your first priority. A question may be long or short, but if its meaning is ambiguous, the respondent may give you an answer other than the one you hoped for.

2  - Create an area on your form where people can discuss your research : Although we rely on the data surveys provide, sometimes the answer to an unasked question can illuminate your research. By this we mean the answers a participant has in mind but does not find appropriate to write in the other form fields. Adding an empty form field where the participant can comment, criticize, or elaborate can bring unexpected and valuable feedback to your research.

3  - Prepare your survey questions so respondents are willing to answer truthfully : Many research subjects include questions participants hesitate to answer, whether political, religious, ethnic, sexual, or class-related. In such cases, stating that you will protect respondents' confidentiality and will not share the data with third parties can remove their doubts and concerns.

4  - Be specific in how you ask your questions : General questions are very hard to answer, and the results rarely satisfy the researcher, so make your questions as specific as possible.

5  - Analyze the survey results using online survey tools : One practical aspect of online survey tools is that responses are stored automatically, so you can analyze them later. You can also revise your survey based on its submission percentage: a low submission percentage may mean there is a problem with the questions you ask or the audience you chose.
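
As a simple illustration of that submission-percentage check (a sketch with made-up numbers, not a built-in forms.app feature):

```python
# Compute a survey's submission percentage from raw counts (hypothetical data).

def submission_rate(completed: int, invited: int) -> float:
    """Share of invited people who completed the survey."""
    if invited <= 0:
        raise ValueError("invited must be positive")
    return completed / invited

rate = submission_rate(completed=132, invited=600)
print(f"Submission rate: {rate:.1%}")  # Submission rate: 22.0%
# A low rate may point to problems with the questions or the chosen audience.
```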

How to share your survey with your target audience

You can share a survey by phone, online, or face-to-face. Since the primary purpose of this article is to show the benefits of online surveys and how to create and share them, a free form creation site like forms.app solves this problem.

  • On the share page of your form, you can customize privacy settings before sharing it.
  • Then you can share it directly on WhatsApp, Facebook, Twitter, email, or WordPress by clicking the relevant icons .
  • There is also an option to copy the link , and you can embed your form as an iFrame .
  • Or you can share it by creating a QR code .

This article showed, step by step, how to prepare a survey for a research paper and the best ways to conduct it. It aims to show you how to save time and money with online surveys on your long and grueling research journey. It is up to you to follow these steps and tips and make your research stand out from other projects.

Atakan is a content writer at forms.app. He likes to research various fields like history, sociology, and psychology. He knows English and Korean. His expertise lies in data analysis, data types, and methods.



How to write great survey questions (with examples)

Learning how to write survey questions is both art and science. The wording you choose can make the difference between accurate, useful data and just the opposite. Fortunately, we’ve got a raft of tips to help.

Figuring out how to make a good survey that yields actionable insights is all about sweating the details. And writing effective questionnaire questions is the first step.

Essential for success is understanding the different types of survey questions and how they work. Each format needs a slightly different approach to question-writing.

In this article, we’ll share how to write survey questionnaires and list some common errors to avoid so you can improve your surveys and the data they provide.


Survey question types

Did you know that Qualtrics provides 23 question types you can use in your surveys? Some are very popular and used frequently by a wide range of people from students to market researchers, while others are more specialist and used to explore complex topics. Here’s an introduction to some basic survey question formats, and how to write them well.

Multiple choice

Familiar to many, multiple choice questions ask a respondent to pick from a range of options. You can set up the question so that only one selection is possible, or allow more than one to be ticked.

When writing a multiple choice question…

  • Be clear about whether the survey taker should choose one (“pick only one”) or several (“select all that apply”).
  • Think carefully about the options you provide, since these will shape your results data.
  • The phrase “of the following” can be helpful for setting expectations. For example, if you ask “What is your favorite meal?” and provide the options “hamburger and fries” and “spaghetti and meatballs”, there’s a good chance your respondent’s true favorite won’t be included. If you add “of the following”, the question makes more sense.

Rank order

Asking participants to rank things in order, whether it’s order of preference, frequency or perceived value, is done using a rank structure. There can be a variety of interfaces, including drag-and-drop, radio buttons, text boxes and more.

When writing a rank order question…

  • Explain how the interface works and what the respondent should do to indicate their choice. For example “drag and drop the items in this list to show your order of preference.”
  • Be clear about which end of the scale is which. For example, “With the best at the top, rank these items from best to worst”
  • Be as specific as you can about how the respondent should consider the options and how to rank them. For example, “thinking about the last 3 months’ viewing, rank these TV streaming services in order of quality, starting with the best”

Slider

Slider structures ask the respondent to move a pointer or button along a scale, usually a numerical one, to indicate their answers.

When writing a slider question…

  • Consider whether the question format will be intuitive to your respondents, and whether you should add help text such as “click/tap and drag on the bar to select your answer”
  • Qualtrics includes the option for an open field where your respondent can type their answer instead of using a slider. If you offer this, make sure to reference it in the survey question so the respondent understands its purpose.

Text entry

Also known as an open field question, this format allows survey-takers to answer in their own words by typing into the comments box.

When writing a text entry question…

  • Use open-ended question structures like “How do you feel about…” “If you said x, why?” or “What makes a good x?”
  • Open-ended questions take more effort to answer, so use these types of questions sparingly.
  • Be as clear and specific as possible in how you frame the question. Give them as much context as you can to help make answering easier. For example, rather than “How is our customer service?”, write “Thinking about your experience with us today, in what areas could we do better?”

Matrix table

Matrix structures allow you to address several topics using the same rating system, for example a Likert scale (Very satisfied / satisfied / neither satisfied nor dissatisfied / dissatisfied / very dissatisfied).

When writing a matrix table question…

  • Make sure the topics are clearly differentiated from each other, so that participants don’t get confused by similar questions placed side by side and answer the wrong one.
  • Keep text brief and focused. A matrix includes a lot of information already, so make it easier for your survey-taker by using plain language and short, clear phrases in your matrix text.
  • Add detail to the introductory static text if necessary to help keep the labels short. For example, if your introductory text says “In the Philadelphia store, how satisfied were you with the…” you can make the topic labels very brief, for example “staff friendliness” “signage” “price labeling” etc.

Now that you know your rating scales from your open fields, here are the 7 most common mistakes to avoid when you write questions. We’ve also added plenty of survey question examples to help illustrate the points.

Likert Scale Questions

Likert scales are commonly used in market research when dealing with single-topic surveys. They're simple and reliable when combatting survey bias . For each question or statement, subjects choose from a range of possible responses. The responses typically include:

  • Strongly agree
  • Agree
  • Neither agree nor disagree
  • Disagree
  • Strongly disagree
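
When it comes time to analyze Likert data, a common convention is to map the labels onto a 1-5 scale, since the points are roughly equidistant (see mistake #6 below). A minimal Python sketch with hypothetical responses:

```python
# Map 5-point Likert labels to numeric scores and compute the item mean.
LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [LIKERT[r] for r in responses]
print(sum(scores) / len(scores))  # 4.0 on the 1-5 scale
```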

7 survey question examples to avoid.

There are countless great examples of well-written survey questions, but how do you know whether yours will perform well? We've highlighted the 7 most common mistakes made when gathering customer feedback with online surveys.

Survey question mistake #1: Failing to avoid leading words / questions

Subtle wording differences can produce great differences in results. For example, non-specific words and ideas can cause a certain level of confusing ambiguity in your survey. “Could,” “should,” and “might” all sound about the same, but may produce a 20% difference in agreement to a question.

In addition, strong words such as “force” and “prohibit” represent control or action and can bias your results.

Example: The government should force you to pay higher taxes.

No one likes to be forced, and no one likes higher taxes. This agreement scale question makes it sound doubly bad to raise taxes. When survey questions read more like normative statements than questions looking for objective feedback, any ability to measure that feedback becomes difficult.

Wording alternatives can be developed. How about simpler statements such as: “The government should increase taxes,” or “The government needs to increase taxes.”

Example: How would you rate the career of legendary outfielder Joe DiMaggio?

This survey question tells you Joe DiMaggio is a legendary outfielder. This type of wording can bias respondents.

How about replacing the word “legendary” with “baseball”, as in: How would you rate the career of baseball outfielder Joe DiMaggio? A rating scale question like this gets more accurate answers from the start.

Survey question mistake #2: Failing to give mutually exclusive choices

Multiple choice response options should be mutually exclusive so that respondents can make clear choices. Don’t create ambiguity for respondents.

Review your survey and identify ways respondents could get stuck with either too many or no single, correct answers to choose from.

Example: What is your age group?

What answer would you select if you were 10, 20, or 30? Survey questions like this will frustrate a respondent and invalidate your results.

Example: What type of vehicle do you own?

This question has the same problem. What if the respondent owns a truck, hybrid, convertible, cross-over, motorcycle, or no vehicle at all?

Survey question mistake #3: Not asking direct questions

Questions that are vague and do not communicate your intent can limit the usefulness of your results. Make sure respondents know what you’re asking.

Example: What suggestions do you have for improving Tom’s Tomato Juice?

This question may be intended to obtain suggestions about improving taste, but respondents will offer suggestions about texture, the type of can or bottle, about mixing juices, or even suggestions relating to using tomato juice as a mixer or in recipes.

Example: What do you like to do for fun?

Finding out that respondents like to play Scrabble isn’t what the researcher is looking for, but it may be the response received. If the researcher actually wants to know about movies versus other forms of paid entertainment, the question doesn’t say so. A respondent could take this question in many directions.

Survey question mistake #4: Forgetting to add a “prefer not to answer” option

Sometimes respondents may not want you to collect certain types of information or may not want to provide you with the types of information requested.

Questions about income, occupation, personal health, finances, family life, personal hygiene, and personal, political, or religious beliefs can be too intrusive and be rejected by the respondent.

Privacy is an important issue to most people. Incentives and assurances of confidentiality can make it easier to obtain private information.

While current research does not support that PNA (Prefer Not to Answer) options increase data quality or response rates, many respondents appreciate this non-disclosure option.

Furthermore, different cultural groups may respond differently. One recent study found that while U.S. respondents skip sensitive questions, Asian respondents often discontinue the survey entirely.

  • What is your race?
  • What is your age?
  • Did you vote in the last election?
  • What are your religious beliefs?
  • What are your political beliefs?
  • What is your annual household income?

These types of questions should be asked only when absolutely necessary. In addition, they should always include an option to not answer. (e.g. “Prefer Not to Answer”).

Survey question mistake #5: Failing to cover all possible answer choices

Do you have all of the options covered? If you are unsure, conduct a pretest version of your survey using “Other (please specify)” as an option.

If more than 10% of respondents (in a pretest or otherwise) select “other,” you are probably missing an answer. Review the “Other” text your test respondents have provided and add the most frequently mentioned new options to the list.

Example: You indicated that you eat at Joe's fast food once every 3 months. Why don't you eat at Joe's more often?

There isn't a location near my house

I don't like the taste of the food

Never heard of it

This question doesn’t include other options, such as healthiness of the food, price/value or some “other” reason. Over 10% of respondents would probably have a problem answering this question.
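
To make the 10% rule concrete, here is a small Python sketch of that pretest check, using made-up responses to the Joe's fast food question:

```python
# Flag a question when more than 10% of pretest respondents chose "Other".
from collections import Counter

responses = [
    "There isn't a location near my house", "Other", "I don't like the taste",
    "Other", "Never heard of it", "Other", "I don't like the taste",
    "There isn't a location near my house", "Other", "Never heard of it",
]

share_other = Counter(responses)["Other"] / len(responses)
if share_other > 0.10:  # the 10% rule of thumb from the text
    print(f"'Other' chosen by {share_other:.0%}: review the write-ins and add "
          "the most frequent new options to the list.")
```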

Survey question mistake #6: Not using unbalanced scales carefully

Unbalanced scales may be appropriate for some situations and promote bias in others.

For instance, a hospital might use an Excellent - Very Good - Good - Fair scale where “Fair” is the lowest customer satisfaction point because they believe “Fair” is absolutely unacceptable and requires correction.

The key is to correctly interpret your analysis of the scale. If “Fair” is the lowest point on a scale, then a result slightly better than fair is probably not a good one.

Additionally, scale points should represent equidistant points on a scale; that is, the conceptual distance from one point to the next should be the same.

For example, researchers have shown the points to be nearly equidistant on the strongly disagree–disagree–neutral–agree–strongly agree scale.

Set your bottom point as the worst possible situation and top point as the best possible, then evenly spread the labels for your scale points in-between.

Example: What is your opinion of Crazy Justin's auto-repair?

Pretty Good

Great

Fantastic

Incredible

The Best Ever

This question puts the center of the scale at fantastic, and the lowest possible rating as “Pretty Good.” This question is not capable of collecting true opinions of respondents.

Survey question mistake #7: Not asking only one question at a time

There is often a temptation to ask multiple questions at once. This can cause problems for respondents and influence their responses.

Review each question and make sure it asks only one clear question.

Example: What is the fastest and most economical internet service for you?

This is really asking two questions. The fastest is often not the most economical.

Example: How likely are you to go out for dinner and a movie this weekend?

Even though “dinner and a movie” is a common term, this is two questions as well. It is best to separate activities into different questions or give respondents options that cover every combination:

Dinner and Movie

Dinner Only

Movie Only

Neither

5 more tips on how to write a survey

Here are 5 easy ways to help ensure your survey results are unbiased and actionable.

1. Use the Funnel Technique

Structure your questionnaire using the “funnel” technique. Start with broad, general interest questions that are easy for the respondent to answer. These questions serve to warm up the respondent and get them involved in the survey before giving them a challenge. The most difficult questions are placed in the middle – those that take time to think about and those that are of less general interest. At the end, we again place general questions that are easier to answer and of broad interest and application. Typically, these last questions include demographic and other classification questions.

2. Use “Ringer” questions

In social settings, are you more introverted or more extroverted?

That was a ringer question and its purpose was to recapture your attention if you happened to lose focus earlier in this article.

Questionnaires often include “ringer” or “throw away” questions to increase interest and willingness to respond to a survey. These questions are about hot topics of the day and often have little to do with the survey. While these questions will definitely spice up a boring survey, they require valuable space that could be devoted to the main topic of interest. Use this type of question sparingly.

3. Keep your questionnaire short

Questionnaires should be kept short and to the point. Most long surveys are not completed, and the ones that are completed are often answered hastily. A quick look at a survey containing page after page of boring questions produces a response of, “there is no way I’m going to complete this thing”. If a questionnaire is long, the person must either be very interested in the topic, an employee, or paid for their time. Web surveys have some advantages because the respondent often can't view all of the survey questions at once. However, if your survey's navigation sends them page after page of questions, your response rate will drop off dramatically.

How long is too long?  The sweet spot is to keep the survey to less than five minutes, which translates into about 15 questions. The average respondent is able to complete about 3 multiple choice questions per minute, and an open-ended text response question counts for about three multiple choice questions, depending, of course, on the difficulty of the question. While only a rule of thumb, this formula will give you a good estimate of your survey's limits.
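
As a quick sketch of that rule of thumb (the 3-questions-per-minute figure is the article's estimate, not a hard law):

```python
# Estimate survey length: ~3 multiple-choice questions per minute, with each
# open-ended question counting as roughly 3 multiple-choice questions.

def estimated_minutes(multiple_choice: int, open_ended: int) -> float:
    effective_questions = multiple_choice + 3 * open_ended
    return effective_questions / 3

print(estimated_minutes(multiple_choice=12, open_ended=1))  # 5.0 minutes
```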

4. Watch your writing style

The best survey questions are always easy to read and understand. As a rule of thumb, the level of sophistication in your survey writing should be at the 9th to 11th grade level. Don’t use big words. Use simple sentences and simple choices for the answers. Simplicity is always best.

5. Use randomization

We know that being first on the ballot increases a candidate's chance of being elected. Similar bias occurs in all questionnaires when the same answer appears at the top of the list for every respondent. Randomization corrects this bias by randomly rotating the order of answer choices in multiple choice and matrix questions for each respondent.
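
A minimal sketch of what per-respondent randomization means; in practice, most survey platforms offer this as a built-in setting, so you rarely code it yourself:

```python
# Shuffle answer options independently for each respondent.
import random

OPTIONS = ["Brand A", "Brand B", "Brand C", "Brand D"]

def options_for_respondent(respondent_id: int) -> list[str]:
    rng = random.Random(respondent_id)  # seed per respondent, for reproducibility
    shuffled = OPTIONS.copy()
    rng.shuffle(shuffled)
    return shuffled

print(options_for_respondent(101))
print(options_for_respondent(102))  # a different order for the next respondent
```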

While not totally inclusive, the seven mistakes above are the most common offenders in survey question writing, and the five tips here should steer you in the right direction.

Focus on creating clear questions and having an understandable, appropriate, and complete set of answer choices. Great questions and great answer choices lead to great research success. To learn more about survey question design, download our eBook, The Qualtrics survey template guide or get started with a free survey account with our world-class survey software .

Sarah Fisher



How to write survey questions for research – with examples


  • Post author: Marta Costa
  • Post published: April 5, 2023
  • Post category: Data Collection & Data Quality

A good survey can make or break your research. Learn how to write strong survey questions, learn what not to do, and see a range of practical examples.

The accuracy and relevance of the data you collect depend largely on the quality of your survey questions . In other words, good questions make for good research outcomes. It makes sense, then, that you should put considerable thought and planning into writing your survey or questionnaire.

In this article, we’ll go through what a good survey question looks like, talk about the different kinds of survey questions that exist, give you some tips for writing a good survey question, and finally, we’ll take a look at some examples. 

What is a good survey question?

A good survey question should contain simple and clear language. It should elicit responses that are accurate and that help you learn more about your target audience and their experiences. It should also fit in with the overall design of your survey project and connect with your research objective. There are many different types of survey questions. Let’s take a look at some of them now. 


Types of survey questions

Different types of questions are used for different purposes. Often questionnaires or surveys will combine several types of questions. The types you choose will depend on the overall design of your survey and your aims.  Here is a list of the most popular kinds of survey questions:  

Open-ended

[Example open-ended question: “Please list the names and ages of members of your household in the text box below.”]

These questions can’t be answered with a simple yes or no. They require the respondent to use more descriptive language to share their thoughts and answer the question. These types of questions result in qualitative data.

Closed-ended

A closed-ended question is the opposite of an open-ended question. Here the respondent’s answers are normally restricted to a yes or no, true or false, or multiple-choice answer. This results in quantitative data.


Dichotomous

This is a type of closed-ended question. The defining characteristic of these questions is that they have two opposing fields. For example, a question that can only be answered with a yes/no answer is a dichotomous question. 


Multiple choice


These are another type of closed-ended question. Here you give the respondent several possible ways, or options, in which they can respond. It’s also common to have an “other” section with a text box where the respondent can provide an unlisted answer.

Rating scale

This is another type of closed-ended question. Here you would normally present two extremes, and the respondent has to choose between these extremes or an option placed along the scale.

Likert scale

A Likert scale is a form of a rating scale. These are generally used to measure attitudes towards something by asking the respondent to agree or disagree with a statement. They are commonly used to measure satisfaction. 


Ranking scale 

Here the respondents are given a few options and they need to order these different options in terms of importance, relevance, or according to the instructions.  

Demographic questions

These are often personal questions that allow you to better understand your respondents and their backgrounds. They normally cover questions related to age, race, marital status, education level, etc.


7 Tips for writing a good survey question

The following 7 tips will help you to write a good survey question: 

1. Use clear, simple language

Your survey questions must be easy to understand. When they’re straight to the point, it’s more likely that your respondent will understand what you are asking of them and be able to respond accurately, giving you the data you need. 

2. Keep your questions (and answers) concise

When sentences or questions are convoluted or confusing, respondents might misunderstand them, and if your questions are too long, respondents may get bored. Keep your multiple-choice answer lists concise as well. If your questions are too long, or if you provide too many options, you may receive responses that are inaccurate or that don't truly represent how the respondent feels. To limit the number of options a respondent sees, you can use a survey platform like SurveyCTO to filter choice lists and make it easy for respondents to answer quickly. If you have an exceptionally long list of possible responses, like countries, implement search functionality in your list of choices so your respondents can quickly find their selection.
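
To illustrate the idea of search-filtering a long choice list (a generic sketch, not SurveyCTO's actual implementation), here is a minimal Python version with a hypothetical country list:

```python
# Narrow a long choice list down to the entries matching the respondent's query.
COUNTRIES = ["Uganda", "Ukraine", "United Kingdom", "United States", "Uruguay"]

def filter_choices(query: str, choices: list[str]) -> list[str]:
    q = query.casefold()
    return [c for c in choices if q in c.casefold()]

print(filter_choices("uni", COUNTRIES))  # ['United Kingdom', 'United States']
```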

3. Don’t add bias to your question

You should avoid leading your respondent in any particular direction with your questions; you want their response to be 100% their own thoughts, without undue influence. An example of a question that could lead the respondent in a particular direction would be:

How happy are you to live in this amazing area?

By adding the adjective “amazing” before “area”, you are putting the idea in the respondent's head that the area is amazing. This could cloud their judgment and influence the way they answer the question. The word “happy” together with “amazing” may also be problematic. A better, less loaded way to ask this question might be:

How satisfied are you living in this area?

4. Ask one question at a time

Asking multiple things in one question is confusing and will lead to inaccurate answers. When you write a question, you should know exactly what you want to achieve; this will help you avoid combining two questions in one. Here is an example of a double-barrelled question that would be difficult for a respondent to answer:

Please answer yes or no to the following question: Do you drive to work and do you carry any passengers?

The respondent is being asked two things, yet they only have the opportunity to respond to one, and they don't know which one they should answer. Avoid this kind of questioning to get clearer, more accurate data.

5. Account for all possible answer choices

You should give your respondent the ability to answer a question accurately. For instance, if you are asking a demographic question you’ll need to provide options that accurately reflect their experience. Below, you can see there is an “other” option with space where the respondent can answer how they see fit, in the case that they don’t fit into any of the other options. Which gender do you most identify with:

  • Prefer not to say
  • Other [specify]

6. Plan the question flow and choose your questions carefully

Question writing goes hand-in-hand with questionnaire design, so when writing survey questions, you should consider the survey as a whole. For example, if you write a closed-ended question like:

Were you satisfied with the customer service you received when you bought x product?

you might want to follow it up with an open-ended question such as:

Please explain the reason for your answer.

This will help you draw out more information from your respondent that can help you assess the strengths and weaknesses of your customer service team. Making sure your questions flow in a logical order is also important.

For instance, if you ask a question about the total cost of a person's childcare arrangements but don't know whether they have children, you should first ask if they have children and how many (see the sketch below). It's also a good idea to start your survey with short, easy-to-answer, non-sensitive questions before moving on to something more complex. This way, you're more likely to engage your audience early on and keep them going through the survey. You should also consider whether you need qualitative or quantitative data for your research outcomes, or a mix of the two; this will help you decide the balance of closed-ended and open-ended questions to use. With closed-ended questions, you get quantitative data, which is fairly conclusive and simple to analyze. It can be useful when you need to measure specific variables or metrics like population sizes, education levels, literacy levels, etc.
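
As a plain-language illustration of that skip logic (survey platforms configure this declaratively; the Python below is only a sketch over hypothetical pre-recorded answers):

```python
# Ask the childcare follow-ups only when the screening question allows it.

def run_survey(answers: dict) -> dict:
    collected = {"has_children": answers["has_children"]}
    if answers["has_children"] == "yes":
        # These questions are skipped entirely for respondents without children.
        collected["num_children"] = answers["num_children"]
        collected["childcare_cost"] = answers["childcare_cost"]
    return collected

print(run_survey({"has_children": "no"}))
print(run_survey({"has_children": "yes", "num_children": 2, "childcare_cost": 800}))
```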


On the other hand, qualitative data gained by open-ended questions can be full of insights. However, these questions can be more laborious for the respondent to complete making it more likely for them to skip through or give a token answer. They’re also more complex to analyze.

7. Test your surveys

Before a questionnaire goes anywhere near a respondent, it needs to be checked over. Mistakes in your survey questions can give inaccurate results. They can also waste time and resources.  Having an impartial person check your questions can also help prevent bias. So, not only should you check your work, but you should also share it with colleagues for them to check.  After checking your survey questions, make sure to check the functionality and flow of your survey. If you’re building your form in SurveyCTO, you can use our form testing interface to catch errors, make quick fixes, and test your workflows with real data.


Examples of good survey questions

Now that we’ve gone through some dos and don’ts for writing survey questions, we can move on to more practical examples of how a good survey question should look. To keep these specific to the research world we’ll look at three categories of questions. 

  • Household survey questions 
  • Monitoring and evaluation survey questions 
  • Impact evaluation survey questions

1. Household survey questions

2. Monitoring and evaluation survey questions

3. Impact evaluation questions

Strong survey questions lead to better research outcomes

Writing good survey questions is essential if you want to achieve your research aims. A good survey question is clear, concise, and written in simple language. It should be free of bias and should not lead the respondent in any direction. Your survey questions need to complement each other, engage your audience, and connect back to the overall objectives of your research. Creating survey questions and survey designs is a large part of your research, but it is just one part of the puzzle. When your questions are ready, you'll need to conduct your survey and then find a way to manage your data and workflow. Take a look at this post to see more ways SurveyCTO can help you beyond writing your research survey questions.

Your next steps: Explore more resources

To keep reading about how SurveyCTO can help you design better surveys, take a look at these resources:  

  • Sign up here to get notified about our monthly webinars, where organizations like IDinsight  share best practices for effective surveys.
  • Check out previous webinars from SurveyCTO about survey forms, like this one on high-frequency checks for monitoring surveys. 
  • Sign up for a free trial of SurveyCTO for your next survey project.

To see how SurveyCTO can help you with your survey needs, start a free 15-day trial today. No credit card required. 


Marta Costa

Senior Product Specialist

Marta is a member of the Customer Success team for Dobility. She helps users working at NGOs, nonprofits, survey firms, universities and research institutes achieve their objectives using SurveyCTO, and works on new ways to help users get the most out of the platform.

Marta has worked in international development consultancy and research, supporting and coordinating impact evaluations, monitoring and evaluation projects, and data collection processes at the national level in areas such as education, energy access, and financial inclusion.


How to Conduct Surveys – Guide with Examples

Published by Alvin Nicolas at August 16th, 2021 , Revised On August 29, 2023

Surveys are a popular primary data collection method and can be used in various types of research . A researcher formulates a survey that includes questions relevant to the research topic, selects the participants, and distributes the questionnaire among them, either online or offline. It consists of open-ended or closed-ended questions.

Objectives and Uses of Surveys

  • Surveys are conducted for the planning of national, regional, or local programs.
  • They help to study the perceptions of the community related to the topic.
  • Surveys are used in market research, social sciences, and commercial settings.
  • They can also be used for various other disciplines, from business to anthropology.
  • Surveys are frequently used in quantitative research .

Guidelines for Conducting a Survey

Before conducting a survey, you should follow these steps:

  • Construct a clear and concise research problem statement focusing on what is being investigated and why the research is carried out.
  • Formulate clear and unbiased questions for the survey.
  • Test the questions on volunteer groups and make necessary changes if required.
  • Determine the mode of survey distribution.
  • Schedule the timing of the survey.
  • Use a professional tone, a scholarly approach, and an academic format for your survey.
  • Ensure the privacy and anonymity of the participants.
  • Avoid offensive language and biased questions.
  • Seek the participants' opinions.
  • Inform the participants about the survey.
  • Calculate the time required for gathering, analysing, and reporting the data.

How to Conduct a Survey?

The following are the steps for conducting a survey.

  • Set the aims of your research
  • Select the type of survey
  • Prepare a list of questions
  • Invite the participants
  • Record the responses of the participants
  • Distribute the survey questions
  • Analyse the results
  • Write your report

Step 1: Set the Aims of your Research

Before conducting research, you need to form a clear picture of the intended outcomes of your study.  Create a research question  and devise the goals of your research. Based on the requirements of your research, select the participants and decide whether your survey will be conducted online or offline.

You need to select a specific group of participants for your research. The participants can be:

  • A group of college students
  • Hospital staff
  • A group of people in public places
  • Customers or employees of a specific company
  • A group of people based on their age, gender, and profession, etc.

Sometimes it’s impossible to survey an entire population individually, because it requires a lot of time and effort. In such cases, you can select a group of people from the target community; this group is called the  sample.

  • 50 customers of a company
  • 40 students of class 12
  • 30 boys and 30 girls of age 14-15

You can also use an online survey if your target population is large; it helps you get the maximum number of responses within a short time.
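
If you want a rough estimate of how large the sample should be, one widely used approach is Cochran's formula with a finite-population correction. A worked sketch in Python, assuming the common defaults of a 95% confidence level (z = 1.96), a 5% margin of error, and maximum variability (p = 0.5):

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

print(sample_size(10_000))  # 370 responses needed for these defaults
```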

Useful reading: What is correlational research , a comprehensive guide for researchers.


Step 2: Select the Type of Survey

Questionnaire
Definition: Used in descriptive research; a written questionnaire is distributed among the participants online, by mail, or in person, and participants are asked to fill out the questions.
Pros: Inexpensive; time-saving; easy to conduct.
Cons: Participants may not answer honestly and can leave the questionnaire incomplete.

Interviews
Definition: The participants are asked questions in person or on the phone, and the researcher records the responses.
Pros: In-depth responses; flexible and adaptable; the participant's gestures and expressions are visible if the interview is in person.
Cons: Time-consuming; does not work well if the number of participants is large.

Online/Web/Electronic survey
Definition: Computers, laptops, and mobile phones play a significant role in this kind of survey. A set of questions is sent through email or text to a selected target sample, and participants respond to the questions.
Pros: Easy to conduct; requires less time than interviews; you can target participants globally.
Cons: Responses can be incorrect or dishonest; computer knowledge is required to participate.

Rating scales
Definition: Closed-ended questions that record the participants' responses to a specific service, product, or topic. Participants provide feedback by choosing a point on the rating scale according to their experience, whether very good, average, or below average. You may have seen such rating scales on many shopping sites.
Pros: Easy to conduct; requires less time; you can target participants globally.
Cons: Responses can be incorrect or dishonest; computer knowledge is required to participate.

Checklists
Definition: A series of statements used to evaluate the performance of an individual, organisation, or service. Participants tick the statements according to their observations and experiences.
Pros: Cost-effective.
Cons: Responses may not be reliable.

Example of a rating scale statement: “I enjoy reading paper books more than reading e-books.”

Example of a rating scale question: “How do you feel about your ability to find a career option according to your goals?”

Step 3: Prepare a List of Questions

You can use various types of questions in your survey, such as open-ended, closed-ended, and multiple-choice questions. Most participants prefer short multiple-choice questions. Use simple and clear language to avoid misunderstanding, and avoid offensive language.

If you are using checklists in your survey to get feedback on a specific feature, service, or product, then write the statements based on your evaluation aims.

Closed-ended Questions

  • Questions with answers such as (yes/no, agree/disagree, true/ false)
  • Rating scales with points or stars to measure the satisfaction of the people.
  • Multiple-choice questions offering several options, allowing either a single answer or several answers.

Open-ended Questions

Open-ended  questions require the participants' individual answers based on their opinions, experiences, and choices. The answers can be one word or full sentences.

  • Tell me about your relationship with your boss.
  • Why did you choose this answer?
  • What’s your opinion on women’s education?
  • How do you see the future?
  • What is a success, according to you?

Step 4: Invite the Participants

You can try many ways to invite participants to your survey. You can inform them through emails or texts, post your survey on social media, or design a banner to display on websites to grab respondents' attention.

Step 5: Record the Responses of the Participants

One of the essential steps is gathering responses from the participants. In many cases, people don't pay close attention to the survey questions or leave them incomplete. You can offer rewards to increase your response rates, or promise to share the outcomes with your participants to encourage them to respond.

Step 6: Distribute the Survey Questions

You need to decide the sample size (the number of participants and responses required) according to your research requirements, and determine whether you will conduct the survey online or offline.

Step 7: Analyse the Results

You can store the data in tabulated form, charts, or graphs, or print it out as a spreadsheet. You can use text analysis to analyse the findings of a questionnaire survey and  thematic analysis  for  interview  surveys. Responses to online surveys are stored automatically, so you can analyse them directly.
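
As an illustration, if your survey tool exports responses as a spreadsheet, a few lines of Python can produce the tabulated counts described here (the file and column names below are hypothetical):

```python
import pandas as pd

df = pd.read_csv("responses.csv")           # one row per respondent
counts = df["satisfaction"].value_counts()  # tally each answer option
print(counts)
print((counts / counts.sum()).round(2))     # shares, ready for a chart
```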

Step 8: Write your Report

The final step is to write a report for your survey. You need to check whether you have met the objectives of your research.

In the  introduction , you need to explain your survey's whole procedure, mentioning when and where the survey was conducted and the methods of analysis you used.

A successful survey produces reliable feedback on the survey questions that serves as evidence for your research. In online surveys, the responses help you measure participants' satisfaction and their positive or negative opinions.

In the discussion and conclusion sections, you can  explain your findings  using supporting evidence and conclude the results by answering your research questions.

Frequently Asked Questions

What are the basic steps to conduct a survey?

Basic steps to conduct a survey:

  • Define objectives and target audience.
  • Develop clear and concise questions.
  • Choose survey method (online, phone, etc.).
  • Pilot test to refine questions.
  • Distribute to participants.
  • Collect and analyze responses.
  • Draw conclusions and share findings.


How to create a survey on Google Forms and share it with others to collect their responses

  • A Google Forms survey is a free tool that can be used for fun or professional research purposes.
  • You can customize Google Forms questions and answers in many different ways.
  • After you've created a survey using Google Forms, you can share it with others to collect their responses.

Google Forms is a free and useful tool that enables you to create surveys for others to complete. Since you can customize questions and answer options, it's helpful for conducting research but can also be just for fun. Once you've created your survey, you can quickly share it through an email or link, or embed it into your website. 

How to create a Google Forms survey

1. Go to forms.google.com and log in to your Google account if prompted to do so.

2. Click on the form labeled Blank — it's represented by a plus symbol.

3. Click the field called Untitled form and type the title you want to use for the survey. You can also write a description of the survey by writing in the field directly beneath it, called Form description .

4. Once you title your survey, you can add questions by clicking on the icons on the right side of the screen. Here's a quick rundown of the icons and what they do.

  • Circle with plus sign: Add a new question.
  • Paper with arrow: Import questions from another source (such as another Google Form you created).
  • A large and small T: Add a text box that includes a title and description without a question, such as providing additional context.
  • A square with two small triangles: Add a new question that includes an image.
  • A rectangle with one triangle inside: Add a new question that includes a video.
  • Two parallel rectangles: Add a new section to the survey to differentiate it from other sections.

5. Depending on the type of question you're asking, respondents may answer in different ways, such as multiple choice or short answer. You can change the type of answer by clicking on a dropdown menu inside the question box and then clicking on the preferred answer type. Available answer types include:

  • Short answer: A one-line answer that must be typed in.
  • Paragraph: A paragraph-length answer that must be typed in.
  • Multiple choice: Respondents are shown several options but can select only one.
  • Checkboxes: Respondents can select multiple answers from a list.
  • Dropdown: The answer must be selected from a dropdown menu of options.
  • File upload: The question is answered by uploading an external file, such as a document or image.
  • Linear scale: The respondent answers by selecting a point on a numeric or qualitative scale, as in a customer service survey.
  • Multiple choice grid: The respondent selects one answer per row from a grid.
  • Checkbox grid: The respondent can select multiple answers per row from a grid.
  • Date: The answer must be a specific date.
  • Time: The answer must be a specific time.

6. Once you add a type of question, you can write the question itself, as well as individual answer options. To do so, click on the respective field and type the text you want to use.

  • If you want to delete an answer, click on the X icon to the right of that answer.
  • If you want to delete a question, click on the trash can icon at the bottom of the question box.
  • To duplicate a question, click on the Copy icon to the left of the trash icon.
  • You can also click on the slider labeled Required to make responses required or optional.
  • For additional settings, click on the three vertical dots.

7. Repeat the process until you've created all the questions you need for your survey.

How to share your Google Forms survey

Once you finish creating your survey, you can share it with others in a few different ways. Click the Send button in the upper-right corner of the form to get started.

Email the survey to specific recipients

With the Email tab selected (first from the left), click in the field labeled To and type in the email address(es) of your intended survey recipient(s). If you prefer, you can include a message as well. Click Send in the bottom-right corner once you're finished, and the survey will be sent to all of the recipients.

Copy a link to the survey and post it elsewhere

With the Link tab selected (second from the left), click on the Copy button in the bottom right corner of the screen. If you want a shorter version of the link, click on the field next to Shorten URL so that a check mark appears, and then click Copy . From there, you can paste the link elsewhere, such as on social media, and anyone who clicks on the link can answer your survey.

Embed the survey into your website

This one is a little more complicated, as it involves adding the survey's code to your personal website or blog. With the Embed HTML tab selected (third from the left), click Copy in the bottom-right corner of the screen. This copies the survey's code so that it can be embedded in the code of your website or blog, which you'll do on the website's host platform (such as WordPress).


Writing Good Survey Questions: 10 Best Practices


August 20, 2023


Unfortunately, there is no simple formula for cranking out good, unbiased questionnaires.

That said, there are certainly common mistakes in survey design that can be avoided if you know what to look for. Below, I’ve provided the 10 most common and dangerous errors that can be made when designing a survey and guidelines for how to avoid them.

In This Article:

1. Ask About the Right Things
2. Use Language that Is Neutral, Natural, and Clear
3. Don't Ask Respondents to Predict Behavior
4. Focus on Closed-Ended Questions
5. Avoid Double-Barreled Questions
6. Use Balanced Scales
7. Answer Options Should Be All-Inclusive and Mutually Exclusive
8. Provide an Opt-Out
9. Allow Most Questions to Be Optional
10. Respect Your Respondents

Ask Only Questions that You Need Answered

One of the easiest traps to fall into when writing a survey is to ask about too much. After all, you want to take advantage of this one opportunity to ask questions of your audience, right?

The most important thing to remember about surveys is to keep them short . Ask only about the things that are essential for answering your research questions. If you don’t absolutely need the information, leave it out.

Don’t Ask Questions that You Can Find the Answer to

When drafting a survey, many researchers slip into autopilot and start by asking a plethora of demographic questions . Ask yourself: do you need all that demographic information? Will you use it to answer your research questions? Even if you will use it, is there another way to capture it besides asking about it in a survey? For example, if you are surveying current customers, and they are providing their email addresses, could you look up their demographic information if needed?

Don’t Ask Questions that Respondents Can’t Answer Accurately

Surveys are best for capturing quantitative attitudinal data . If you’re looking to learn something qualitative or behavioral, there’s likely a method better suited to your needs. Asking the question in a survey is, at best, likely to introduce inefficiency in your process, and, at worst, will produce unreliable or misleading data.

For example, consider the question below:

[Example: a question asking whether a particular button on a page stands out]

If I were asked this question, I could only speculate about what might make a button stand out. Maybe a large size? Maybe a different color, compared to surrounding content? But this is merely conjecture. The only reliable way to tell if the button actually stood out for me would be to mock up the page and show it to me. This type of question would be better studied with other research methods, such as usability testing or A/B testing , but not with a survey.

Avoid Biasing Respondents

There are endless ways in which bias can be introduced into survey data , and it is the researcher’s task to minimize this bias as much as possible. For example, consider the wording of the following question.

[Example: a question prefaced by a statement that the organization is committed to achieving a 5-star satisfaction rating, followed by a request to rate satisfaction]

By initially providing the context that the organization is committed to achieving a 5-star satisfaction rating , the survey creators are, in essence, pleading with the respondent to give them one. The respondent may feel guilty providing an honest response if they had a less than stellar experience.

Note also the use of the word satisfaction . This wording subtly biases the participant into framing their experience as a satisfactory one.

An alternative wording of the question might remove the first sentence altogether, and simply ask respondents to rate their experience.

Use Natural, Familiar Language

We must always be on the lookout for jargon in survey design. If respondents cannot understand your questions or response options, you will introduce bad data into your dataset. While we should strive to keep survey questions short and simple, it is sometimes necessary to provide brief definitions or descriptions when asking about complex topics, to prevent misunderstanding. Always pilot your questionnaires with the target audience to ensure that all jargon has been removed.

Speak to Respondents Like Humans

For some reason, when drafting a questionnaire, many researchers introduce unnecessary formality and flowery language into their questions. Resist this urge. Phrase questions as clearly and simply as possible, as though you were asking them in an interview format.

3. Don't Ask Respondents to Predict Behavior

People are notoriously unreliable predictors of their own behavior. For various reasons, predictions are almost bound to be flawed, leading Jakob Nielsen to remind us to never listen to users.

Yet, requests for behavioral predictions are rampant in insufficiently thought-out UX surveys. Consider the question:  How likely are you to use this product? While a respondent may feel likely to use a product based on a description or a brief tutorial, their answer does not constitute a reliable prediction and should not be used to make critical product decisions.

Often, instead of future-prediction requests , you will see present-estimate requests : How often do you currently use this product in an average week? While this type of question avoids the problem of predictions, it still is unreliable. Users struggle to estimate based on some imaginary “average” week and will often, instead, recall outlier weeks, which are more memorable.

The best way to phrase a question like this is to ask for specific, recent memories : Approximately how many times did you use this product in the past 7 days? It is important to include the word approximately and to allow for ranges rather than exact numbers. Reporting an exact count of a past behavior is often either challenging or impossible, so asking for it introduces imprecise data. It can also make respondents more likely to drop off if they feel incapable of answering the question accurately.


4. Focus on Closed-Ended Questions

Surveys are, at their core, a quantitative research method. They rely upon closed-ended questions (e.g., multiple-choice or rating-scale questions) to generate quantitative data. Surveys can also leverage open-ended questions (e.g., short-answer or long-answer questions) to generate qualitative data. That said, the best surveys rely upon closed-ended questions, with a smattering of open-ended questions to provide additional qualitative color and support to the mostly quantitative data.

If you find that your questionnaire relies too heavily on open-ended questions, it might be a red flag that another qualitative-research method (e.g., interviews) may serve your research aims better.

On the subject of open-ended survey questions, it is often wise to include one broad open-ended question at the end of your questionnaire . Many respondents will have an issue or piece of feedback in mind when they start a survey, and they’re simply waiting for the right question to come up. If no such question exists, they may end the survey experience with a bad taste. A final, optional, long-answer question with a prompt like Is there anything else you’d like to share? can help to alleviate this frustration and supply some potentially valuable data.

5. Avoid Double-Barreled Questions

A double-barreled question asks respondents to answer two things at once. For example: How easy and intuitive was this website to use? Easy and intuitive, while related, are not synonymous, and, therefore, the question is asking the respondent to use a single rating scale to assess the website on two distinct dimensions simultaneously. By necessity, the respondent will either pick one of these words to focus on or try to assess both and estimate a midpoint "average" score. Neither of these will generate fully accurate or reliable data.

Therefore, double-barreled questions should always be avoided and, instead, split up into two separate questions.


6. Use Balanced Scales

Rating-scale questions are tremendously valuable in generating quantitative data in survey design. Often, a respondent is asked to rate their agreement with a statement on an agreement scale (e.g., Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree), or otherwise to rate something using a scale of adjectives (e.g., Excellent, Good, Neutral, Fair, Poor).

You’ll notice that, in both of the examples given above, there is an equal number of positive and negative options (2 each), surrounding a neutral option. The equal number of positive and negative options means that the response scale is balanced and eliminates a potential source of bias or error.

In an unbalanced scale, you’ll see an unequal number of positive and negative options (e.g., Excellent, Very Good, Good, Poor, Very Poor ). This example contains 3 positive options and only 2 negative ones. It, therefore, biases the participant to select a positive option.


7. Answer Options Should Be All-Inclusive and Mutually Exclusive

Answer options for a multiple-choice question should include all possible answers (i.e., be all-inclusive) and should not overlap (i.e., be mutually exclusive). For example, consider the following question:

[Example: an age question whose answer options overlap at the boundaries and stop at 50]

In this formulation, some possible answers are skipped (i.e., anyone who is over 50 won’t be able to select an answer). Additionally, some answers overlap (e.g., a 20-year-old could select either the first or second response).

Always double-check your numeric answer options to ensure that all numbers are included and none are repeated.
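To make that double-check concrete, here is a minimal sketch in Python of how numeric answer options could be validated for gaps and overlaps; the function name, bounds, and option ranges are illustrative, not part of any survey tool:

```python
# Minimal sketch: validate that numeric answer options are all-inclusive
# and mutually exclusive. Ranges and bounds are hypothetical.

def check_options(options, lowest=0, highest=120):
    """options: sorted list of inclusive (min, max) tuples, e.g. ages."""
    problems = []
    if options[0][0] > lowest:
        problems.append(f"values below {options[0][0]} are not covered")
    if options[-1][1] < highest:
        problems.append(f"values above {options[-1][1]} are not covered")
    for (a_min, a_max), (b_min, b_max) in zip(options, options[1:]):
        if b_min <= a_max:
            problems.append(f"{a_min}-{a_max} and {b_min}-{b_max} overlap")
        elif b_min > a_max + 1:
            problems.append(f"values {a_max + 1}-{b_min - 1} are skipped")
    return problems

# The flawed pattern described above: overlapping ranges that stop at 50.
print(check_options([(10, 20), (20, 30), (30, 40), (40, 50)]))
```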

8. Provide an Opt-Out

No matter how carefully and inclusively you craft your questions, there will always be respondents for whom none of the available answers are acceptable. Maybe they are an edge case you hadn't considered. Maybe they don't remember the answer. Or maybe they simply don't want to answer that particular question. Always provide an opt-out answer in these cases to avoid bad data.

Opt-out answers can include things like the following: Not applicable , None of the above , I don’t know, I don’t recall , Other , or Prefer not to answer . Any multiple-choice question should include at least one of these answers. However, avoid the temptation to include one catch-all opt-out answer containing multiple possibilities . For example, an option labeled I don’t know / Not applicable  covers two very different responses with different meanings; combining them fogs your data.

9. Allow Most Questions to Be Optional

It is so tempting to make questions required in a questionnaire. After all, we want the data! However, the choice to make any individual question required will likely lead to one of two unwanted results:

  • Bad Data: If a respondent is unable to answer a question accurately, but the question is required, they may select an answer at random. These types of answers will be impossible to detect and will introduce bad data into your study, in the form of random-response bias.
  • Dropoffs: The other option available to a participant unable to correctly answer a required question is to abandon the questionnaire. This behavior will increase the effort needed to reach the desired number of responses.

Therefore, before deciding to make any question required, consider if doing so is worth the risks of bad data and dropoffs.

10. Respect Your Respondents

In the field of user experience, we like to say that we are user advocates. That doesn't just mean advocating for user needs when it comes to product decisions. It also means respecting our users any time we're fortunate enough to interact with them.

Don’t Assume Negativity

This is particularly important when discussing health issues or disability. Phrasings such as "Do you suffer from hypertension?" may be perceived as offensive. Instead, use objective wording, such as "Do you have hypertension?"

Be Sensitive with Sensitive Topics

When asking about any topics that may be deemed sensitive, private, or offensive, first ask yourself: Does it really need to be asked? Often, we can get plenty of valuable information while omitting that topic.

Other times, it is necessary to delve into potentially sensitive topics. In these cases, be sure to choose your wording carefully. Ensure you’re using the current preferred terminology favored by members of the population you’re addressing. If necessary, consider providing a brief explanation for why you are asking about that particular topic and what benefit will come from responding.

Use Inclusive and Appropriate Wording for Demographic Questions

When asking about topics such as race, ethnicity, sex, or gender identity, use accurate and sensitive terminology. For example, it is no longer appropriate to offer a simple binary option for gender questions. At a minimum, a third option indicating an Other or Non-binary category is expected, as well as an opt-out answer for those that prefer not to respond.

An inclusive question is respectful of your users’ identities and allows them to answer only if they feel comfortable.


A quick guide to survey research

Ann R Coll Surg Engl. 2013 Jan; 95(1). University of Cambridge and Cambridge University Hospitals NHS Foundation Trust, UK.

Questionnaires are a very useful survey tool that allow large populations to be assessed with relative ease. Despite a widespread perception that surveys are easy to conduct, in order to yield meaningful results, a survey needs extensive planning, time and effort. In this article, we aim to cover the main aspects of designing, implementing and analysing a survey as well as focusing on techniques that would improve response rates.

Medical research questionnaires or surveys are vital tools used to gather information on individual perspectives in a large cohort. Within the medical realm, there are three main types of survey: epidemiological surveys, surveys on attitudes to a health service or intervention and questionnaires assessing knowledge on a particular issue or topic. 1


Clear research goal

The first and most important step in designing a survey is to have a clear idea of what you are looking for. It will always be tempting to take a blanket approach and ask as many questions as possible in the hope of getting as much information as possible. This type of approach does not work as asking too many irrelevant or incoherent questions reduces the response rate 2 and therefore reduces the power of the study. This is especially important when surveying physicians as they often have a lower response rate than the rest of the population. 3 Instead, you must carefully consider the important data you will be using and work on a ‘need to know’ rather than a ‘would be nice to know’ model. 4

After considering the question you are trying to answer, deciding whom you are going to ask is the next step. With small populations, attempting to survey them all is manageable but as your population gets bigger, a sample must be taken. The size of this sample is more important than you might expect. After lost questionnaires, non-responders and improper answers are taken into account, this sample must still be big enough to be representative of the entire population. If it is not big enough, the power of your statistics will drop and you may not get any meaningful answers at all. It is for this reason that getting a statistician involved in your study early on is absolutely crucial. Data should not be collected until you know what you are going to do with them.
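As an illustration of what that calculation involves, here is a minimal sketch in Python of the standard sample-size formula for estimating a proportion; the confidence level, margin of error, and assumed response rate are hypothetical defaults, and a statistician should tailor them to your actual study design:

```python
# Minimal sketch of a common sample-size formula for estimating a
# proportion. Illustrative only; involve a statistician for the
# calculation that fits your study.
import math

def sample_size(z=1.96, p=0.5, e=0.05):
    """z: z-score for the confidence level (1.96 for 95%);
    p: expected proportion (0.5 is the most conservative choice);
    e: acceptable margin of error."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

n = sample_size()                  # 385 completed responses needed
invitations = math.ceil(n / 0.40)  # inflate for an assumed 40% response rate
```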

Directed questions

After settling on your research goal and beginning to design a questionnaire, the main considerations are the method of data collection, the survey instrument and the type of question you are going to ask. Methods of data collection include personal interviews, telephone, postal or electronic ( Table 1 ).

Table 1. Advantages and disadvantages of survey methods

  • Personal. Advantages: allows complex questions, visual aids can be used, higher response rates. Disadvantages: expensive, time inefficient, requires training to avoid bias.
  • Telephone. Advantages: allows clarification, larger radius than personal, less expensive and less time consuming, higher response rates. Disadvantages: no visual aids, difficult to develop rapport.
  • Postal. Advantages: larger target, visual aids (although limited). Disadvantages: non-response, time needed for data compilation, lower response rates.
  • Electronic. Advantages: larger target, visual aids, quick response, quick data compilation. Disadvantages: non-response, not all subjects accessible, lower response rates.

Collected data are only useful if they convey information accurately and consistently about the topic in which you are interested. This is where a validated survey instrument comes into questionnaire design. Validated instruments are those that have been extensively tested and are correctly calibrated to their target. They can therefore be assumed to be accurate. 1 It may be possible to modify a previously validated instrument but you should seek specialist advice as this is likely to reduce its power. Examples of validated models are the Beck Hopelessness Scale 5 or the Addenbrooke’s Cognitive Examination. 6

The next step is choosing the type of question you are going to ask. The questionnaire should be designed to answer the question you want answered. Each question should be clear, concise and without bias. Normalising statements should be included and the language level targeted towards those at the lowest educational level in your cohort. 1 You should avoid open, double-barrelled questions and those questions that include negative items and assign causality. 1 The questions you use may elicit either an open (free text answer) or closed response. Open responses are more flexible but require more time and effort to analyse, whereas closed responses require more initial input in order to exhaust all possible options but are easier to analyse and present.

Questionnaire

Two more aspects come into questionnaire design: aesthetics and question order. While this is not relevant to telephone or personal questionnaires, in self-administered surveys the aesthetics of the questionnaire are crucial. Having spent a large amount of time fine-tuning your questions, presenting them in such a way as to maximise response rates is pivotal to obtaining good results. Visual elements to think of include smooth, simple and symmetrical shapes, soft colours and repetition of visual elements. 7

Once you have attracted your subject’s attention and willingness with a well designed and attractive survey, the order in which you put your questions is critical. To do this you should focus on what you need to know; start by placing easier, important questions at the beginning, group common themes in the middle and keep questions on demographics to near the end. The questions should be arrayed in a logical order, questions on the same topic close together and with sensible sections if long enough to warrant them. Introductory and summary questions to mark the start and end of the survey are also helpful.

Pilot study

Once a completed survey has been compiled, it needs to be tested. The ideal next step should highlight spelling errors, ambiguous questions and anything else that impairs completion of the questionnaire. 8 A pilot study, in which you apply your work to a small sample of your target population in a controlled setting, may highlight areas in which work still needs to be done. Where possible, being present while the pilot is going on will allow a focus group-type atmosphere in which you can discuss aspects of the survey with those who are going to be filling it in. This step may seem non-essential but detecting previously unconsidered difficulties needs to happen as early as possible and it is important to use your participants’ time wisely as they are unlikely to give it again.

Distribution and collection

While it should be considered quite early on, we will now discuss routes of survey administration and ways to maximise results. Questionnaires can be self-administered electronically or by post, or administered by a researcher by telephone or in person. The advantages and disadvantages of each method are summarised in Table 1 . Telephone and personal surveys are very time and resource consuming whereas postal and electronic surveys suffer from low response rates and response bias. Your route should be chosen with care.

Methods for maximising response rates for self-administered surveys are listed in Table 2 , taken from a Cochrane review.2 The differences between methods of maximising responses to postal or e-surveys are considerable but common elements include keeping the questionnaire short and logical as well as including incentives.

Table 2. Methods for improving response rates in postal and electronic questionnaires 2

  • Postal: monetary or non-monetary incentives, a teaser on the envelope, pre-notification, follow-up with another copy included, handwritten addresses, university sponsorship, recorded delivery, an included return envelope, avoiding sensitive questions.
  • Electronic: non-monetary incentives, personalised questionnaires, including pictures, not including ‘survey’ in the subject line, a male signature, a white background, a short questionnaire, an offer of results, a statement that others have responded.
Key points:

  • Involve a statistician early on.
  • Run a pilot study to uncover problems.
  • Consider using a validated instrument.
  • Only ask what you ‘need to know’.
  • Consider guidelines on improving response rates.

The collected data will come in a number of forms depending on the method of collection. Data from telephone or personal interviews can be directly entered into a computer database whereas postal data can be entered at a later stage. Electronic questionnaires can allow responses to go directly into a computer database. Problems arise from errors in data entry and when questionnaires are returned with missing data fields. As mentioned earlier, it is essential to have a statistician involved from the beginning for help with data analysis. He or she will have helped to determine the sample size required to ensure your study has enough power. The statistician can also suggest tests of significance appropriate to your survey, such as Student’s t-test or the chi-square test.
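As a minimal sketch of the two tests named above, here is how they might be run in Python with SciPy; all of the response data are hypothetical:

```python
# Minimal sketch of the two significance tests mentioned above,
# applied to made-up survey responses.
from scipy import stats

# Student's t-test: compare mean scale scores between two groups.
group_a = [4, 5, 3, 4, 5, 4]
group_b = [2, 3, 3, 4, 2, 3]
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Chi-square test: association between two categorical responses,
# e.g., rows = two respondent groups, columns = yes/no answer counts.
observed = [[30, 10],
            [20, 25]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
```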

Conclusions

Survey research is a unique way of gathering information from a large cohort. Advantages of surveys include having a large population and therefore a greater statistical power, the ability to gather large amounts of information and having the availability of validated models. However, surveys are costly, there is sometimes discrepancy in recall accuracy and the validity of a survey depends on the response rate. Proper design is vital to enable analysis of results and pilot studies are critical to this process.

How to write a survey introduction that will inspire people to participate


What is a survey introduction—and what is its purpose?

  1. The importance of a compelling introduction
  2. Understand the audience
  3. Personalization
  4. Clear and concise language
  5. Survey timing
  6. Incentives and rewards
  7. Privacy and data security
  8. Contact information
  9. Testing and feedback
  10. Adapting to different survey types
  11. Visual appeal
  12. A/B testing
  13. Follow-up surveys
  14. Compliance with ethical guidelines
  15. Analyzing introduction performance
  16. Continuous improvement

The article then provides a survey introduction template and example introductions for customer satisfaction surveys, market surveys, student surveys, employee surveys, research paper surveys, and survey reports, followed by additional tips for creating the best survey introduction.

Creating a good introduction for a survey is a crucial part of successful research. Its quality greatly impacts the research process and improves the end result, including survey completion rates and response accuracy.

A questionnaire introduction provides the chance to introduce yourself and the topic being explored to respondents. It is also a chance to assure them that their personal information will be kept safe and explain how they will benefit from completing the survey.

This article explores how to write a survey introduction, discusses its importance, and provides valuable, ready-to-use questionnaire introduction examples.

A questionnaire introduction is a short body of text appearing on the user’s screen at the beginning of a survey. It is the first contact point between you and potential participants prior to respondents seeing any of the survey questions .

This block of text sets up the level of cooperation that will be forthcoming from the person reading it. You need to convince them to participate by providing valuable information about the survey.

This includes the research topic, the expected time it will take to complete the survey, how responses will be processed, and why it’s in someone’s interest to take the time to complete it. The survey introduction can be in the body of an email or on the first slide of the survey.

Based on the introduction, potential respondents will decide whether to participate in the survey. It is an overall description of the survey, the equivalent of the abstract in a dissertation or other research paper.

How to write survey introduction text well

After spending days or even months making the perfect survey, you probably know it like the back of your hand. However, it’s important to take time to better understand a respondent’s initial reaction to it; they may not be familiar with the topic at all.

As with every stage of the survey-making process, respondents’ perspectives have to be kept in mind and efforts undertaken to make their experience easy and worthwhile.

Here are 16 simple steps for writing engaging survey introduction text.

1. The importance of a compelling introduction

The introduction in survey questionnaires serves as the gateway to a successful survey. A compelling one not only grabs the attention of respondents but also sets the tone for the entire surveying process. A well-framed introduction ensures that respondents understand the purpose and relevance of the survey, making them more inclined to complete it. Essentially, a thoughtful introduction can heavily influence the overall response rate and the quality of data collected.

2. Understand the audience

Every survey is designed for a specific demographic or audience. Understanding them, and what drives them, allows for a tailored introduction that resonates. For instance, a survey meant for teenagers requires a different tone and approach than one aimed at senior citizens. By empathizing with the audience’s perspective, one can craft an introduction that speaks directly to their interests and motivations.

3. Personalization

In today’s digital age, consumers appreciate distinctive touches. Personalizing a survey introduction, whether through addressing the respondent by name or referring to past interactions, adds a layer of authenticity. It gives the respondent a feeling of being valued and recognized, which can translate into a higher likelihood of participation.

4. Clear and concise language

Clarity is paramount in any communication. A good introduction for a questionnaire is vital in ensuring that respondents immediately understand the survey’s purpose and what’s expected of them. Avoid industry jargon or overly complex sentences. Instead, opt for straightforward and concise language that communicates the essentials without overwhelming or confusing respondents.

5. Survey timing

Timing can be a determining factor in the success of a survey. For instance, sending a customer satisfaction survey immediately after a purchase or service experience ensures the interaction is fresh in the respondent’s mind, leading to more accurate and detailed feedback. On the other hand, ill-timed surveys may come across as irrelevant or intrusive.

6. Incentives and rewards

Motivation is a powerful tool. Offering respondents a tangible incentive—whether it’s a discount, gift card, or entry into a prize draw—can significantly boost participation rates. However, it’s essential that these incentives are relevant and appealing to the target audience and then delivered as promised.

7. Privacy and data security

With increasing concerns about data privacy, assuring respondents that their information’s safety is non-negotiable is vital. An introduction should clearly outline the measures taken to protect personal information and how the data being collected in the survey will be used. Being transparent about compliance with regulations like GDPR will instill confidence and trust in respondents.

8. Contact information

Including contact details in the survey introduction can be a game-changer. It not only offers a channel for respondents to voice concerns or seek clarifications but also communicates transparency and openness. This proactive approach can lead to increased trust and a willingness to participate.

9. Testing and feedback

Like any piece of content, an introduction for a questionnaire benefits from testing. Running it by a small group—preferably from the target demographic—and seeking feedback can highlight potential areas for improvement. This iterative process ensures the introduction is optimized for its main audience.

10. Adapting to different survey types

Different surveys serve different purposes and their introductions should reflect this variance. An employee feedback survey will require a different tone and set of assurances than a market research questionnaire. Tailoring the introduction to the survey’s unique context ensures that it will resonate with potential respondents.

11. Visual appeal

The aesthetics of a survey introduction can influence a respondent’s decision to proceed. Utilizing a clean, intuitive design incorporating on-brand colors and images can create an inviting and professional first impression. It’s essential to ensure the design enhances the content—as opposed to distracting from it.

12. A/B testing

Refinement is the key to perfection. A/B testing, in which two different introductions are presented to separate groups of respondents, can provide insights into which one performs better. This data-driven approach ensures that the introduction is continually optimized based on real-world feedback.
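A minimal sketch of how such an A/B comparison might be evaluated, assuming hypothetical response counts and a chi-square test in Python (SciPy):

```python
# Minimal sketch: compare the response rates of two survey introductions
# from an A/B test. All counts are hypothetical.
from scipy import stats

# [started the survey, ignored the invitation] for each introduction
intro_a = [180, 820]
intro_b = [230, 770]
chi2, p, dof, expected = stats.chi2_contingency([intro_a, intro_b])

if p < 0.05:
    print("Response rates differ significantly; adopt the better intro.")
else:
    print("No significant difference detected; keep testing or refining.")
```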

13. Follow-up surveys

Gathering feedback is an ongoing process. Follow-up surveys, sent after the initial one, can delve deeper into specific topics or measure changes in opinions over time. However, their introduction needs to acknowledge the prior interaction and explain the rationale for a subsequent survey.

14. Compliance with ethical guidelines

Conducting surveys isn’t just about gathering data, it’s about doing so ethically and responsibly. Ethical considerations, including informed consent and participant rights, should be highlighted in the introduction. This ensures participants are aware of their privileges and fosters a culture of respect.

15. Analyzing introduction performance

After deploying a survey, it’s crucial to evaluate the introduction’s efficacy. By examining metrics like response rate, drop-off rate, and feedback, insights can be gained regarding the introduction’s strengths and the areas needing improvement. This analysis forms the foundation for future refinements.

16. Continuous improvement

The art of crafting survey introductions is one of continuous learning. As markets evolve and respondents’ preferences change, so should the survey approach. By staying adaptive and open to feedback, researchers can ensure their introductions remain effective and engaging.

Survey introduction example: a template for any type of research

Based on the checklist above, here is a survey introduction email example that fulfills all the requirements and can serve as the perfect first contact with potential respondents.

  • Hey there, we would like to hear about your recent customer service experience!
  • At [company name], your satisfaction is what we value most. By participating in our survey, you will help us improve our products and offer you even better service.
  • This five-question survey takes only ten minutes to complete and is available until the 28th of November.
  • It is anonymous. The data gathered will only be used internally to improve our future customer service strategies.
  • Click below to access the survey. If you have any additional questions, feel free to contact us at support@company.com . We appreciate your feedback!

The wording of a questionnaire introduction and the information that is included can differ based on the field of research. Check out our survey introduction examples and choose an introduction sample best suited to your needs.

Introduction to a customer satisfaction survey

A customer satisfaction survey introduction is likewise an important part of customer experience research. The wording will have a huge impact on whether customers will take the time to answer—or just ignore it.

If surveying recent customer experience, send a survey shortly after customers purchased a product or had contact with the customer support team while the experience is still fresh in their mind.

Stay true to your company’s tone of voice and let respondents know that you appreciate their patronage. An incentive that encourages them to participate can also be offered. Here is a survey intro example:

Thank you for shopping at [company name]! We would like to ask you a few questions to learn about your shopping experience.

This survey will take only a couple of minutes and will be very valuable for improving the services we offer to you. The responses you give will stay anonymous.

Click below to participate, which will get you 30 percent off your next order!

Introduction to a market survey

Market research surveys are conducted to get more information about the situation in a specific economic sector and provide valuable real-time insights into the needs of a target audience and how the competition is doing.

Conducting product surveys can help improve existing products or make adjustments before releasing new products or services. Simply put, market research surveys help expand and grow a business.

When doing this kind of research, it is important to determine the target audience. If they are not yet customers, they may not be familiar with your brand, so make sure to introduce it properly and explain why they have been chosen for this research. Here is an example:

  • Nice to meet you! We are [company name], and we are working on bringing affordable [your products] to the world.
  • Our company aims to develop the best possible products for our customers, and we need your opinion to make this happen.
  • Wondering why we chose you? We are looking for [describe your target audience], which makes you a perfect fit.
  • We would appreciate it if you took the time to answer this five-minute survey. It is anonymous, and your data will be used only for this research.
  • Click below to fill out our survey and get 10 percent off our newest collection!

Student survey introduction sample

Student surveys are an important part of education surveys. With them, feedback is garnered from students regarding teachers, courses, curriculum, extracurricular activities, and much more.

Measuring students’ satisfaction levels helps highlight the strengths and weaknesses of a school, which in turn helps improve decision-making. However, in order to get accurate responses, certain steps are required, including how the introduction is written.

When making surveys for students, ensure they are anonymous. Many students may be afraid of retaliation, which will make them reluctant to give honest opinions.

Emphasize their anonymity in the introduction. Explain why this research is being carried out and how the gathered data will be used. Here is an example of a student questionnaire survey introduction:

  • Thank you for being one of our students at [name of your school]. Please take the time to answer this short five-minute survey and let us know how satisfied you are with your chosen courses from this semester.
  • This survey is anonymous, so feel free to answer honestly. It will very much improve the accuracy of our data and help us improve the curriculum as best as possible.

Introduction to an employee survey

Conducting human resource surveys can greatly improve a workplace, which will result in happier and more productive employees. Find out about the work-life balance of employees and the overall company culture, measure the motivation and engagement of employees, and learn how satisfied they are with their jobs.

When writing the survey introduction, focus on the same aspects as above. Emphasize that the survey is anonymous and communicate this openly to employees. This will encourage them to share their honest opinions and help gather valuable and accurate responses.

Introduction for a research paper survey

Some research papers require conducting surveys on a particular topic. Writing a research questionnaire introduction for a research paper is no different than writing one for the previously mentioned purposes.

Introduce yourself and the topic to respondents and explain the purpose of the research and the benefit to them for participating. Include other information about the survey that you think is needed, though make sure to not overdo it. Keep it short and simple for high survey completion rates.

Introduction to a survey report

Writing a survey report is one of the seven steps of conducting survey research. It is the last one after the data analysis and is crucial to presenting findings.

A survey report introduction is very important for the success of a report. Its purpose is to introduce readers or listeners to the topic and the ultimate findings of the research.

The same advice applies: keep it short, use simple language, and incorporate only the most important information.

And above all, put yourself in the shoes of the audience. Unlike you, they have not been spending months with the survey and supporting material.

Additional tips for creating the best survey introduction

Good survey introductions help increase response rates and gain respondents’ trust. They are a perfect way for respondents to get to know you better, as well as the research topic and the ways they can benefit from it.

Here are some additional tips to create the best survey introductions, regardless of the topic of your research:

  • Make the survey anonymous and make sure respondents are aware of that.
  • Add a logo to the survey to increase brand recognition.
  • Don’t forget to keep the tone of voice on-brand.
  • If brand identity allows it, use a familiar tone.
  • Offer incentives for survey completion.
  • Thank the respondents for completing the survey.

Of course, before writing a survey introduction, you need to create the survey. With our help, amazing questionnaires can be made in no time. Sign up to Survey Planet today, create a survey for free, and add a well-written introduction using our tips!



13.1 Writing effective survey questions and questionnaires

Learning objectives.

Learners will be able to…

  • Describe some of the ways that survey questions might confuse respondents and how to word questions and responses clearly
  • Create mutually exclusive, exhaustive, and balanced response options
  • Define fence-sitting and floating
  • Describe the considerations involved in constructing a well-designed questionnaire
  • Discuss why pilot testing is important

In the previous chapter, we reviewed how researchers collect data using surveys. Guided by their sampling approach and research context, researchers should choose the survey approach that provides the most favorable tradeoffs in strengths and challenges. With this information in hand, researchers need to write their questionnaire and revise it before beginning data collection. Each method of delivery requires a questionnaire, but they vary a bit based on how they will be used by the researcher. Since phone surveys are read aloud, researchers will pay more attention to how the questionnaire sounds than how it looks. Online surveys can use advanced tools to require the completion of certain questions, present interactive questions and answers, and otherwise afford greater flexibility in how questionnaires are designed. As you read this chapter, consider how your method of delivery impacts the type of questionnaire you will design.


Start with operationalization

The first thing you need to do to write effective survey questions is identify what exactly you wish to know. As silly as it sounds to state what seems so completely obvious, we can’t stress enough how easy it is to forget to include important questions when designing a survey. Begin by looking at your research question and refreshing your memory of the operational definitions you developed for those variables from Chapter 11. You should have a pretty firm grasp of your operational definitions before starting the process of questionnaire design. You may have taken those operational definitions from other researchers’ methods, found established scales and indices for your measures, or created your own questions and answer options.

TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

STOP! Make sure you have a complete operational definition for the dependent and independent variables in your research question. A complete operational definition contains the variable being measured, the measure used, and how the researcher interprets the measure. Let’s make sure you have what you need from Chapter 11 to begin writing your questionnaire.

List all of the dependent and independent variables in your research question.

  • It’s normal to have one dependent or independent variable. It’s also normal to have more than one of either.
  • Make sure that your research question (and this list) contain all of the variables in your hypothesis. Your hypothesis should only include variables from your research question.

For each variable in your list:

  • If you don’t have questions and answers finalized yet, write a first draft and revise it based on what you read in this section.
  • If you are using a measure from another researcher, you should be able to write out all of the questions and answers associated with that measure. If you only have the name of a scale or a few questions, you need access to the full text and some documentation on how to administer and interpret it before you can finish your questionnaire.
  • For example, an interpretation might be “there are five 7-point Likert scale questions…point values are added across all five items for each participant…and scores below 10 indicate the participant has low self-esteem” (a scoring sketch follows this list)
  • Don’t introduce other variables into the mix here. All we are concerned with is how you will measure each variable by itself. The connection between variables is done using statistical tests, not operational definitions.
  • Detail any validity or reliability issues uncovered by previous researchers using the same measures. If you have concerns about validity and reliability, note them, as well.
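As promised in the example above, here is a minimal sketch in Python of how that hypothetical interpretation could be applied during analysis; the five-item sum and the cutoff of 10 come straight from the example and are illustrative only:

```python
# Minimal sketch of the hypothetical interpretation above: five 7-point
# Likert items are summed per participant, and totals below 10 are read
# as indicating low self-esteem. The cutoff is illustrative only.

def score_participant(answers):
    """answers: the participant's five responses, each an integer 1-7."""
    assert len(answers) == 5 and all(1 <= a <= 7 for a in answers)
    total = sum(answers)
    return total, total < 10  # (score, flagged as low self-esteem)

print(score_participant([1, 2, 1, 3, 2]))  # (9, True)
```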

TRACK 2 (IF YOU  AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS)

You are interested in researching the decision-making processes of parents of elementary-aged children during the beginning of the COVID-19 pandemic in 2020. Specifically, you want to know if and how parents’ socioeconomic class impacted their decisions about whether to send their children to school in-person or instead opt for online classes or homeschooling.

  • Create a working research question for this topic.
  • What is the dependent variable in this research question? The independent variable? What other variables might you want to control?

For the independent variable, dependent variable, and at least one control variable from your list:

  • What measure (the specific question and answers) might you use for each one? Write out a first draft based on what you read in this section.

If you completed the exercise above and listed out all of the questions and answer choices you will use to measure the variables in your research question, you have already produced a pretty solid first draft of your questionnaire! Congrats! In essence, questionnaires are all of the self-report measures in your operational definitions for the independent, dependent, and control variables in your study arranged into one document and administered to participants. There are a few questions on a questionnaire (like name or ID#) that are not associated with the measurement of variables. These are the exception, and it’s useful to think of a questionnaire as a list of measures for variables. Of course, researchers often use more than one measure of a variable (i.e., triangulation ) so they can more confidently assert that their findings are true. A questionnaire should contain all of the measures researchers plan to collect about their variables by asking participants to self-report.

Sticking close to your operational definitions is important because it helps you avoid an everything-but-the-kitchen-sink approach that includes every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your participants to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you actually plan to use in your analysis. For each question in your questionnaire, ask yourself how this question measures a variable in your study. An operational definition should contain the questions, response options, and how the researcher will draw conclusions about the variable based on participants’ responses.


Writing questions

So, almost all of the questions on a questionnaire are measuring some variable. For many variables, researchers will create their own questions rather than using one from another researcher. This section will provide some tips on how to create good questions to accurately measure variables in your study. First, questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. As I’ve mentioned earlier, your survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve. Pilot testing the questionnaire with friends or colleagues can help identify these issues. This process is commonly called pretesting, but to avoid any confusion with pretesting in experimental design, we refer to it as pilot testing.

Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experienced the events, behaviors, or feelings you are asking them to report. If you are asking participants for second-hand knowledge—asking clinicians about clients’ feelings, asking teachers about students’ feelings, and so forth—you may want to clarify that the variable you are asking about is the key informant’s perception of what is happening in the target population. A well-planned sampling approach ensures that participants are the most knowledgeable population to complete your survey.

If you decide that you do wish to include questions about matters with which only a portion of respondents will have had experience, make sure you know why you are doing so. For example, if you are asking about MSW student study patterns, and you decide to include a question on studying for the social work licensing exam, you may only have a small subset of participants who have begun studying for the graduate exam or took the bachelor’s-level exam. If you decide to include this question that speaks to a minority of participants’ experiences, think about why you are including it. Are you interested in how studying for class and studying for licensure differ? Are you trying to triangulate study skills measures? Researchers should carefully consider whether questions relevant to only a subset of participants are likely to produce enough valid responses for quantitative analysis.

Many times, questions that are relevant to a subsample of participants are conditional on an answer to a previous question. A participant might select that they rent their home, and as a result, you might ask whether they carry renter’s insurance. That question is not relevant to homeowners, so it would be wise not to ask them to respond to it. In that case, the question of whether someone rents or owns their home is a filter question , designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Figure 13.1 presents an example of how to accomplish this on a paper survey by adding instructions to the participant that indicate what question to proceed to next based on their response to the first one. Using online survey tools, researchers can use filter questions to only present relevant questions to participants.

Figure 13.1 An example of a filter question: answering “yes” routes the respondent to additional follow-up questions.
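Online survey tools apply this branching automatically; here is a minimal sketch of the same filter-question logic in Python, reusing the hypothetical renter’s-insurance example above:

```python
# Minimal sketch of filter-question logic: the follow-up question is
# shown only to respondents for whom it is relevant. Wording is
# illustrative, mirroring the renter's example above.

def administer_survey():
    responses = {}
    responses["housing"] = input("Do you rent or own your home? ").strip().lower()
    if responses["housing"] == "rent":
        # The filter question routed only renters here; owners skip it.
        responses["renters_insurance"] = input(
            "Do you carry renter's insurance? (yes/no) ").strip().lower()
    return responses
```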

Researchers should eliminate questions that ask about things participants don’t know to minimize confusion. Assuming the question is relevant to the participant, other sources of confusion come from how the question is worded. The use of negative wording can be a source of potential confusion. Taking the question from Figure 13.1 about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This is a double negative, and it’s not clear how to answer the question accurately. It is a good idea to avoid negative phrasing, when possible. For example, “did you not drink alcohol during your first semester of college?” is less clear than “did you drink alcohol your first semester of college?”

Another issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience imaginary audience , they would find it difficult to link those words to the concepts from David Elkind’s theory. The words you use in your questions must be understandable to your participants. If you find yourself using jargon or slang, break it down into terms that are more universal and easier to understand.

Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question . Figure 13.2 shows a double-barreled question. Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also more boring than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.

Figure 13.2 A double-barreled question asks more than one thing at a time.

Another thing to avoid when constructing survey questions is the problem of social desirability . We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 11. )

Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college for our research project. We all know that cheating on exams is generally frowned upon (at least I hope we all know this). So, it may be difficult to get people to admit to cheating on a survey. But if you can guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) [1] offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Evaluate the following:

  • Is the question too general? Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how well they liked a certain book and provided a response scale ranging from “not at all” to “extremely well”, and that person selected “extremely well,” what do they mean? Instead, ask more specific behavioral questions, such as “Will you recommend this book to others?” or “Do you plan to read other books by the same author?” 
  • Is the question too detailed? Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.
  • Is the question presumptuous? Does your question make assumptions? For instance, if you ask, “what do you think the benefits of a tax cut would be?” you are presuming that the participant sees the tax cut as beneficial. But many people may not view tax cuts as beneficial. Some might see tax cuts as a precursor to less funding for public schools and fewer public services such as police, ambulance, and fire department. Avoid questions with built-in presumptions.
  • Does the question ask the participant to imagine something? Is the question imaginary? A popular question on many television game shows is “if you won a million dollars on this show, how will you plan to spend it?” Most participants have never been faced with this large amount of money and have never thought about this scenario. In fact, most don’t even know that after taxes, the value of the million dollars will be greatly reduced. In addition, some game shows spread the amount over a 20-year period. Without understanding this “imaginary” situation, participants may not have the background information necessary to provide a meaningful response.

Try to step outside your role as researcher for a second, and imagine you were one of your participants. Use the prompts above to evaluate your draft questions from the previous exercise.

Cultural considerations

When researchers write items for questionnaires, they must be conscientious to avoid culturally biased questions that may be inappropriate or difficult for certain populations.

[insert information related to asking about demographics and how this might make some people uncomfortable based on their identity(ies) and how to potentially address]

You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). When I first moved to southwest Virginia, I didn’t know what a holler was. Where I grew up in New Jersey, to holler means to yell. Even then, in New Jersey, we shouted and screamed, but we didn’t holler much. In southwest Virginia, my home at the time, a holler also means a small valley in between the mountains. If I used holler in that way on my survey, people who live near me may understand, but almost everyone else would be totally confused.

Testing questionnaires before using them

Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.

In sum, in order to pose effective survey questions, researchers should do the following:

  • Identify how each question measures an independent, dependent, or control variable in their study.
  • Keep questions clear and succinct.
  • Make sure respondents have relevant lived experience to provide informed answers to your questions.
  • Use filter questions to avoid getting answers from uninformed participants.
  • Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, and pose more than one question at a time.
  • Imagine how respondents would feel responding to questions.
  • Get feedback, especially from people who resemble those in the researcher’s sample.

Table 13.1 offers one model for writing effective questionnaire items.

Table 13.1 The BRUSO model of writing effective questionnaire items, with examples from a perceptions of gun ownership questionnaire

Criterion | Poor item | Effective item
B (Brief) | “Are you now or have you ever been the possessor of a firearm?” | “Have you ever possessed a firearm?”
R (Relevant) | “Who did you vote for in the last election?” | (Include only items that are relevant to your study.)
U (Unambiguous) | “Are you a gun person?” | “Do you currently own a gun?”
S (Specific) | “How much have you read about the new gun control measure and sales tax?” | “How much have you read about the new sales tax on firearm purchases?”
O (Objective) | “How much do you support the beneficial new gun control measure?” | “What is your view of the new gun control measure?”

Let’s complete a first draft of your questions.

  • In the first exercise, you wrote out the questions and answers for each measure of your independent and dependent variables. Evaluate each question using the criteria listed above on effective survey questions.
  • Type out questions for your control variables and evaluate them, as well. Consider what response options you want to offer participants.

Now, let’s revise any questions that do not meet your standards!

  • Use the BRUSO model in Table 13.1 for an illustration of how to address deficits in question wording. Keep in mind that you are writing a first draft in this exercise; in real research, it will take a few drafts and revisions before your questions are ready to distribute to participants.


Writing response options

While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people completing your questionnaire. Generally, respondents will be asked to choose a single (or best) response to each question you pose. We call questions in which the researcher provides all of the response options closed-ended questions. Keep in mind, closed-ended questions can also instruct respondents to choose multiple response options, rank response options against one another, or assign a percentage to each response option. But be cautious when experimenting with different response options! Accepting multiple responses to a single question may add complexity when it comes to quantitatively analyzing and interpreting your data.

Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher. This is particularly important for mixed-methods research. It is possible to analyze open-ended response options quantitatively using content analysis (i.e., counting how often a theme is represented in a transcript and looking for statistical patterns). However, for most researchers, qualitative data analysis will be needed to analyze open-ended questions, and researchers need to think through how they will analyze any open-ended questions as part of their data analysis plan. Open-ended questions cannot be operationally defined because you don’t know what responses you will get. We will address qualitative data analysis in greater detail in Chapter 19.
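
A small illustration of the counting step may help. This is a minimal sketch, assuming the researcher has already hand-coded each open-ended answer with one or more themes; the theme names and data are invented:

```python
# Count how often each (hypothetical) theme code appears across
# hand-coded open-ended responses.
from collections import Counter

# Each open-ended answer has been coded with one or more themes.
coded_responses = [
    ["friendship", "skills"],
    ["skills"],
    ["resume", "friendship"],
    ["friendship"],
]

theme_counts = Counter(theme for codes in coded_responses for theme in codes)
print(theme_counts.most_common())
# [('friendship', 3), ('skills', 2), ('resume', 1)]
```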

To write effective response options for closed-ended questions, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 13.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 13.1, we have covered all possibilities: those who drank, say, an average of once per month can choose the first response option (“less than one time per week”), while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options, and every respondent fits into one of the response options we provided.
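
To make these two checks concrete, here is a minimal sketch in Python. The bin boundaries, labels, and pilot data are invented for illustration and only loosely mirror the drinking-frequency example:

```python
import pandas as pd

# Response options as non-overlapping bins that together cover 0 to
# infinity (drinks per week), so they are mutually exclusive and exhaustive.
bins = [0, 1, 3, 5, 7, float("inf")]
labels = [
    "less than one time per week",
    "1-2 times per week",
    "3-4 times per week",
    "5-6 times per week",
    "7+ times per week",
]

# Fabricated pilot responses.
drinks_per_week = pd.Series([0.5, 2, 4, 7, 12])

# pd.cut assigns each value to exactly one bin; because the bins do not
# overlap and cover the whole range, every respondent fits one and only
# one response option.
coded = pd.cut(drinks_per_week, bins=bins, labels=labels, right=False)
print(coded)
```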

Earlier in this section, we discussed double-barreled questions. Response options can also be double-barreled, and this should be avoided. Figure 13.3 is an example of a question that uses double-barreled response options. Other tips about questions are also relevant to response options, including ensuring that participants are knowledgeable enough to select or decline a response option and avoiding jargon and cultural idioms.

Figure 13.3: Double-barreled response options provide more than one answer for each option.

Even if you phrase questions and response options clearly, participants are influenced by how many response options are presented on the questionnaire. For Likert scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

Because we have four rankings of likely and only one ranking of unlikely, the scale is unbalanced and most responses will be biased toward “likely” rather than “unlikely.” A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

In this example, the midpoint is halfway between likely and unlikely. Of course, a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Fence-sitters are respondents who choose neutral response options even if they have an opinion. Some people will be drawn to respond “no opinion” even when they have one, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion.

As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose one side or another (e.g., agree or disagree) without a middle option (e.g., neither agree nor disagree) or to not include an option like “don’t know enough to say” or “not applicable.” There is no always-correct solution to either problem. But in general, including a middle option provides a more exhaustive set of response options than excluding one.

Rating scales

The number of response options on a typical rating scale is usually five or seven, though it can range from three to 11. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). [2] Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and long or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 10.1 is a visual-analog scale, on which participants make a mark somewhere along the horizontal line to indicate the magnitude of their response.
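
Because respondents see verbal labels while the analysis uses numbers, the conversion step can be as simple as the sketch below. The -3 to +3 coding for the bipolar liking scale is one common convention, not the only option:

```python
# Map verbal bipolar-scale labels to numeric codes for analysis.
LIKING_CODES = {
    "Dislike very much": -3,
    "Dislike somewhat": -2,
    "Dislike slightly": -1,
    "Neither like nor dislike": 0,
    "Like slightly": 1,
    "Like somewhat": 2,
    "Like very much": 3,
}

responses = ["Like somewhat", "Neither like nor dislike", "Dislike slightly"]
numeric = [LIKING_CODES[r] for r in responses]
print(numeric)  # [2, 0, -1]
```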

Finalizing Response Options

The most important check before you finalize your response options is to align them with your operational definitions. As we’ve discussed before, your operational definitions include your measures (questions and response options) as well as how to interpret those measures in terms of the variable being measured. In particular, you should be able to interpret all response options to a question based on your operational definition of the variable it measures. If you wanted to measure the variable “social class,” you might ask one question about a participant’s annual income and another about family size. Your operational definition would need to provide clear instructions on how to interpret response options. Your operational definition is basically like this social class calculator from Pew Research, though they include a few more questions in their definition.

To drill down a bit more: as Pew specifies in the section titled “how the income calculator works,” the interval/ratio data respondents enter are interpreted using a formula that combines a participant’s responses to the four questions Pew poses and categorizes their household into one of three classes: upper, middle, or lower. So, the operational definition includes the four questions comprising the measure and the formula, or interpretation, which converts responses into the three familiar categories: lower, middle, and upper class.

It’s perfectly normal for operational definitions to change levels of measurement, and it’s also perfectly normal for the level of measurement to stay the same. The important thing is that each response option a participant can provide is accounted for by the operational definition. Throw any combination of family size, location, or income at the Pew calculator, and it will define you into one of those three social class categories.
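
As a rough illustration (emphatically not Pew’s actual formula), here is what an operational definition of this kind can look like in code. The income cutoffs and the household-size adjustment are invented for the example:

```python
# A toy operational definition of "social class": every possible
# (income, household size) pair maps to exactly one category.
def social_class(household_income: float, household_size: int) -> str:
    # Hypothetical adjustment of income for household size.
    adjusted = household_income / (household_size ** 0.5)
    if adjusted < 30_000:        # hypothetical cutoff
        return "lower"
    elif adjusted < 90_000:      # hypothetical cutoff
        return "middle"
    return "upper"

# Like the Pew calculator, any combination of inputs yields one of the
# three categories; no response is left uninterpreted.
print(social_class(55_000, 3))  # "middle" under these made-up cutoffs
```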

Unlike Pew’s definition, the operational definitions in your study may not need their own webpage to define and describe. For many questions and answers, interpreting response options is easy. If you were measuring “income” instead of “social class,” you could simply operationalize the term by asking people to list their total household income before taxes are taken out. Higher values indicate higher income, and lower values indicate lower income. Easy. Regardless of whether your operational definitions are simple or more complex, every response option to every question on your survey (with a few exceptions) should be interpretable using an operational definition of a variable. Just like we want to avoid an everything-but-the-kitchen-sink approach to questions on our questionnaire, you want to make sure your final questionnaire only contains response options that you will use in your study.

One note of caution on interpretation (sorry for repeating this). We want to remind you again that an operational definition should not mention more than one variable. In our example above, your operational definition could not say “a family of three making under $50,000 is lower class; therefore, they are more likely to experience food insecurity.” That last clause about food insecurity may well be true, but it’s not part of the operational definition for social class. Each variable (food insecurity and class) should have its own operational definition. If you are talking about how to interpret the relationship between two variables, you are talking about your data analysis plan. We will discuss how to create your data analysis plan beginning in Chapter 14. For now, one consideration is that depending on the statistical test you use to test relationships between variables, you may need nominal, ordinal, or interval/ratio data. Your questions and response options should provide the level of measurement required by the statistical tests in your data analysis plan. Once you finalize your data analysis plan, return to your questionnaire to confirm that the level of measurement matches the statistical tests you’ve chosen.

In summary, to write effective response options researchers should do the following:

  • Avoid wording that is likely to confuse respondents, including double negatives, culturally specific terms or jargon, and double-barreled response options.
  • Ensure response options are relevant to participants’ knowledge and experience so they can make an informed and accurate choice.
  • Present mutually exclusive and exhaustive response options.
  • Consider fence-sitters and floaters, and the use of neutral or “not applicable” response options.
  • Define how response options are interpreted as part of an operational definition of a variable.
  • Check that the level of measurement matches the operational definitions and the statistical tests in the data analysis plan (once you develop one).

Look back at the response options you drafted in the previous exercise. Make sure you have a first draft of response options for each closed-ended question on your questionnaire.

  • Using the criteria above, evaluate the wording of the response options for each question on your questionnaire.
  • Revise your questions and response options until you have a complete first draft.
  • Do your first read-through and provide a dummy answer to each question. Make sure you can link each response option and each question to an operational definition.


From this discussion, we hope it is clear why researchers using quantitative methods spell out all of their plans ahead of time. Ultimately, there should be a straight line from operational definition through measures on your questionnaire to the data analysis plan. If your questionnaire includes response options that are not aligned with operational definitions or not included in the data analysis plan, the responses you receive back from participants won’t fit with your conceptualization of the key variables in your study. If you do not fix these errors and proceed with collecting unstructured data, you will lose out on many of the benefits of survey research and face overwhelming challenges in answering your research question.


Designing questionnaires

Based on your work in the previous section, you should have a first draft of the questions and response options for the key variables in your study. Now, you’ll also need to think about how to present your written questions and response options to survey respondents. It’s time to write a final draft of your questionnaire and make it look nice. Designing questionnaires takes some thought. First, consider the route of administration for your survey. What we cover in this section applies equally to paper and online surveys, but if you are planning to use online survey software, you should watch tutorial videos and explore the features of the survey software you will use.

Informed consent & instructions

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000). [3] One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus, the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and other ethical considerations we covered in Chapter 6. Written consent forms are not always used in survey research (when the research poses minimal risk, completion of the survey instrument is often accepted by the IRB as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

Organizing items to be easy and intuitive to follow

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last. This can be because they are easy to answer in the event respondents have become tired or bored, because they are least interesting to participants, or because they can raise concerns for respondents from marginalized groups who may see questions about their identities as a potential red flag. Of course, any survey should end with an expression of appreciation to the respondent.

Questions are often organized thematically. If our survey were measuring social class, perhaps we’d have a few questions asking about employment, others focused on education, and still others on housing and community resources. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about parents’ income and then present a series of questions about estimated future income. Grouping by theme is one way to be deliberate about how you present your questions. Keep in mind that you are surveying people, and these people will be trying to follow the logic in your questionnaire. Jumping from topic to topic can give people a bit of whiplash and may make participants less likely to complete it.

Using a matrix is a nice way of streamlining response options for similar questions. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 13.4.

Figure 13.4: A sample matrix question using an agree/disagree scale to ask opinions about social class.

Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). [4] In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, if your survey deals with a very sensitive topic, such as child sexual abuse or criminal convictions, you don’t want to scare respondents away or shock them by beginning with your most intrusive questions.

Your participants are human. They will react emotionally to questionnaire items, and they will also try to uncover your research questions and hypotheses. In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research. When feasible, you should consult with key informants from your target population to determine how best to order your questions. If it is not feasible to do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions. None of your decisions will be perfect, and all studies have limitations.

Questionnaire length

You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.

Second, and perhaps more important, how long are respondents likely to be willing to spend completing your questionnaire? If you are studying college students, asking them to use their very limited time to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you ask them to complete your survey during down-time between classes and there is little work to be done, students may be willing to give you a bit more of their time. Think about places and times that your sampling frame naturally gathers and whether you would be able to either recruit participants or distribute a survey in that context. Estimate how long your participants would reasonably have to complete a survey presented to them during this time. The more you know about your population (such as what weeks have less work and more free time), the better you can target questionnaire length.

The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie 2010), [5] whereas others suggest that up to 20 minutes is acceptable (Hopper, 2010). [6] As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire. For example, if you planned to distribute your questionnaire to students in between classes, you will need to make sure it is short enough to complete before the next class begins.

When designing a questionnaire, a researcher should consider:

  • Weighing strengths and limitations of the method of delivery, including the advanced tools in online survey software or the simplicity of paper questionnaires.
  • Grouping together items that ask about the same thing.
  • Moving any questions about sensitive items to the end of the questionnaire, so as not to scare respondents off.
  • Moving any questions that engage the respondent to answer the questionnaire at the beginning, so as not to bore them.
  • Timing the length of the questionnaire with a reasonable length of time you can ask of your participants.
  • Dedicating time to visual design and ensuring the questionnaire looks professional.

Type out a final draft of your questionnaire in a word processor or online survey tool.

  • Take a look at the question drafts you have completed and decide on an order for your questions.
  • Evaluate your draft questionnaire using the guidelines above, revise it, and get it ready to share with other student researchers.


Pilot testing and revising questionnaires

A good way to estimate the time it will take respondents to complete your questionnaire (and other potential challenges) is through pilot testing . Pilot testing allows you to get feedback on your questionnaire so you can improve it before you actually administer it. It can be quite expensive and time consuming if you wish to pilot test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pilot testing with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By pilot testing your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time pilot testers as they take your survey. This will give you a good idea about the estimate to provide respondents when you administer your survey and whether you have some wiggle room to add additional items or need to cut a few items.

Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. If you are using an online survey, ensure that participants can complete it via mobile, computer, and tablet devices. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions. While online survey tools automate much of visual design, word processors are designed for writing all kinds of documents and may need more manual adjustment as part of visual design.

Realistically, your questionnaire will continue to evolve as you develop your data analysis plan over the next few chapters. By now, you should have a complete draft of your questionnaire grounded in an underlying logic that ties each question and response option to a variable in your study. Once your questionnaire is finalized, you will need to submit it for ethical approval from your IRB. If your study requires IRB approval, it may be worthwhile to submit your proposal before your questionnaire is completely done: revisions to IRB protocols are common, and it takes less time for the board to review a few changes to questions and answers than to review the entire study, so submit the whole study as soon as you can. Once the IRB approves your questionnaire, you cannot change it without their approval.

Key Takeaways

  • A questionnaire is composed of self-report measures of variables in a research study.
  • Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
  • Effective survey questions and responses take careful construction by researchers, as participants may be confused or otherwise influenced by how items are phrased.
  • The questionnaire should start with informed consent and instructions, flow logically from one topic to the next, engage but not shock participants, and thank participants at the end.
  • Pilot testing can help identify any issues in a questionnaire before distributing it to participants, including language or length issues.

It’s a myth that researchers work alone! Get together with a few of your fellow students and swap questionnaires for pilot testing.

  • Use the criteria in each section above (questions, response options, questionnaires) and provide your peers with the strengths and weaknesses of their questionnaires.
  • See if you can guess their research question and hypothesis based on the questionnaire alone.
  • What are the strengths and limitations of your questionnaire as compared to those of your peers?
  • Is there anything you would like to use from your peers’ questionnaires in your own?
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
  • Krosnick, J.A. & Berent, M.K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941-964. ↵
  • Peterson, R. A. (2000).  Constructing effective questionnaires . Thousand Oaks, CA: Sage. ↵
  • Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth; Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley; Neuman, W. L. (2003). Social research methods: Qualitative and quantitative approaches (5th ed.). Boston, MA: Pearson. ↵
  • Babbie, E. (2010). The practice of social research  (12th ed.). Belmont, CA: Wadsworth. ↵
  • Hopper, J. (2010). How long should a survey be? Retrieved from  http://www.verstaresearch.com/blog/how-long-should-a-survey-be ↵

Glossary

Operational definition: according to the APA Dictionary of Psychology, “a description of something in terms of the operations (procedures, actions, or processes) by which it could be observed and measured. For example, the operational definition of anxiety could be in terms of a test score, withdrawal from a situation, or activation of the sympathetic nervous system. The process of creating an operational definition is known as operationalization.”

Triangulation of data: the use of multiple types, measures, or sources of data in a research project to increase the confidence that we have in our findings.

Pilot testing: testing out your research materials in advance on people who are not included as participants in your study.

Filter questions: items on a questionnaire designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample.

Double-barreled question: a question that asks more than one thing at a time, making it difficult to respond accurately.

Social desirability: when a participant answers in a way that they believe is socially the most acceptable answer.

Response options: the answers researchers provide to participants to choose from when completing a questionnaire.

Closed-ended questions: questions in which the researcher provides all of the response options.

Open-ended questions: questions for which the researcher does not include response options, allowing respondents to answer the question in their own words.

Fence-sitters: respondents to a survey who choose neutral response options, even if they have an opinion.

Floaters: respondents to a survey who choose a substantive answer to a question when really they don’t understand the question or don’t have an opinion.

Data analysis plan: an ordered outline that includes your research question, a description of the data you are going to use to answer it, and the exact analyses, step by step, that you plan to run to answer your research question.

Informed consent: a process through which the researcher explains the research process, procedures, risks, and benefits to a potential participant, usually through a written document that the participant then signs as evidence of their agreement to participate.

Matrix: a type of survey question that lists a set of questions for which the response options are all the same, in a grid layout.


Creating a Questionnaire


What is a Questionnaire?

Definition: A questionnaire is a convenient way to collect feedback. A questionnaire can be used to measure customer satisfaction, capture employee feedback, or even conduct product research. Responses can be collected via email, web link, QR code, or using a survey panel.

The term "survey" and "questionnaire" are commonly used interchangeably. A questionnaire refers to the questions used to collect feedback (the form itself). A survey relates to the entire research process, including summarizing and analyzing questionnaire data.

Getting Started + Tips

How to make a questionnaire: Keep questions short and focused on one topic at a time. Use multiple-choice questions to fit answers into a specific category. Use an open-ended question to capture comments. A Likert scale or MaxDiff question can be used for market research. Collect responses for your questionnaire using an email collector, an anonymous link, or even a QR code.

The following 6 tips will help you create the perfect questionnaire:

1) Use 10 Questions or Fewer

The shorter you keep your survey, the higher your completion rates. Longer questionnaires tend to have higher drop-off rates. Keeping your survey to 10 questions or fewer forces you to draft a study that includes only important questions; remove trivial questions during the draft process.

2) One Idea Per Question

Make sure each question covers only one topic. For example, in an employee survey, you would not want to ask, "Do you feel satisfied with your compensation and career advancement?" Instead, separate "compensation" and "career advancement" into two questions, or use a Likert scale, putting each topic on a separate row.

3) Group Similar Questions Together

If the survey is more than ten questions, group similar questions on separate pages. If you don't want to use more than one page, add extra spacing between groups of questions; extra white space can increase the readability of your questionnaire.

4) Use Skip/Display Logic

If you have questions that only apply to certain people, consider using skip or display logic to show those questions conditionally. This will help reduce the length of your survey and boost response rates. For example, if you asked, "Are you currently looking for new employment opportunities?" and the answer were "yes," a follow-up question would ask, "Why?"
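
Here is a minimal sketch of how that skip logic behaves, written as plain Python purely for illustration; real survey platforms configure this through their own logic builders rather than code:

```python
def run_interview() -> dict:
    """Ask a filter question, then show the follow-up only when it applies."""
    answers = {}
    answers["seeking_new_job"] = input(
        "Are you currently looking for new employment opportunities? (yes/no) "
    ).strip().lower()

    # The follow-up is displayed only to respondents who answered "yes";
    # everyone else skips it, keeping the survey short.
    if answers["seeking_new_job"] == "yes":
        answers["reason"] = input("Why? ")

    return answers

# answers = run_interview()
```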

5) Use Research Questions Like MaxDiff

Research questions are an excellent tool for customer or product questionnaires. Instead of asking multiple questions on which features are essential or what price is desirable, question types like MaxDiff and Conjoint will provide you with high-quality, actionable data that can be used for feature prioritization and product pricing. In addition, these question types will reduce the length of your questionnaire.

6) Keep the Audience in Mind

An employee questionnaire should use an anonymous link to collect responses; this will help boost trust and increase honest answers. If doing a customer study, consider adding custom data to the weblink to help identify responses. A survey panel and current customers can lend fresh perspectives for general market research.

Questionnaire Types

Adding customer surveys to your Google review strategy provides additional data points to improve customer satisfaction. In addition, surveys are a valuable tool to identify ways to improve, establish internal benchmarks, and conduct pricing and product research to improve your company's products.

While there are numerous types of questionnaires (or survey types), these are the five most common general categories:

1) Customer Satisfaction

Capturing customer feedback is one of the most common uses of questionnaires. A good customer satisfaction survey will always revolve around a Net Promoter Score question. When the Net Promoter Score question results are tallied, one number from -100 to 100 is displayed. This number is ideal for benchmarking. Net Promoter provides quick and actionable feedback when combined with an open-ended text question.

2) Customer Effort

Measuring how easily customers can complete a purchase or take a specific action is crucial for the customer experience strategy. A customer effort score question is a rating scale from 1 to 7 (disagree to agree). Results for this question are averaged; the higher the score, the easier it is for your customers to complete tasks.
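
As a quick illustration, here is a minimal sketch computing both scores from fabricated ratings, following the standard formulas described above (percent promoters minus percent detractors for NPS; a simple mean for customer effort):

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: % promoters - % detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def customer_effort(scores):
    """Customer Effort Score: the mean of 1-7 agreement ratings."""
    return sum(scores) / len(scores)

print(round(nps([10, 9, 9, 7, 6, 2]), 1))       # 16.7
print(round(customer_effort([6, 7, 5, 4]), 2))  # 5.5
```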

3) Employee Satisfaction & Engagement

Employee satisfaction and engagement are often used interchangeably but measure different things. Both types of surveys often use opinion scales to ask questions.

Employee satisfaction measures how satisfied employees are with their job and work environment. Standard measures of employee satisfaction include salary, benefits, and co-worker relationships.

Employee engagement relates to the emotional commitment employees have to an organization. It goes beyond simple satisfaction. Standard measures of engagement include belief in the company mission, opportunities for career growth, and being inspired to perform at a high level.

4) Employee Exit Interviews

When employees leave for new opportunities, sending a questionnaire is a great way to understand why that employee is leaving. The feedback obtained here can be used to improve the workplace and reduce employee turnover.

5) Product Research

MaxDiff is used to identify what is most important to your audience. For example, if building a new mobile application, asking a group of users what they think is least and most important will help guide product strategy; your team should only focus on the important areas.

For pricing a new product, Van Westendorp will give you the range of prices the market is willing to pay. Without a question like this, you could price your product too high or too low, reducing your market penetration.

Collecting Responses For Your Questionnaire

There are a few different ways to collect feedback for questionnaires. Depending on your needs, each one could have an advantage.

Email Distribution

With email distribution, you would upload a list of email addresses, and the platform would automatically place a link to your questionnaire inside the email body. One advantage is the ability to send email reminders to respondents who have not yet completed your survey. In addition, the email links are unique for each respondent, so you can track email open and click rates. As a result, email surveys are ideal for customer research.

Web Link

A web link is a convenient, flexible way to collect feedback. You can place a web link on social media, your website, or even inside your CRM email program (instead of an email collector with a unique link for each person). Custom data, such as store location, can be included in the link and used to segment and filter results.

Anonymous Link

When you want to protect your respondents' identities, use an anonymous link. Anonymous links do not store respondent information, IP addresses, or email addresses. Because of this, anonymous survey links are perfect for employee surveys.

QR Code Surveys

QR code surveys can be placed on paper receipts, product packaging, or flyers. In addition, QR codes are a great way to collect feedback after or during an event or even during in-person focus groups.

Survey Panels

If you're conducting market research and need access to a customer base, using a survey panel will get you the responses required. A good survey panel will allow you to target specific demographics, job titles, or interest levels (such as car enthusiasts). When using survey panels, you'll want to double-check and clean your data for low-quality responses. People who speed through your survey or mark the first answer for all questions should be removed.
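
Here is a rough sketch of that cleaning step using pandas; the column names, the 120-second cutoff, and the straightlining rule are all invented for illustration:

```python
import pandas as pd

# Fabricated panel responses: completion time plus three 1-5 ratings.
df = pd.DataFrame({
    "seconds_to_complete": [512, 48, 390, 61, 275],
    "q1": [3, 1, 4, 1, 2],
    "q2": [2, 1, 4, 1, 5],
    "q3": [4, 1, 3, 1, 1],
})

question_cols = ["q1", "q2", "q3"]
too_fast = df["seconds_to_complete"] < 120           # hypothetical cutoff
straightlined = df[question_cols].eq(1).all(axis=1)  # first option everywhere

clean = df[~(too_fast | straightlined)]
print(len(df) - len(clean), "low-quality responses removed")  # 2
```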

How to Analyze Questionnaire Data

When analyzing the data from a questionnaire, consider a few advanced techniques like the ones below. These techniques will give you better insights than just simple graphs and charts.

Cross-Tabulation

Creating a segment or a cross-tabulation is the easiest way to dive deeper into your results. For example, if you conducted an employee satisfaction survey, the overall scores for the company could be high, but that might tell only part of the story. If your company has multiple departments, create a cross-tabulation for each department. You might notice one department with notably low scores or one with notably high scores.
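
For example, here is a minimal cross-tabulation sketch using pandas, with fabricated data mirroring the department example:

```python
import pandas as pd

df = pd.DataFrame({
    "department": ["Sales", "Sales", "Support", "Support", "Engineering"],
    "satisfied":  ["yes",   "no",    "yes",     "yes",     "no"],
})

# Rows are departments, columns are answers; normalize="index" shows each
# department's share of yes/no rather than raw counts.
print(pd.crosstab(df["department"], df["satisfied"], normalize="index"))
```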

Benchmarking

If your company conducted its first Net Promoter Score survey and the result was -10, that score would be your benchmark. Each subsequent customer survey should be compared against that initial number, with the goal of improving it each time.

TURF Analysis

This is an advanced but very valuable research technique. TURF analysis stands for "Total Unduplicated Reach and Frequency" and is used to find the combination of items that provides the highest reach. For example, suppose you ask, "Which of the following flavors of ice cream would you buy?" If you run a TURF analysis on the results, you can find the top 3 or 4 combinations of flavors that would result in the highest sales.
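
To make the idea concrete, here is a brute-force TURF sketch on fabricated data; production TURF tools add frequency weighting and handle much larger item sets:

```python
from itertools import combinations

# Each respondent's set of flavors they would buy.
respondents = [
    {"vanilla", "chocolate"},
    {"chocolate"},
    {"strawberry"},
    {"vanilla", "strawberry"},
    {"mint"},
]
flavors = set().union(*respondents)

def reach(combo):
    """Number of respondents who would buy at least one flavor in combo."""
    return sum(1 for r in respondents if r & set(combo))

# Find the pair of flavors with the highest unduplicated reach.
best = max(combinations(sorted(flavors), 2), key=reach)
print(best, reach(best))  # ('chocolate', 'strawberry') reaches 4 of 5
```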

Unsure Where to Start?

Creating a questionnaire can be a challenging process. However, these three suggestions can help you with the perfect questionnaire strategy.

1) Talk With Your Team

Some departments might want to conduct pricing research, while others want to run simple Net Promoter Score surveys. Having your organization aligned on strategy will simplify the process and eliminate re-work. An aligned strategy also means a shorter study with fewer overlapping questions.

2) Start with a Template

A pre-made template will show you how to format and word questions. Try multiple templates to understand the various question types.

3) Look at Competitor Surveys

You might notice competitors asking specific questions - this would be a sign that those questions provide valuable metrics. If you can incorporate the great things your competition does while making it more efficient for respondents, your questionnaire campaigns will have a greater chance of success.

Get Started Now

We have you covered on everything from customer surveys and employee surveys to market research. Get started and create your first survey for free.


How to Write a Research Proposal (with Examples & Templates)

Before conducting a study, researchers should create a research proposal that outlines their plans and methodology, to be submitted to the relevant evaluating organization or person. Creating a research proposal is an important step to ensure that researchers are on track and moving forward as intended. A research proposal can be defined as a detailed plan or blueprint for the research you intend to undertake. It provides readers with a snapshot of your project by describing what you will investigate, why it is needed, and how you will conduct the research.

Your research proposal should aim to convince readers that your research is relevant and original, that you understand the context and current scenario in the field, that you have the appropriate resources to conduct the research, and that the research is feasible given the usual constraints.

This article will describe in detail the purpose and typical structure of a research proposal , along with examples and templates to help you ace this step in your research journey.  

What is a Research Proposal ?  

A research proposal¹,² can be defined as a formal report that describes your proposed research, its objectives, methodology, implications, and other important details. Research proposals are the framework of your research and are used to obtain approvals or grants to conduct the study from various committees or organizations. Consequently, research proposals should convince readers of your study’s credibility, accuracy, achievability, practicality, and reproducibility.

With research proposals , researchers usually aim to persuade the readers, funding agencies, educational institutions, and supervisors to approve the proposal. To achieve this, the report should be well structured with the objectives written in clear, understandable language devoid of jargon. A well-organized research proposal conveys to the readers or evaluators that the writer has thought out the research plan meticulously and has the resources to ensure timely completion.  

Purpose of Research Proposals  

A research proposal is a sales pitch and therefore should be detailed enough to convince your readers, who could be supervisors, ethics committees, universities, etc., that what you’re proposing has merit and is feasible. Research proposals can help students discuss their dissertation with their faculty or fulfill course requirements and also help researchers obtain funding. A well-structured proposal instills confidence among readers about your ability to conduct and complete the study as proposed.

Research proposals can be written for several reasons:³  

  • To describe the importance of research in the specific topic  
  • Address any potential challenges you may encounter  
  • Showcase knowledge in the field and your ability to conduct a study  
  • Apply for a role at a research institute  
  • Convince a research supervisor or university that your research can satisfy the requirements of a degree program  
  • Highlight the importance of your research to organizations that may sponsor your project  
  • Identify implications of your project and how it can benefit the audience  

What Goes in a Research Proposal?    

Research proposals should aim to answer the three basic questions—what, why, and how.  

The What question should be answered by describing the specific subject being researched. It should typically include the objectives, the cohort details, and the location or setting.  

The Why question should be answered by describing the existing scenario of the subject, listing unanswered questions, identifying gaps in the existing research, and describing how your study can address these gaps, along with the implications and significance.  

The How question should be answered by describing the proposed research methodology, data analysis tools expected to be used, and other details to describe your proposed methodology.   

Research Proposal Example  

Here is a research proposal sample template (with examples) from the University of Rochester Medical Center.⁴ The sections in all research proposals are essentially the same, although different terminology and other specific sections may be used depending on the subject.

Research Proposal Template

Structure of a Research Proposal  

If you want to know how to make a research proposal impactful, include the following components:¹  

1. Introduction  

This section provides a background of the study, including the research topic, what is already known about it and the gaps, and the significance of the proposed research.  

2. Literature review  

This section contains descriptions of all the previous relevant studies pertaining to the research topic. Every study cited should be described in a few sentences, starting with the general studies to the more specific ones. This section builds on the understanding gained by readers in the Introduction section and supports it by citing relevant prior literature, indicating to readers that you have thoroughly researched your subject.  

3. Objectives  

Once the background and gaps in the research topic have been established, authors must now state the aims of the research clearly. Hypotheses should be mentioned here. This section further helps readers understand what your study’s specific goals are.  

4. Research design and methodology  

Here, authors should clearly describe the methods they intend to use to achieve their proposed objectives. Important components of this section include the population and sample size, data collection and analysis methods and duration, statistical analysis software, measures to avoid bias (randomization, blinding), etc.  

5. Ethical considerations  

This refers to the protection of participants’ rights, such as the right to privacy, right to confidentiality, etc. Researchers need to obtain informed consent and institutional review approval by the required authorities and mention this clearly for transparency.  

6. Budget/funding  

Researchers should prepare their budget and include all expected expenditures. An additional allowance for contingencies such as delays should also be factored in.  

7. Appendices  

This section typically includes information that supports the research proposal and may include informed consent forms, questionnaires, participant information, measurement tools, etc.  

8. Citations  

This section lists all the sources cited in the proposal, formatted according to the citation style required by the evaluating authority.


Important Tips for Writing a Research Proposal  

Writing a research proposal begins much before the actual task of writing. Planning the research proposal structure and content is an important stage, which if done efficiently, can help you seamlessly transition into the writing stage. 3,5  

The Planning Stage  

  • Manage your time efficiently. Plan to have the draft version ready at least two weeks before your deadline and the final version at least two to three days before the deadline.
  • Before you begin writing, ask yourself some key questions about your proposed research:
  • What is the primary objective of your research?  
  • Will your research address any existing gap?  
  • What is the impact of your proposed research?  
  • Do people outside your field find your research applicable in other areas?  
  • If your research is unsuccessful, would there still be other useful research outcomes?  

The Writing Stage

  • Create an outline with main section headings that are typically used.  
  • Focus only on writing and getting your points across without worrying about the format of the research proposal, grammar, punctuation, etc. These can be fixed during the subsequent passes. Add details to each section heading you created in the beginning.
  • Ensure your sentences are concise and use plain language. A research proposal usually contains about 2,000 to 4,000 words or four to seven pages.  
  • Don’t use too many technical terms and abbreviations assuming that the readers would know them. Define the abbreviations and technical terms.  
  • Ensure that the entire content is readable. Avoid using long paragraphs because they affect the continuity in reading. Break them into shorter paragraphs and introduce some white space for readability.  
  • Focus on only the major research issues and cite sources accordingly. Don’t include generic information or their sources in the literature review.  
  • Proofread your final document to ensure there are no grammatical errors so readers can enjoy a seamless, uninterrupted read.  
  • Use academic, scholarly language because it brings formality into a document.  
  • Ensure that your title is created using the keywords in the document and is neither too long and specific nor too short and general.  
  • Cite all sources appropriately to avoid plagiarism.  
  • Make sure that you follow guidelines, if provided. This includes rules as simple as using a specific font or a hyphen or en dash between numerical ranges.  
  • Ensure that you’ve answered all questions requested by the evaluating authority.  

Key Takeaways   

Here’s a summary of the main points about research proposals discussed in the previous sections:  

  • A research proposal is a document outlining the details of a proposed study, created by researchers for submission to evaluators such as research institutions, universities, or faculty.
  • Research proposals are usually about 2,000-4,000 words long, though this depends on the evaluating authority’s guidelines.
  • A good research proposal shows that you’ve done your background research and assessed the feasibility of the study.
  • Research proposals have six main sections: introduction, literature review, objectives, methodology, ethical considerations, and budget.


Frequently Asked Questions  

Q1. How is a research proposal evaluated?  

A1. In general, most evaluators, including universities, broadly use the following criteria to evaluate research proposals. 6

  • Significance — Does the research address an important subject or issue, which may or may not be specific to the evaluator or university?
  • Content and design — Is the proposed methodology appropriate to answer the research question? Are the objectives clear and well aligned with the proposed methodology?
  • Sample size and selection — Is the target population or cohort size clearly stated? Is the sampling process used to select participants randomized, appropriate, and free of bias?
  • Timing — Are the proposed data collection dates clearly stated? Is the project feasible given the specified resources and timeline?
  • Data management and dissemination — Who will have access to the data? What is the plan for data analysis?

Q2. What is the difference between the Introduction and Literature Review sections in a research proposal?

A2. The Introduction or Background section in a research proposal sets the context of the study by describing the current scenario of the subject and identifying the gaps and need for the research. A Literature Review, on the other hand, provides references to all prior relevant literature to help corroborate the gaps identified and the research need.  

Q3. How long should a research proposal be?  

A3. Research proposal lengths vary with the evaluating authority, such as a university or committee, and with the subject. Here’s a table listing typical research proposal lengths at a few universities (where the source did not preserve a university’s name, the cell is left blank).

| University | Program | Length (words) |
| --- | --- | --- |
| University of Birmingham | Arts programs | 1,000-1,500 |
| University of Birmingham | Law School programs | 2,500 |
| University of Birmingham | PhD | 2,500 |
|  |  | 2,000 |
|  | Research degrees | 2,000-3,500 |

Q4. What are the common mistakes to avoid in a research proposal ?  

A4. Here are a few common mistakes to avoid while writing a research proposal. 7

  • No clear objectives: Objectives should be clear, specific, and measurable so readers can easily understand them.
  • Incomplete or unconvincing background research: Background research usually includes a review of the current state of the field as well as of the previous literature on the subject. It helps readers understand your reasons for undertaking the research: the gaps you identified in existing work.
  • Overlooking project feasibility: The project scope and estimates should be realistic given the available resources and time.
  • Neglecting the impact and significance of the study: Readers and evaluators look for the implications of your research and how it contributes to existing work. Always include this information.
  • Unstructured format: A well-structured document assures evaluators that you have read the guidelines carefully and are organized in your approach, and therefore that you can carry out the research as proposed.
  • Ineffective writing style: The language should be formal and grammatically correct. If required, consult editors or AI-based tools such as Paperpal to refine the proposal’s structure and language.

Thus, a research proposal is an essential document that can help you promote your research and secure the funds and grants needed to conduct it. It should be written in clear language and include all essential details to convince evaluators of your ability to carry out the research as proposed.

This article has described the important components of a research proposal and offered tips to improve your writing. We hope they help you write a well-structured proposal that secures the grants or approvals you seek.

References  

  1. Sudheesh K, Duggappa DR, Nethra SS. How to write a research proposal? Indian J Anaesth. 2016;60(9):631-634. Accessed July 15, 2024. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037942/
  2. Writing research proposals. Harvard College Office of Undergraduate Research and Fellowships, Harvard University. Accessed July 14, 2024. https://uraf.harvard.edu/apply-opportunities/app-components/essays/research-proposals
  3. What is a research proposal? Plus how to write one. Indeed. Accessed July 17, 2024. https://www.indeed.com/career-advice/career-development/research-proposal
  4. Research proposal template. University of Rochester Medical Center. Accessed July 16, 2024. https://www.urmc.rochester.edu/MediaLibraries/URMCMedia/pediatrics/research/documents/Research-proposal-Template.pdf
  5. Tips for successful proposal writing. Johns Hopkins University. Accessed July 17, 2024. https://research.jhu.edu/wp-content/uploads/2018/09/Tips-for-Successful-Proposal-Writing.pdf
  6. Formal review of research proposals. Cornell University. Accessed July 18, 2024. https://irp.dpb.cornell.edu/surveys/survey-assessment-review-group/research-proposals
  7. 7 mistakes you must avoid in your research proposal. Aveksana (via LinkedIn). Accessed July 17, 2024. https://www.linkedin.com/pulse/7-mistakes-you-must-avoid-your-research-proposal-aveksana-cmtwf/




Attention Buyer: Not All Legal AI Models Are Created Equal

Legal Gen AI: Uncover the Best Solution for Your Firm


A 2024 LexisNexis survey of managing partners and C-suite leaders at major law firms and Fortune 1000 companies found that nearly all legal executives (90%) expect their investment in Generative Artificial Intelligence (Gen AI) technologies to increase over the next five years. The survey also found that roughly one-half (53%) of Law360® Leaderboard Pulse firms have already purchased Gen AI tools, including both general-purpose and legal practice-specific AI solutions.

In addition to the variety of general-purpose AI tools available to businesses and consumers, the past year has also seen the emergence of commercially available Legal AI tools — Gen AI tools tailored specifically for the legal profession. Many legal professionals are trying to figure out how they can take advantage of the opportunities afforded by this new technology while minimizing the risks that are well-documented in the industry.

We recently published “ The Definitive Guide to Choosing a Gen AI Legal Research Solution ,” a free buyer’s guide that details what law firms and in-house legal departments should look for so they can select the most appropriate solution for their organization. Over the course of the next several weeks, we will be publishing a series of blog posts that are drawn from the five pillars addressed in the buyer’s guide.

We kick off the series today by unpacking the importance of the Gen AI model itself. Before proceeding on the journey to acquire a Legal AI solution, it’s important to understand that not all Gen AI models are created equal.

Strengths and Weaknesses of LLMs

There is no need to become an expert in software engineering to be a savvy buyer of a Legal AI solution, but it helps to learn a bit about Large Language Models (LLMs) — a type of AI trained on vast amounts of data to mimic human intelligence — and how they operate.

The fact is that different LLMs have their own strengths and weaknesses. Here are a few key differences between some of the LLMs used to power various Legal AI solutions you will find on the market:

  • Architecture — Different underlying “neural network architectures” will impact capabilities, which is why some LLMs are better at tasks such as translation or summarization.
  • Size — LLMs can range from millions to trillions of parameters, so larger models are generally more capable while smaller models can be more efficient.
  • Training Data — Models trained on legal data will have different strengths than those trained on general purpose text, an important consideration for law firms and in-house legal teams.
  • Fine-Tuning — LLMs can be further refined on niche datasets to improve their capabilities in specific domains.

  • Public vs. Proprietary — Open source LLMs allow for greater transparency, while proprietary models offer a deeper understanding of the user’s intent and can therefore deliver higher quality responses.

The Multi-Model Approach

There is another strategy available to Gen AI development teams: a multi-model approach that draws on more than one LLM in the creation of a new tool. This lets users benefit from the unique capabilities of each LLM while balancing out each one’s weaknesses, producing results that surpass those of any single model.
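
To make the pattern concrete, here is a minimal sketch, in Python, of how a multi-model router might dispatch tasks. Everything in it is hypothetical: the model names, strength tags and `call` functions are stand-ins for real inference clients, not the API of any actual Legal AI product.

```python
# Illustrative sketch only: hypothetical multi-model routing.
# Model names and call functions are placeholders, not a real vendor API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelSpec:
    name: str
    strengths: set[str]          # e.g., {"summarization", "drafting"}
    call: Callable[[str], str]   # stand-in for a real LLM inference client

def route(prompt: str, task: str, models: list[ModelSpec]) -> str:
    """Send the prompt to the first model whose declared strengths match
    the task, falling back to the first model overall if none match."""
    for model in models:
        if task in model.strengths:
            return model.call(prompt)
    return models[0].call(prompt)

# Hypothetical usage: each lambda stands in for a real API call.
models = [
    ModelSpec("summarizer-llm", {"summarization"}, lambda p: f"[summary of: {p}]"),
    ModelSpec("drafter-llm", {"drafting"}, lambda p: f"[draft for: {p}]"),
]
print(route("Summarize this 40-page deposition.", "summarization", models))
```

Production systems layer on fallbacks, cross-model answer reconciliation and evaluation, but routing tasks to the best-suited model is the core of the approach.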

Questions to Ask About the Gen AI Model

When evaluating a Legal AI solution, here are some questions to pose to the provider about their Gen AI model:

  • Do you use a single model or a multi-model approach for creating your product?
  • What is the average time the AI takes to return an answer?
  • Are there any limitations on the number of prompts you can pose each day?
  • What is the underlying architecture of the AI model and how does that design impact its capabilities to perform legal-specific tasks?
  • Was the Gen AI model trained on legal-specific data or open-source data?
  • Does your AI solution incorporate a retrieval-augmented generation (RAG) framework to find and link relevant source documents? (See the sketch of this pattern below.)
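
On that last question, here is a bare-bones sketch of the retrieval-augmented generation pattern, using a toy word-overlap retriever and a stand-in prompt builder. It is an assumption-laden illustration, not any vendor’s implementation: production systems use embedding-based search and a real LLM call.

```python
# Illustrative sketch only: the basic shape of retrieval-augmented generation.
# The corpus, scoring function and prompt format are toy stand-ins.
def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Attach retrieved sources so the model can cite them instead of guessing."""
    cited = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(retrieve(query, corpus)))
    return ("Answer using only these sources, citing them by number:\n"
            f"{cited}\n\nQuestion: {query}")

corpus = [
    "Smith v. Jones (2019) held that the statute of limitations was tolled.",
    "The 2021 amendment extended the filing deadline to 90 days.",
    "Unrelated: a corporate bylaws template.",
]
print(build_grounded_prompt("What is the filing deadline after the amendment", corpus))
```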

Lexis+ AI is our breakthrough Gen AI platform that we believe will transform legal work by providing a suite of legal research, drafting, and summarization tools that delivers on the potential of Gen AI technology. It pairs our unsurpassed legal content with this technology in a way that could redefine how legal research is conducted and legal work product is created. Its answers are grounded in the world’s largest repository of accurate and exclusive legal content from LexisNexis, with industry-leading data security and attention to privacy. Click here to request a free trial.

To download a copy of The Definitive Guide to Choosing a Gen AI Legal Research Solution, please click here .



The Experiences of U.S. Adults Who Don’t Have Children

57% of adults under 50 who say they’re unlikely to ever have kids say a major reason is they just don’t want to; 31% of those ages 50 and older without kids cite this as a reason they never had them.

Table of Contents

  • Reasons for not having children
  • The impact of not having children
  • How the survey findings do – or don’t – differ by gender
  • Views on wanting children
  • Reasons adults ages 50 and older didn’t have children
  • Reasons adults under 50 are unlikely to have children
  • General impact of not having children
  • Personal impact of not having children
  • Experiences in the workplace
  • Worries about the future
  • Pros and cons of not having children, according to younger adults who say they’re unlikely to have kids
  • The impact of not having children on relationships
  • Pressure to have children
  • Relationships with nieces and nephews
  • Providing care for aging parents
  • How often younger adults talk about having children
  • Friends and children
  • The impact of not having children on dating
  • Educational attainment
  • Marital status and living arrangements
  • Employment, wages and wealth
  • Acknowledgments
  • The American Trends Panel survey methodology
  • Secondary data methodology

Pew Research Center conducted this study to better understand the experiences of two groups of U.S. adults who don’t have children: those ages 50 and older, and those younger than 50 who say they are unlikely to ever have children. It explores their reasons for not having children or being unlikely to do so, the perceived pros and cons of not having children, and the impact of not having children on their relationships.

Most of the analysis in this report is based on a survey of 2,542 adults ages 50 and older who have never had children and 770 adults ages 18 to 49 who don’t have children and say they are not too or not at all likely to have them. The survey was conducted April 29 to May 19, 2024. Most of the respondents who took part are members of the Center’s American Trends Panel (ATP), an online survey panel recruited through national, random sampling of residential addresses. This survey also included an oversample of adults ages 50 and older who have never had children from Ipsos’ KnowledgePanel, another probability-based online panel recruited primarily through national, random sampling of residential addresses.

Address-based sampling ensures that nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.
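
To give a rough sense of what such weighting involves, here is a toy version of raking (iterative proportional fitting) in Python. The respondents and population targets are invented for the example; real weighting, including the ATP’s, uses many more dimensions and specialized software.

```python
# Illustrative sketch only: toy raking with invented respondents and targets.
respondents = [
    {"gender": "woman", "educ": "college"},
    {"gender": "woman", "educ": "no_college"},
    {"gender": "man",   "educ": "college"},
    {"gender": "man",   "educ": "no_college"},
    {"gender": "woman", "educ": "no_college"},
]
# Hypothetical population benchmarks (shares of all adults in each category).
targets = {
    "gender": {"woman": 0.51, "man": 0.49},
    "educ":   {"college": 0.35, "no_college": 0.65},
}
weights = [1.0] * len(respondents)

for _ in range(50):  # iterate until weighted margins match the targets
    for var, shares in targets.items():
        total = sum(weights)
        for category, target_share in shares.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == category]
            current_share = sum(weights[i] for i in idx) / total
            for i in idx:
                weights[i] *= target_share / current_share

# Groups underrepresented in this toy sample (men, respondents without
# degrees) end up with weights above 1; overrepresented groups fall below 1.
print([round(w, 2) for w in weights])
```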

The report also includes an analysis comparing the demographic characteristics and economic outcomes of adults ages 50 and older who do not have children with those of parents in the same age group. The data for this analysis comes from the U.S. Census Bureau’s 2021 and 2022 Surveys of Income and Program Participation (SIPP).

Here are the questions we asked adults ages 50 and older who don’t have children and adults younger than 50 who don’t have children and say they’re unlikely to have them, along with responses, and the survey’s methodology .

In this report, we do not use the terms “childless” or “child-free” to refer to adults who don’t have children. The Associated Press Stylebook, a resource we use often, recommends against using these terms.

In the survey findings featured in Chapters 1-3, references to adults who do not have children include those who indicated they have never been a parent or guardian to any children, living or deceased, including biological or adopted children.

In the analysis of government data in Chapter 4, references to those who do and do not have children include those who have or have not had biological children.

References to college graduates or people with a college degree comprise those with a bachelor’s degree or more education. “Some college” includes those with an associate degree and those who attended college but did not obtain a degree.

Chart: Growing share of adults under 50 say they’re unlikely to ever have kids

The U.S. fertility rate reached a historic low in 2023 , with a growing share of women ages 25 to 44 having never given birth .

And the share of U.S. adults younger than 50 without children who say they are unlikely to ever have kids rose 10 percentage points between 2018 and 2023 (from 37% to 47%), according to a Pew Research Center survey.

In this report, we explore the experiences of two groups of U.S. adults :

  • Those ages 50 and older who don’t have children
  • Those younger than 50 who don’t have children and say they are unlikely to in the future

About four-in-ten of those in the older group (38%) say there was a time when they wanted to have children. A smaller but sizable share (32%) say they never wanted children, and 25% say they weren’t sure one way or the other. Few say they frequently felt pressure to have children from family, friends or society in general.

Reasons for not having children – or being unlikely to ever have them – differ between the older and younger groups. The top response for those ages 50 and older is that it just didn’t happen. Meanwhile, those in the younger group are most likely to say they just don’t want to have kids. Women younger than 50 are especially likely to say they just don’t want to have children (64% vs. 50% of men in this group).

Majorities in both groups say not having kids has made it easier for them to afford the things they want, have time for hobbies and interests, and save for the future. In the younger group, about six-in-ten also say not having kids has made it easier for them to be successful in their job or career and to have an active social life.

Still, majorities in both groups say parents have it easier when it comes to having someone to care for them as they age. Large shares in both groups say having a fulfilling life doesn’t have much to do with whether someone does or doesn’t have children. 

These are among the key findings from a new Pew Research Center survey of 2,542 adults ages 50 and older who don’t have children and 770 adults ages 18 to 49 who don’t have children and say they are not too or not at all likely to have them. The survey was conducted April 29 to May 19, 2024.

Jump to read more about:

  • Reasons adults give for not having children
  • Perceived pros and cons of not having children
  • Relationships and caregiving among adults without children
  • Demographic and economic characteristics of adults 50 and older without children

The study explores reasons U.S. adults give for not having children, among those ages 50 and older who haven’t had kids and those under 50 who say they’re unlikely to ever become parents.

Chart: Younger and older adults’ reasons for not having children differ widely

By margins of at least 10 points, those in the younger group are more likely than those ages 50 and older to say each of the following is a major reason:

  • They just don’t want to have children (57% in the younger group vs. 31% in the older group)
  • They want to focus on other things, such as their career or interests (44% vs. 21%)
  • Concerns about the state of the world, other than the environment (38% vs. 13%)
  • They can’t afford to raise a child (36% vs. 12%)
  • Concerns about the environment, including climate change (26% vs. 6%)
  • They don’t really like children (20% vs. 8%)

In turn, a larger share of those in the older group say a major reason they didn’t have kids is that they didn’t find the right partner (33% vs. 24% of those in the younger group).

There are no significant differences between the two groups in the shares pointing to infertility or other medical reasons (their own or their spouse’s or partner’s) or to a spouse or partner who didn’t want to have children as major reasons.

Among those in their 40s, 22% say infertility or other medical reasons are a major factor in why they’re unlikely to ever have children. About one-in-ten of those ages 18 to 39 (9%) say the same.

Majorities of adults ages 50 and older who don’t have kids and those under 50 who say they’re unlikely to do so see some benefits to not having children.

Chart: Among adults under 50 who say they’re unlikely to have children, large majorities see financial and lifestyle advantages to not being parents

But by margins ranging from 17 to 23 points, those in the younger group are more likely than those ages 50 and older to say each of the following has been easier for them because they don’t have children:

  • Having time for hobbies and interests (80% in the younger group vs. 57% in the older group)
  • Affording the things they want (79% vs. 61%)
  • Saving for the future (75% vs. 57%)
  • Being successful in their job or career (61% vs. 44%, among those who said this applied to them)
  • Having an active social life (58% vs. 36%)

The impact at work

We also asked those who are employed about the impact not having children has had on their work lives.

Experiences are mixed. For example, 45% of those in the younger group and 35% of those in the older group say they’ve had more opportunities to network outside of work hours because they don’t have kids. At the same time, about a third in each group say they’ve been expected to take on extra work or responsibilities, and many also say they’ve been given less flexibility than those who have children.

Chart: About 1 in 4 adults 50 and older without children say they frequently worry about who will care for them as they age

The survey also asked adults ages 50 and older without children about certain concerns they may have as they age.

About one-in-five or more say they worry extremely or very often about:

  • Having enough money (35%)
  • Having someone who will provide care for them (26%)
  • Being lonely (19%)

A smaller share (11%) say they frequently worry about having someone who will carry on their values and traditions when they’re gone.

In a separate survey, 46% of parents ages 50 and older said they frequently worry about having enough money as they age. Smaller shares said the same about having someone who will provide care for them as they age (20%), having someone who will carry on their values and traditions (17%) and being lonely as they age (15%).

For the most part, the experiences of adults without children and the reasons they give for not having them don’t vary much by gender. This is the case across both age groups.

Still, there are some questions on which men and women without kids differ considerably.

Among those ages 50 and older, women are more likely than men to say:

  • Being successful in their job or career has been easier because they don’t have children (50% among women vs. 39% among men).
  • They felt pressure to have children from society in general at least sometimes when they were younger (42% vs. 27%).

Chart: Most women under 50 who don’t have kids say a major reason they’re unlikely to have them is they just don’t want to

Among those ages 18 to 49, women are more likely than men to say each of the following is a major reason they’re unlikely to have children:

  • They just don’t want to (64% vs. 50%)
  • Negative experiences with their own families growing up (22% vs. 13%)

Women in the younger group are also more likely than their male counterparts to say the topic of whether they’ll have children comes up in conversation with their friends at least sometimes (41% vs. 26%).

Demographic and economic differences between adults 50 and older with and without children

In addition to the survey findings, this report includes an analysis of government data to show how the demographic characteristics and economic outcomes of adults ages 50 and older who don’t have children differ from those ages 50 and older who are parents.

Among adults in this age group, those who don’t have children are less likely to have ever been married. They are more likely to have a bachelor’s degree or more education. This difference in educational attainment is especially pronounced among women.

Older women who don’t have children have higher median monthly wages than mothers. The opposite is true among older men; those without children tend to earn less than fathers.


Santander bids to create TikTok vibe with Gen Z

Santander is looking to burnish its credentials with Gen Z, embarking on a search for talent on TikTok to create an online-native international music band.

The bank has launched an international contest, ‘Louder Together’, to find five talented musicians from Argentina, Brazil, Chile, Spain, Mexico, Portugal and Uruguay. The five winners will be awarded a trip to Barcelona to record a song and a video clip with a professional team. The Spanish artist Omar Montes, who has 1.6 million followers on TikTok, will be the global ambassador of the initiative, which will have the support of other well-known artists in each country. To enter, participants simply upload to their personal TikTok profile a video performing their own original song, tagged with the hashtag #loudertogether and, in the case of Spain, mentioning the account @santandersmusic.

A survey of 1,000 18 to 24 year olds for the UK’s Current Account Switch Service last year showed that 58% follow TikTok influencers who talk about budgeting, money, or personal finance. Of these, many trust what they hear: 40% say these influencers give better advice than traditional media, 34% better than their friends, and 26% better than their financial provider. Nearly half of respondents say TikTok influencers have helped them make a financial decision, including investing in stocks and shares, Isas, or helping them choose a mortgage.


Editor’s Note: This story is part of Systems Error, a series by CNN As Equals, investigating how your gender shapes your life online. For information about how CNN As Equals is funded and more, check out our FAQs.

Hundreds of young women and girls around the world have said they want much better support to stay safe online, sharing that they regularly face dangers and many have no one informed or powerful enough to turn to for help.

CNN As Equals and NGO Plan International surveyed more than 600 young women and girls aged 13-24 across nine countries worldwide and found that most (75%) have faced harmful content online at some point, with more than one in 10 experiencing it daily or almost daily.

Asked about the threats they face, almost half, some as young as 13, reported seeing or receiving unwanted sexual images or videos, and a quarter said they had experienced discrimination or hate speech online.

The platforms participants said they experienced these threats on most frequently are Facebook, followed by WhatsApp, then Instagram and TikTok, which are also among the most actively used social platforms worldwide.

The surveys were conducted in Bolivia, Brazil, Burkina Faso, Colombia, Kenya, Malawi, Nepal, Philippines and Timor-Leste. They are not representative of all girls and young women growing up in those countries, but the results highlight the voices and daily experiences of many of these young women.

When sharing the impact of these dangers, over a third who answered said they were left feeling sad, depressed, stressed or anxious, and the majority of the young women and girls felt they themselves were most responsible for their safety online – often going offline and making their accounts private to cope.

The result is a generation of resilient but resentful young people who feel they should not be solely responsible for their safety. They want better support and resources from governments, authorities, tech companies and their families.

Scroll down to explore their answers.

A further 73 young women and girls were interviewed about the online harassment they experience and asked about solutions in focus groups divided by age range and led by Plan’s country teams in the Philippines, Malawi and Brazil.

Here, many explained they feel that parents and schools are too uninformed to help, reports to platforms are sent to bots and go unanswered, and authorities don’t hold perpetrators adequately accountable.

They want things to change. Here’s how.

Among those who answered questions about solutions that could help ensure their safety, about six in 10 (61%) called for education and awareness programs on digital safety, delivered for example through school and university curricula, to build this literacy.

But experts warn the burden should not be entirely on girls to protect themselves.

Placing the responsibility on young women and girls is “inherently unfair,” said Hera Hussain, founder and CEO of Chayn, a UK-based tech NGO addressing gender-based abuse globally. “If you are receiving harassing messages, dick pics, and you have to go on reporting each one of them, and then blocking people, that’s so much administrative burden that you as victim and survivor have to take on.”

What girls want for a safer future online

Strict enforcement by platforms and the need for stronger legal measures were selected by over a third of those surveyed about what’s missing to ensure safety, while around a quarter felt that enhanced privacy settings and safe spaces were needed. One in five said there is a need for more accessible and reliable reporting mechanisms or stricter age verification processes.

In focus groups, some participants said they felt isolated, calling for helplines and local support services, and reiterating a need for digital safe spaces. Systemic changes were also called for, such as greater repercussions for people who abuse others, better moderation, improved identity and age checks on social media platforms and the option to report harassment or other harmful content to trained staff instead of bots.

"We don’t know if these people are really listening,” said Lea, a participant in the 17-20-year-old focus group in the Philippines, speaking of the lack of action or response by the tech sector or authorities. Participants were given pseudonyms for anonymity.

Digital resilience training delivered by tech companies themselves was also suggested.

“Those who provide them [training to stay safe online] should be the companies that make the apps… They are responsible for what can happen to us or what we can encounter,” said Reyna, a participant in the 21-24-year-old group, also in the Philippines.

Of the three regions, Africa stood out, with 40% of girls surveyed there reporting feeling unsafe.

‘I was young and scared’: Why girls feel unsafe online

The push for online safety has become a new frontier in the digital era, with international calls growing from civil society organizations, nonprofits and politicians around the world as more children come online.

European Union and UK legislation offers protections for children, though experts and gender equality campaigners argue these laws fall short in addressing gender-based violence and continue to place the burden of responsibility on users.

In the US, Arkansas and Utah were among the first to sign bills focusing on children’s online safety in 2023 and dozens of states have proposed or enacted legislation to regulate social media platforms in recent years. New York passed a children’s act against addictive social media feeds in June, and a Senate bill for children’s online safety is currently in the works.

Among some of the countries where surveys were conducted, the Philippines has legislation specifically targeting the country’s high levels of online child exploitation, and lawmakers in Nepal and Brazil are working on regulations for young people’s digital protection.

Authorities in Malawi have appointed a child protection ambassador and help provide training for school leaders, children’s NGOs and other civil society stakeholders on digital safety as part of a strategy for protecting children online.

Despite this growing body of legislation and policy, the new findings by CNN and Plan International show continued gendered abuse on a global scale.

“It has not changed, it is even worse,” said Sheila Estabillo, SAFE Online Project Manager for Plan International Philippines, who hosts online safety sessions for girls in the country.

Research shows online danger is now so commonplace it has become normalized for girls, who face unequal – and typically more sexualized – types of threats compared to boys.

Young women and girls surveyed told CNN and Plan International their most common experience of harmful content was the receipt of unwanted sexual imagery (known as cyberflashing), videos, or messaging.

“[Some people] harass people on social media and they think it’s okay to send something like that without the other’s consent,” said Reyna, in her early-20s, from the Philippines.

About half of the young women and girls surveyed in Africa (55%) reported seeing or receiving unwanted sexual images.

“I started chilling with a guy and he sent me a naked picture and asked me to send my picture too,” said Maureen, a 21-24-year-old in Malawi. The boy threatened to share her profile photo, which he edited to be nude, she said. “I was young and scared, so I was afraid to tell anyone.”

Objectification and sexualization are well-worn experiences for women and girls online, and abuse through cyberflashing and the nonconsensual release of photos, forged images and deepfakes, is becoming more common, said Hussain. “[Online abuse is] completely embedded in all aspects of your life.”

The young women and girls surveyed by CNN and Plan International reported braving other digital dangers on a regular basis, including coming across dubious money-making schemes (43%), targeted hate messages (42%), ways to self-harm (29%), and ways to be very thin through eating disorders (28%). In the Philippines, for example, 47% of participants reported seeing discussions of ways to harm yourself and 45% had seen content about ways to take your own life.

Online money scams have proliferated globally, boosted by financial technology and advancements in artificial intelligence, according to the international crime and policing body INTERPOL. Among participants in the surveys, young women and girls in Africa were most affected by money scams – with half being exposed to scams and a quarter having lost money.

“I just blocked the number and then deleted it,” she says. “I think they should make strict rules that when a person sends you something which is not what you want, they should just block that person and he or she should not use the platform again.”

She also wants platforms to do more to protect girls like her.

“They should have a very strict set of rules that you can only see what you want to see,” she said.

Left hurting ‘psychologically and personally’

The impact on young women and girls surveyed by CNN and Plan International was stark: Among survey participants who shared how seeing harmful content affected them, more than one in three reported feeling sad or depressed, stressed, worried or anxious (35%), and many said they were subsequently more careful online (40%).

Consequences also included reduced confidence and sense of self-worth, lost sleep, and strained relationships with loved ones, and around a quarter said they lost trust in online platforms or felt physically unsafe.

Despite sharing content about studying, Daniela said hateful comments about her appearance littered her posts. So, around four years ago, in the throes of the Covid-19 pandemic, she shut her blog down.

“I was suffering with anxiety,” the 24-year-old said. “I felt like I was looking at others and what they are doing and not really living my life. I wanted to stop and start living my life.”

She now controls her online world by keeping her accounts private and not sharing much. “I’ve become ‘low profile,’” she said.

In the focus groups, many related to Daniela’s experience, and shared their frustration that coming offline or turning accounts private to improve their mental health also comes at a cost to them.

Silencing young ‘overwhelmed’ girls

Nearly one in five young women and girls surveyed reported taking a break from the internet entirely to cope with the dangers they face online, and studies show online abuse has a silencing effect on women and girls.

One study by the NGO Girl Effect found girls in five African countries, Jordan, the UK and the US are more likely to block or privatize their accounts and report behavior than boys.

A 2021 study drawing data from two independent large-scale surveys in Norway also found that “targeted women are more likely than targeted men to become more cautious in expressing their opinions publicly.”


CNN and Plan International’s research shows that the “chilling effect” on women in public spaces, such as politics and journalism, in response to online abuse starts with teen girls, said Professor Gina Neff, executive director of the Minderoo Centre for Technology & Democracy at the University of Cambridge.

“If we’ve got 75% of teen girls saying they have gotten online harassment,” she said, “what happens as they start to develop their careers … professional, outward-facing social media accounts?”

“Blocking and locking” accounts may address acute danger, but they are not effective for anyone who wants to have an online persona, said Neff. “We are sending a message that their voices don’t matter and their expectation of being able to be online comes with more risks sometimes than the benefits that they get,” she said.

Fernanda, a 21-year-old participant also from Brazil, loves women’s football, but said she is reluctant to engage with online discussions about the sport. “I’m very afraid of commenting on my team’s posts, because I know how toxic the comments are (against girls),” she said, adding that taking time offline has helped her deal with the stress.

“When we stay connected for a long time, we feel like, ‘Guys, help, I’m overwhelmed.’”

Where responsibility should lie

A generational gap in understanding online platforms and digital literacy is one reason the young women and girls surveyed and interviewed showed little trust in adults and existing mechanisms to root out abusers and perpetrators.

Mary said one of her classmates, who is transgender, had no one else to turn to after being groomed online. “He mentioned meeting someone last night and earning money from him. I am the only one he told about this because we’re close.”

Estabillo at Plan International Philippines said girls talk to their peers instead because speaking up about child sexual abuse remains a cultural taboo in the Philippines.

“Instead of being helped, they fear being blamed,” Estabillo added.

Experts CNN spoke to stressed the need for tech platforms to take more action.

Current rules and online tools for dealing with ongoing attacks are insufficient, said Neff, as they do not deal with chronic abuse affecting women. “The platform companies have to be held accountable by legislation,” she said.

Proposed laws such as the U.S. Platform Accountability and Transparency Act would mandate researcher access to large platforms’ data – gated by X, for example – which is vital for understanding and alleviating misuse and abuse, and they would also hold platforms to account, said Neff.

A coalition of groups fighting gendered digital abuse in the UK, including Chayn, has also campaigned for platforms to proactively build guards against abuse. Hussain wants to see more platforms prioritize “safety-by-design” and said tech companies are now investing more in controls.

In January 2024, Meta announced new “age-appropriate” restrictions for teen users, automatically limiting “potentially sensitive content” and accounts from their feeds. Both Meta and TikTok also prohibit child sexual exploitation and abuse.

CNN contacted Meta (which owns Facebook, Instagram and WhatsApp) and TikTok for comment about the findings of this research.

Cindy Southworth, Head of Women’s Safety at Meta, said: “We’re continuing to work closely with experts – including Plan International – to better understand the online experience of women and girls and to help make sure they feel good about the time they spend on our apps.”

“This builds on years developing tools, features and policies to help keep them safe, including blocking people from sending images or videos to anyone who doesn’t follow them, testing a new nudity protection feature that will blur potential nudity in DMs, and applying strict rules against bullying, hate speech, and content that encourages suicide, self-harm or eating disorders.”

A TikTok spokesperson shared that the platform prevents under-18s seeing sexually suggestive content and prohibits all nudity, pornography and sexually explicit content. It also makes under-16s’ accounts private and unavailable for direct messaging by default, while a pairing tool allows parents to adjust teens’ privacy and content settings. The platform also launched a council for teens to share views on building a safe platform, according to the spokesperson.

But experts warned that platforms have so far failed to outpace spiraling online abuse and harmful content, often implementing safeguards after problems are raised.

Hussain believes a cultural shift is also needed to curb abuse, concluding: “It’s very easy to think of harm as inevitable and unending but it doesn’t have to be.”

How CNN reported this story

CNN As Equals and Plan International collaborated to survey 619 girls and young women and five non-binary people aged 13-24 online through Plan’s country offices in Bolivia, Brazil, Burkina Faso, Colombia, Kenya, Malawi, Nepal, Philippines and Timor-Leste in February 2024. Surveys were shared online across Plan’s networks in these countries for up to one month, inviting girls aged 13-24 to complete them until an adequate sample size was reached.

In the surveys, participants were asked why they go online, what they like to do online, whether they feel safe, what they do if they don’t feel safe, whose responsibility it is to keep them safe, on what platforms they feel least safe, how often they are bothered online, the impact of harms they face and what is lacking or needed to keep them safe online.

The survey is not a scientific poll, nor representative of all girls and young women growing up in those countries. But the results highlight the voices and daily experiences of many of these young women and girls.

These questions were also discussed in a series of focus groups in Brazil, Malawi and the Philippines, in which scenarios of the most common harms reported in the surveys were presented and discussed. Participants were also asked for their ideas for solutions to better address and prevent the harms girls face online. The focus groups took place in March 2024, and participants’ names were anonymized through pseudonyms chosen by Plan International.

Some figures from the surveys show girls as a percentage of a total who chose one answer from a select range, while others show percentages of girls who chose answers in multiple-choice questions. Regional results include comparisons of differing sample sizes between South America (203 girls and young women), Africa (240) and Asia (181).
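
Because the regional samples differ in size, the same percentage carries different uncertainty in each region. The short Python sketch below makes the point with the standard normal-approximation interval, reusing the sample sizes above; since these surveys were not probability samples, the intervals are indicative only and are not part of CNN or Plan International’s methodology.

```python
# Illustrative sketch only: precision of a survey share at different sample sizes.
import math

def ci_95(share: float, n: int) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion."""
    moe = 1.96 * math.sqrt(share * (1 - share) / n)
    return (round(share - moe, 3), round(share + moe, 3))

# 40% of the 240 girls surveyed in Africa reported feeling unsafe (figure above).
print(ci_95(0.40, 240))  # about (0.338, 0.462): roughly +/- 6 points
# The same share in the smaller Asia sample (181) would be less precise.
print(ci_95(0.40, 181))  # about (0.329, 0.471): roughly +/- 7 points
```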


Majority of Democrats think Kamala Harris would make a good president, AP-NORC poll shows


WASHINGTON (AP) — As President Joe Biden faces a growing drumbeat of pressure to drop his reelection bid, a majority of Democrats think his vice president would make a good president herself.

A new poll from the AP-NORC Center for Public Affairs Research found that about 6 in 10 Democrats believe Kamala Harris would do a good job in the top slot. About 2 in 10 Democrats don’t believe she would, and another 2 in 10 say they don’t know enough to say.

Since Biden’s debate debacle on June 27, many Democrats have privately and even openly looked to Harris to step in and succeed Biden as the party’s presidential nominee, believing she has a better chance against GOP nominee Donald Trump. For her part, Harris has remained completely loyal to Biden, being one of his toughest defenders in the aftermath of the disastrous debate performance.

Oakley Graham, a Democrat in Greenwood, Missouri, said while he is “pretty happy” with Biden’s accomplishments in office, he felt that he would be more excited to support Harris at the top of the ticket and that it was “about time” a woman becomes president.

“I know he’s got unfinished business,” Graham, 30, said of Biden. “But it would be nice to see a person of color, a woman, somebody younger to step up and to lead that charge. I would hope that that would inspire a younger generation to be more engaged.”


Black adults – a key contingent of the Democrats’ coalition and a group that remains relatively more favorable to Biden than others – are more likely than Americans overall to say that Harris would do well.

As for Americans more broadly, they are more skeptical of how Harris would perform in the Oval Office. Only about 3 in 10 U.S. adults overall say Harris would do well as president. About half say Harris would not do a good job in the role, and 2 in 10 say they don’t know enough to say.

Harris’ favorability rating is similar to Biden’s, but the share of Americans who have an unfavorable opinion of her is somewhat lower. The poll showed that about 4 in 10 U.S. adults have a favorable opinion of Harris, while about half have an unfavorable opinion. There are more Americans with a negative view of Biden: approximately 6 in 10. About 1 in 10 Americans say they don’t know enough to have an opinion of Harris, whereas nearly everyone has an opinion on Biden.

About three-quarters of Democrats have a positive view of Harris, which is in line with how Democrats view Biden. Seven in 10 have a favorable view of him.


Shannon Bailey, a Democrat who lives in Tampa, praised Biden’s accomplishments as president – particularly his infrastructure law and efforts to tame inflation – and said he’ll be “remembered fondly.” But she said she has a more favorable view of Harris than of the incumbent president because, in Bailey’s view, the vice president appears more “capable of handling the taxing nature of the job.”

“It’s not just the physical stamina part, but also the cognitive reasoning part right now,” said Bailey, 34. “It’s important to be able to concisely and persuasively get the message across that is the Democratic platform right now.”

Bailey said the Democratic Party needs Harris and a running mate “who can really motivate people to go out to the polls” — a task that she’s skeptical Biden can do as effectively.

Harris’ position as the administration’s lead messenger on abortion has also endeared her to many Democrats.

“I think she would be a very strong advocate for abortion, has been and would continue to be,” said Thomas Mattman, a Democrat from Chico, California. “The Republicans have gone with white men as their ticket, and both of them have said some pretty specific things about being opposed to abortion so I think that would be a very strong argument.”

Mattman, 59, said he believes Biden will not be able to defeat Republican nominee Donald Trump — a prospect that leaves Mattman “very distraught.” Harris would be a much more effective candidate because Biden is unable to “put pressure” on his opponent and exploit his weaknesses, Mattman said.

Harris is more popular among Black Americans than she is among white or Hispanic adults. She is more disliked by men than she is by women.

Other prominent Democrats who have been floated as potential replacements are less known than Harris is. About 4 in 10 U.S. adults don’t have an opinion of California Gov. Gavin Newsom , and half are unfamiliar with Michigan Gov. Gretchen Whitmer . Newsom is seen, overall, slightly more negatively than positively. Americans are divided about evenly on Whitmer: 24% have a favorable view and 22% have an unfavorable view.

More Democrats see Harris rather than Newsom or Whitmer as someone who would make a good president, though that’s partly because they’re relative unknowns. About one-third of Democrats say Newsom would make a good president, and half don’t know enough to say. About one-quarter of Democrats say Whitmer would do well, and about two-thirds don’t know enough to say.

Trump’s running mate, Senator JD Vance of Ohio, is unknown to most Americans. In the AP-NORC poll, which was conducted before Trump made Vance his vice presidential choice, 6 in 10 Americans don’t know enough about him to form an opinion. About 2 in 10 U.S. adults have a favorable view of Vance, and about 2 in 10 view him negatively. Among Republicans, 61% don’t know enough to have an opinion of Vance. About one-quarter have a positive view of him, and roughly 1 in 10 have a negative view.

The poll of 1,253 adults was conducted July 11-15, 2024, using a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 3.8 percentage points.
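
For readers curious where such a figure comes from: the worst-case margin of error for a simple random sample of this size is straightforward to compute, and the gap between that number and the reported plus or minus 3.8 points reflects the design effect of weighting and panel recruitment. A sketch follows; the design-effect value is inferred from the two numbers, not something AP-NORC states.

```python
# Illustrative sketch only: margin-of-error arithmetic for the poll above.
import math

n = 1253
# Worst-case (p = 0.5) margin of error at 95% confidence, assuming
# simple random sampling:
moe_srs = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"{moe_srs:.1%}")  # about 2.8 points

# The reported +/- 3.8 points implies a variance-inflating design effect
# from weighting and the panel design (inferred, not stated by AP-NORC):
design_effect = (0.038 / moe_srs) ** 2
print(round(design_effect, 2))  # roughly 1.88
```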

