
All You Need to Know about the Respondents of the Study

By: Angelina Grin


In research, sampling is a crucial part of the methodology: it is the selection of a subset of a larger population that shares certain characteristics. At some point in your academic career, you may have to conduct primary research, gathering the opinions of a target population while investigating a research question. If you haven't yet cut your teeth on research, your head may be swimming with "hows", "whens", and "wheres". But don't worry: today you're in the right place.

  • Purposive Sampling
  • Convenience Sampling
  • Sample Size for Qualitative Research
  • Random Sampling
  • How to Calculate the Margin of Error
  • How to Reduce the Chance of Errors
  • Criteria for Choosing Respondents
  • Where to Find Respondents? (purchasing panel respondents; social media; in-person contact)
  • How to Convince Respondents to Participate?

Sampling Methods And Size Suitable For Qualitative Research

Due to its open-ended nature, a qualitative study is always more time-consuming than its quantitative counterpart. With this in view, two sampling methods are generally preferred for qualitative research:

Purposive Sampling

Purposive sampling involves choosing participants based on demographic and other characteristics (preferences, tastes, income level, etc.) in line with the nature and objectives of the study. Unlike random sampling, in purposive sampling the researcher actively lays out the criteria for informants; only respondents who reasonably satisfy these criteria are invited to take part.

Now, why purposive sampling for qualitative research? As mentioned, such studies place no restriction of time or space on the respondents' input: participants may take the discussion in any direction and to any depth. Given this, it is essential that respondents have the desired knowledge or the relevant skill set. Random selection can rarely guarantee this, which makes purposeful sampling a perfect fit for this endeavor.

Convenience Sampling

A researcher applies convenience sampling when targeting respondents who are easily available. The logic behind using this type of sampling for qualitative research is simple. As noted, qualitative research takes a comparatively long time, so researchers facing time constraints have to make smart choices. It makes sense to rely on people who are on hand rather than courting respondents who offer little assurance of their interest.

Sample Size for Qualitative Research

Since you are consciously selecting participants using non-probability methods, there is no need to chase a sizable number. What makes a small sample more suitable is the amount of time the researcher must spend with each participant. A few respondents, handpicked for the study, can serve the purpose because they are sure to have adequate knowledge of the area of research. In most cases, 10-12 participants suffice for a panel discussion; for interviews, researchers may rely on 5-10 experts. Within these limits, though, the rule of thumb is: the more, the better.

Sampling Methods And Size Suitable For Quantitative Research

Quantitative research design allows for a quicker process of data collection. Mostly, survey or questionnaire participants respond to closed-ended questions: all they have to do is mark their choice, on a Likert or other scale or order of questions, and submit. In such studies, independent and dependent variables are well defined and the researcher seeks exact measures. This understanding leads to the following choices:

Random Sampling

Generalization of data is possible only if the results represent the entire target population. In qualitative research, this is attainable through purposive sampling, since respondents can elaborate at any length. In quantitative research, however, responses are measured in numbers, and the greater the number of responses, the more representative those numbers will be.

This can be understood with a simple example. Say a town houses 1,000 residents and a researcher has to conduct a poll assessing residents' support for the Democratic vs. the Liberal party. Suppose he selects 50 individuals based on shared characteristics (high educational level and high income level). Would their responses be generalizable? A big "no", because they represent only the town's higher-income group, not the entire population. A better way is to randomly pick a decent number of participants: this raises the probability that the targeted subset is diverse enough to represent the town's population as a whole.

With this being clear that random sampling is a better choice when it comes to quantitative research, let's have a quick look at its key types:

  • Simple random sampling: choosing participants from a large population without regard to any characteristics.
  • Cluster sampling: dividing the entire sampling frame into small clusters and then randomly picking clusters. You can pick as many clusters as needed, depending on your requirements and budget.
  • Stratified sampling: one of the most widely used methods; it involves dividing a population into subsets with shared characteristics (e.g. high-income group, low-income group) and then randomly picking participants from those subsets. It suits projects that assess the interaction between participants' responses to a certain phenomenon and given characteristics.
  • Systematic sampling: if your research calls for insights on what every second, third, or Kth person says about something, systematic sampling is the top choice. You obtain the "Kth" number by dividing the total population (N) by the number of participants in the subset of the population (n).

For example, suppose you have to conduct a survey on customers' opinions about the quality of food at a fast-food restaurant. Say the restaurant serves 100 customers a day on average, and you think responses from 20 customers would be highly representative. Here, your population size is 100 and your sample size is 20. Applying the formula:

K = N/n = 100/20 = 5

Hence, you can approach every 5th customer leaving the restaurant.
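To see the arithmetic in code, here is a minimal Python sketch of the same systematic-sampling procedure; the customer IDs and the random starting point are illustrative assumptions, not details from the example above.

```python
import random

population = list(range(1, 101))  # stand-in IDs for the 100 daily customers
n = 20                            # desired sample size

k = len(population) // n          # sampling interval: K = N/n = 100/20 = 5
start = random.randint(0, k - 1)  # random start within the first interval
sample = population[start::k]     # then every 5th customer

print(k, len(sample))  # 5 20
```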

Acceptable Margin of Error

During the statistical analysis of information gathered from a sample, what you have to worry about is 'error'. However careful you are and whatever techniques you apply, a margin of error is always there. All you can do is keep that margin within an acceptable range.

The acceptable margin varies with the type and objectives of the research. That said, there is considerable consensus that 4%-8% is a permissible range at a 95% confidence level (roughly speaking, if the poll or survey were repeated many times, 95% of the results would fall within the stated margin of the reported figures).

If the error exceeds that margin, it will call the reliability and transferability of your research into question.

In general, you can apply the following formula to assess the margin of error in research:

Margin of Error = z × (σ / √n), where z is the z-score derived from the chosen confidence level, σ is the standard deviation, and n is the sample size.

For example, suppose your research is based on a sample of 100 participants with a standard deviation of 0.5 at a 95% confidence level, and take the z-score to be 1.9. The margin of error is:

Margin of Error = 1.9 × 0.5/√100 = 1.9 × 0.5/10 = 0.095
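The same calculation can be checked in a couple of lines of Python (a sketch using the article's illustrative numbers; note that 1.96, rather than 1.9, is the conventional z-score for a 95% confidence level):

```python
import math

z = 1.9    # z-score used in the example above
sd = 0.5   # standard deviation
n = 100    # sample size

margin_of_error = z * sd / math.sqrt(n)
print(margin_of_error)  # 0.095
```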

How to Reduce the Chance of Errors

To keep the chance of errors within an acceptable range, make sure that:

  • The sample size is large enough to be representative of the target population for a particular research question;
  • You keep your personal bias suspended while interacting with the population;
  • The results are accurately calculated and presented;
  • You are well aware of your population's characteristics.

Criteria for Choosing Respondents

Here's what to check while looking for respondents:

  • Participants are knowledgeable: Make sure the participants have a sufficient grasp of the subject or case under study;
  • Participants are available in the given time: Your respondents must be able to respond within the given time frame. Schedule interviews and questionnaires with budget constraints in view. If a prospective respondent cannot show up within the given time, you may drop them in favour of a more suitable one.

Where To Find Respondents?


Your panel will consist of people from a relevant background who agree to participate in your survey research. Multiple services sell bespoke and ready-made participant lists, such as SurveyMonkey, SurveySavvy, MyPoints, InboxDollars, and Branded Surveys, to name only a few. Through them, you can easily access a wide pool of prospective respondents.

Depending on your research objectives, online surveys can be a great way to reach more potential participants, by sharing your survey widely across several social media platforms. The beauty of this option is that you can also ask relatives and friends to share it through their own social media accounts.

With a clear target audience in mind, build your survey design around meeting the audience where they are. This helps when you are seeking participation from demographics that rarely respond online, for example, elderly people who are not tech-savvy. For research focusing on the elderly population, you may visit senior citizen community halls and other places where this age group congregates. You may also find respondents at shopping malls, outside large outlets, or passing by on the roadside. Since COVID has limited in-person interaction, you may ask participants found at any of these spots to voluntarily share their phone numbers (solely for research purposes and on condition of confidentiality), and later conduct telephone surveys according to the agreed schedules.

How to Convince Respondents to Participate?

The response rate hinges largely on your success in convincing respondents to take part. You can use any of these methods to maximize participation:

  • Showing the respondents the importance of partaking for them, as well as for the intended audience (yes, the ethical appeal may come in handy);
  • Keeping the process of participation smooth; participants may feel reluctant if it involves irksome hassle;
  • Incentivizing the participation (e.g. monetary benefit or meal) if the budget allows.

With these tips in hand, you are on your way to acing your upcoming research assignment. Trial and error is part of the game, but what you have learned in this post can pave the way to the best possible outcomes, be it your first project or your hundredth.


How to create a successful qualitative research study: Building a better respondent guide

Online qualitative research is a powerful tool. Used properly for creating a marketing research study, it can generate rich consumer insights from engaged and willing respondents. Used improperly, it can create a mountain of useless data that is painful for respondents to generate and exponentially more tedious and time-consuming for consultants to analyze.

The key to architecting a great online research study is understanding the objectives and then creating a series of activities that humanize your respondents and encourage them to openly share their thoughts and feelings in a way that does not feel like work but is actually something respondents look forward to doing.

You know you have achieved this research nirvana when the online qual study is over and you get unsolicited notes from respondents saying how much they enjoyed the study and that it made them think about themselves and the subject at hand in ways they had never anticipated. A more pragmatic and bankable indicator of a successful guide is a 95%+ successful completion/compliance rate; nearly everyone completed all of the tasks on time with quality responses.

What makes an online respondent guide great?

Follow these simple guidelines and you are on your way to creating a better qualitative study and higher respondent completion rates:

Full Disclosure: Make sure respondents know what they are doing and why they are in the study. Tell them what is expected: what they will be doing, the schedule of events, and when things are due. This takes the mystery out of the process and gives them a sense of responsibility to provide you with focused, connected responses.

Bend 'em But Don't Break 'em. Challenge their creativity and push their introspection, but don't overwhelm them with so many questions and tasks that you dull their senses and make them feel like they are working on a term paper. Our prevailing thought is that 20-30 minutes is the optimal amount of time for a respondent to engage in an activity at one sitting. Beyond that, the law of diminishing returns sets in for the respondent and the researcher, and tired respondents drop out.

Design With Analysis In Mind. The data from online studies can be overwhelming if the study has too many activities and too many questions. For example, think about 50 respondents answering each question in a study. Ask the right questions, encourage them to elaborate on the important stuff and don’t use the method to ask every question a client may have. It can seem like an unlimited opportunity to ask as many questions as you want…resist the temptation.

Be Creative… Straight Q&A Can Be Boring. Using methods such as "storytelling", "letters to friends", and "collage" can make things interesting and engaging for respondents and, more importantly, provide rich symbolic insights for researchers. Well-planned open-ended activities will generate more learning for researchers; respondents love to talk about themselves, and creative activities allow them to emote more freely than straight Q&A approaches.

Successful online research studies take some planning. Getting off to a great start with your respondents is one of the keys to finding deep and meaningful insights.

Ray Fischer, CEO of Aha!


Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.
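The article leaves the formula to online calculators, but for readers who want to see how such calculators typically work, here is a hedged Python sketch based on Cochran's formula with a finite-population correction. This is one common approach, not the method of any specific calculator, and the 95% z-score, 5% margin of error, and p = 0.5 defaults are illustrative assumptions.

```python
import math

def sample_size(population: int, z: float = 1.96, margin: float = 0.05, p: float = 0.5) -> int:
    """Required sample size via Cochran's formula with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2          # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # correct for finite population

print(sample_size(1_000))    # 278
print(sample_size(100_000))  # 383
```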

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview, where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees' full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It's important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First, you have to process the data, usually with the help of a computer program to sort all the responses. You should also clean the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
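SPSS and Stata are the packages named here; as a rough illustration of the same first-pass tabulations, the sketch below uses Python's pandas library instead. The column names and responses are hypothetical stand-ins for a real survey export.

```python
import pandas as pd

# Hypothetical responses; in practice you would load your survey export,
# e.g. df = pd.read_csv("responses.csv")
df = pd.DataFrame({
    "age_group": ["18-24", "25-34", "18-24", "35-44", "25-34", "18-24"],
    "satisfaction": [4, 5, 3, 4, 2, 5],  # a 1-5 closed-ended item
})

print(df["satisfaction"].describe())                     # mean, SD, quartiles
print(df["age_group"].value_counts(normalize=True))      # response shares
print(pd.crosstab(df["age_group"], df["satisfaction"]))  # cross-tabulation
```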

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don't have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
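The distinction between ordinal items and interval-like scale scores can be made concrete in code. Below is a hedged Python sketch with hypothetical responses: medians summarise the individual items, while the summed scale score is treated as interval data and summarised with a mean.

```python
import pandas as pd

# Hypothetical responses: five Likert-type items (1-5) from four participants
items = pd.DataFrame({
    "q1": [4, 5, 2, 4],
    "q2": [3, 5, 1, 4],
    "q3": [4, 4, 2, 5],
    "q4": [5, 5, 3, 4],
    "q5": [4, 4, 2, 4],
})

print(items.median())             # item-level summaries: ordinal, so use medians

scale = items.sum(axis=1)         # overall Likert scale score per participant
print(scale.mean(), scale.std())  # scale scores are often treated as interval data
```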

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


Describing the participants in a study

R. M. Pickering, Describing the participants in a study, Age and Ageing, Volume 46, Issue 4, July 2017, Pages 576–581, https://doi.org/10.1093/ageing/afx054


This paper reviews the use of descriptive statistics to describe the participants included in a study. It discusses the practicalities of incorporating statistics in papers for publication in Age and Ageing, concisely and in ways that are easy for readers to understand and interpret.

Introduction

Most papers reporting analysis of clinical data will at some point use statistics to describe the socio-demographic characteristics and medical history of the study participants. An important reason for doing this is to give the reader some idea of the extent to which study findings can be generalised to their own local situation. The production of descriptive statistics is a straightforward matter, most statistical packages producing all the statistics one could possibly desire, and a choice has to be made over which ones to present. These then have to be included in a paper in a manner that is easy for readers to assimilate. There may be constraints on the amount of space available, and it is in any case a good idea to make statistical display as concise as possible. This article reviews the statistics that might be used to describe a sample of older people, and gives tips on how best to do this in a paper for publication in Age and Ageing. It builds on a previously published paper [1].

Describing the distribution of values

The values observed in a group of subjects, when measurements of a quantitative characteristic are made, are called the distribution of values. Graphical displays can be used to show the detail of the distribution in a variety of ways, but they take up a considerable amount of space. A precis of two key features of the distribution, its centre and its spread, is usually presented using descriptive statistics. The centre of a distribution can be described by its mean or median, and the spread by its standard deviation (SD), range, or inter-quartile range (IQR). Definitions and properties of these statistics are given in statistical textbooks [2].

Figure 1a shows an idealised symmetric distribution for a quantitative variable. The mean might be used here to describe where the centre of the distribution lies and the SD to give an idea of how spread out values are around the centre. SDs are particularly appropriate where a symmetric distribution approximately follows the bell-shaped pattern shown in Figure 1a, which is called the normal distribution. For such a distribution the large majority, 95%, of values observed in a sample will fall between the values two SDs above and below the mean, called the normal range. Presentation of the mean and SD invites the reader to calculate the normal range and think of it as covering most of the distribution of values. Another reason for presenting the SD is that it is required in calculations of sample size for approximately normally distributed outcomes, and can be used by readers in planning future studies. A graphical display of approximately normally distributed real data (age at admission amongst 373 study participants) is shown in Figure 1c: with a relatively small sample size, a smooth distribution such as that shown in Figure 1a cannot be achieved. The mean (82.9) and SD (6.8) of the age distribution lead to the normal range 69.3–96.5 years, which can be seen in Figure 1c to cover most of the ages in the sample: 14 subjects fall below 69.3 and 7 fall above 96.5, so that the range actually covers 352 (94.4%) of the 373 participants, close to the anticipated 95%. For familiar measurements, such as age, there is additional value in presenting the range, the minimum and maximum values attained. Knowing that the study included people aged between 65 and 101 years is immediately meaningful, whereas the value of the SD is more difficult to interpret.

Figure 1. Idealised and real data distributions. (a) Symmetrical distribution. (b) Skewed distribution. (c) Dotplot (each dot representing one value) of an approximately symmetrical distribution indicating the normal range: age in years at admission (n = 373). (d) Dotplot (each dot representing one value) of a skewed distribution with outliers emphasised and indicating mean and median: hours in A&E (n = 348).
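The normal-range arithmetic described above can be reproduced in a short Python sketch. The ages below are simulated from a normal distribution with the paper's mean and SD (82.9 and 6.8); the real admission data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
ages = rng.normal(loc=82.9, scale=6.8, size=373)  # simulated ages at admission

mean, sd = ages.mean(), ages.std(ddof=1)
lower, upper = mean - 2 * sd, mean + 2 * sd       # the "normal range"
coverage = ((ages >= lower) & (ages <= upper)).mean()

print(f"normal range: {lower:.1f} to {upper:.1f}")
print(f"coverage: {coverage:.1%}")                # close to the anticipated 95%
```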

When a distribution is skewed (Figure 1b), just one or two extreme values, 'outliers', in one of the tails of the distribution (to the right in Figure 1b) pull the mean away from the obvious central value. An alternative statistic describing central location is the median, defined as the point with 50% of the sample falling above it and 50% below. Figure 1d shows the distribution of real data (hours in A&E amongst 348 study participants) following a skewed distribution. A few excessively long A&E stays pull the mean to the higher value of 4.9 h compared to the median of 4.4 h: the effect would be greater with a higher proportion of subjects having long stays. The median is often recommended as the preferred statistic to describe the centre of a skewed distribution, but the mean can be helpful. If the attribute being described takes only a limited number of values, the medians of two groups can take the same value in spite of substantial differences in the tails. In these circumstances, the mean can be sensitive to an overall shift in distribution while the median is not. When a comparison of cost based on length of stay is to be made, presenting means of the skewed distributions facilitates calculation of cost savings per subject by applying unit cost to the difference in means. Figure 1b suggests that the value with highest frequency might be a useful descriptor of the centre of a distribution. In practice, this can prove awkward: depending on the precision of measurement there may be no value occurring more than once.

It is clear from Figure 1b that no single number can adequately describe the spread of a skewed distribution because spread is greater in one direction than the other. The range (from 1.7 to 40.3 h in A&E in our skewed example) could be used. Another possibility is the IQR (from 3.5 to 5.4 h in A&E) covering the central 50% of the distribution. The SD may be presented even though a distribution is skewed, and could be useful to readers for approximate power calculations, but the normal range derived from the mean and SD will be misleading. With mean (SD) = 4.9 (3.2), the lower limit of the normal range of hours in A&E is the impossible negative value of –1.5 h, while the upper limit of 11.3 h lies well below the extreme values exhibited in Figure 1d.
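As a companion sketch, simulated right-skewed data (here lognormal, standing in for hours in A&E) show how the tail pulls the mean above the median, and how the range and IQR describe spread. These numbers are illustrative assumptions, not the study data.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = rng.lognormal(mean=1.5, sigma=0.4, size=348)  # simulated skewed stays

print(np.mean(hours), np.median(hours))  # mean exceeds median under right skew
q1, q3 = np.percentile(hours, [25, 75])
print(f"IQR: {q1:.1f} to {q3:.1f}")      # central 50% of the distribution
print(np.min(hours), np.max(hours))      # the full range
```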

Descriptive statistics in text

Descriptive statistics may be presented in text, for example [3]:

Participants’ ages ranged from 50 to 87 years ( M  = 66.1, SD = 7.8) with 56% identified as female, 64% married or partnered, 23% reported being retired or not working, 55% had post-secondary and higher education, and <20% reported living alone. Over 60% of the participants identified as NZ European. The mean of net personal annual income was $34,615. The participants reported the diagnosis of an average of 2.63 (±2.07) chronic health conditions, with 50% reported having three or more chronic health conditions.

There are perhaps too many attributes (age, gender, marital status, employment status, educational level, living arrangements, nationality, personal income and number of chronic conditions) being described in the excerpt above: it would be easier to assimilate this information from a table.

Descriptive statistics in tables

Table 1. Characteristics of subjects at admission and their operations before (1998/99) and after (2000/01) implementation of a care pathway [4]. Figures are number (% of non-missing values) unless otherwise stated.

                                             1998/99 (n = 395)   2000/01 (n = 373)
Age on admission (years)
  Mean (SD)                                  83 (7)              83 (7)
  Minimum–maximum                            65–101              65–101
Gender
  Male                                       90 (23%)            90 (24%)
  Female                                     305 (77%)           283 (76%)
Admission domicile
  Own home                                   219 (55%)           202 (54%)
  Sheltered accommodation                    47 (12%)            58 (16%)
  Residential care                           90 (23%)            83 (22%)
  Nursing home                               18 (5%)             15 (4%)
  Other ward SUHT                            7 (2%)              2 (1%)
  Other trust                                14 (4%)             13 (4%)
Ambulation score
  Bed/chair bound                            8 (2%)              5 (1%)
  Presence 1+                                12 (3%)             7 (2%)
  1 person                                   25 (6%)             20 (5%)
  Unable 50 m                                145 (37%)           138 (38%)
  Able 50 m                                  200 (51%)           197 (54%)
                                             (n = 390)           (n = 367)
Time in A&E (h)
  Mean (SD)                                  4.9 (3.2)           5.6 (2.4)
  Minimum–maximum                            1.7–40.3            0–21.4
                                             (n = 348)           (n = 328)
History of dementia                          79 (20%)            85 (23%)
                                             (n = 395)           (n = 371)
Confused on admission                        124 (32%)           125 (34%)
                                             (n = 394)           (n = 371)
Type of fracture
  Intra-capsular                             192 (54%)           173 (52%)
  Extra-capsular                             165 (46%)           161 (48%)
                                             (n = 357)           (n = 334)
Operation more than 48 h after ward admission   183 (52%)        205 (64%)
                                             (n = 354)           (n = 323)
Reason for delayed operation
  Medical                                    61 (35%)            74 (43%)
  Organisational                             66 (38%)            72 (42%)
  Both                                       45 (26%)            27 (16%)
                                             (n = 172)           (n = 173)
Type of operation
  Thompson's hemiarthroplasty                101 (27%)           87 (24%)
  Austin-Moore hemiarthroplasty              69 (19%)            18 (5%)
  Dynamic screw                              162 (43%)           165 (46%)
  Asnis screws                               38 (11%)            38 (11%)
  Bipolar hemiarthroplasty                   3 (1%)              48 (14%)
                                             (n = 373)           (n = 356)
Grade of surgeon
  Consultant                                 46 (12%)            110 (32%)
  SPR                                        318 (86%)           220 (63%)
  SHO                                        6 (2%)              18 (5%)
                                             (n = 355)           (n = 348)
Grade of anaesthetist
  Consultant                                 120 (34%)           175 (55%)
  SPR                                        99 (28%)            52 (16%)
  SHO                                        133 (38%)           81 (29%)
                                             (n = 352)           (n = 318)

The distributions of the two quantitative variables in Table 1 are described by mean (SD) and range. The statistics being presented should be stated in the context of the table, here in the left-hand column, and could differ across variables. If the same statistics are presented for all the variables in a table, they can be indicated in the column headings or title. From the mean (SD) and range in each phase, we can see that the age distribution is reasonably symmetrical because the mean falls close to the centre of the range, and the mean ± 2 SD approach the limits of the range. The distribution of hours in A&E is skewed to the right but has been summarised with the same statistics. We can see that the distribution is skewed because the mean is much closer to the minimum than the maximum, and, if the normal range is calculated, the upper limit does not approach the high values in either phase. For these reasons, the normal range should not be interpreted as covering 95% of values. These conclusions from descriptive statistics alone can be verified in Figure 1c and d.

A choice arises when describing the distribution of an ordinal variable indicating ordered response categories, such as ambulation score in Table 1. If the variable takes many distinct values, it can be treated as a quantitative variable and described in terms of centre and spread: ordinal variables often extend from the minimum to maximum possible values, and in this case stating the range is not helpful. The meaning of the extremes should be stated in the context of the table to aid interpretation of results. Ordinal variables taking only a few distinct values are better treated as categorical variables and number (%) presented for each category. With only five categories, the latter approach was adopted for ambulation score. Display as a categorical variable can be facilitated by combining infrequently occurring adjacent values.

Describing loss of participants in a study

Loss of participants can be described in text. The following excerpt, from a study of recovery from delirium after hospital discharge, illustrates this [5]:

In the original study, 3,182 of 5,719 admissions were screened and 2,286 were eligible. Six hundred and ten patients were not available on the hospital units when the RA [Research Assistant] arrived to complete the CAM [Confusion Assessment Method]; 1,582 patients assented to complete the CAM and 94 patients did not assent; the CAM was not completed for 728 patients because an informant was not available to confirm an acute change and fluctuation in mental status prior to admission or enrolment. The CAM was completed for 854 patients; 375 had delirium; 278 were enroled. Of the 278 enroled patients, 172 were discharged before the follow-up assessment, 73 were still hospitalised, 8 withdrew from the study and 27 died. Of the 172 discharged patients, delirium recovery status was determined for 152, 16 withdrew from the study after discharge and 4 died.

The authors start with the 5,719 admissions and report the numbers lost at successive stages, to arrive at the analysis sample of 152. It may be easier to assimilate the detail of the process from tabular or graphical presentation. The CONSORT guidelines [6] concerning the reporting of Randomised Controlled Trials (RCTs) recommend that progress of participants through a trial be presented as a flow chart, and an example is shown in Figure 2. These charts are unequivocally helpful and are now presented in studies other than RCTs.

Figure 2. Recruitment and attrition rates in an RCT of WiiActive exercises in community-dwelling older adults [7].

In addition to loss of participants at each time point as shown in a flow chart, information on specific variables may be missing even though a participant was available at the study point in question. Taking Table 1 as an example, there were 395 and 373 admissions during the 1998/99 and 2000/01 phases, respectively, as stated in the column headings, but the number of participants providing information varies considerably across the characteristics in the table. The reader should be able to establish how many cases contribute to each result, and to this end, wherever the number available is lower than the total for the phase, it is stated below the descriptive statistics. For example, ambulation score was only available for 390 of the 395 participants in the 1998/99 phase. The percentages presented for ambulation score were calculated amongst cases where information was available, and this was done for all percentages in the table as indicated in the title. Alternatively, missing values in a categorical variable may be treated as a category in their own right. Where there is a large amount of missing information, this may be the best way of handling the situation, with percentages calculated from the total sample size as denominator. Stating the numbers available allows the reader to check this point. Only participants whose operation was delayed by more than 48 h gave a 'reason why operation was delayed' in the table, and from the stated numbers the reader can see that a reason was not given for all delayed cases.
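The "number (% of non-missing values)" convention in Table 1 is straightforward to compute. Here is a minimal pandas sketch with a hypothetical categorical column containing missing values; the category labels echo the ambulation score but the data are invented.

```python
import pandas as pd
import numpy as np

# Hypothetical categorical variable with missing values
score = pd.Series(["able 50 m", "unable 50 m", np.nan, "able 50 m",
                   np.nan, "1 person", "able 50 m", "unable 50 m"])

n_available = score.notna().sum()  # denominator: non-missing cases only
counts = score.value_counts()      # NaN excluded by default
pct = (score.value_counts(normalize=True) * 100).round(1)

print(pd.DataFrame({"n": counts, "%": pct}))
print(f"(n = {n_available})")
```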

Comparing baseline characteristics in RCTs

In reports of RCTs, a table describing baseline characteristics in each trial arm demonstrates whether or not randomisation was successful in producing similar groups, as well as addressing the generalisability issue. If there are differences at baseline, comparison of outcome may be confounded. Statistical tests of significance should not be used to decide whether any differences need to be taken into account [8, 9]. If the allocation was properly randomised, we know that any differences at baseline must be due to chance. The question facing the researcher is whether or not the magnitude of a difference at baseline is sufficient to confound comparison of outcome, and this depends on the strength of the relationship between the potential confounder and the outcome, as well as the baseline difference. A statistical test for baseline differences does not address this question; furthermore, there may be insufficient numbers available to detect quite large baseline differences. Statistics describing baseline characteristics are used to judge whether any differences are large enough to be important. If they are, additional analyses of outcome controlled for characteristics that differ at baseline may be performed. On the other hand, in non-randomised studies, groups are likely to differ, and statistical significance tests can be used to evaluate the evidence that the selection process of patients to each intervention results in different groups. In this situation, a primary analysis controlled for many predictors of outcome would probably have been planned, and should be carried out irrespective of any differences, or lack of them, between study groups.

Conclusions

Describing the main features of the distribution of important characteristics of the participants included in a study is the first step in most papers reporting statistical analysis. It is important in establishing the generalisability of research findings, and in the context of comparative studies, flags the need for controlled analysis. Usually space constraints limit the presentation of many descriptive statistics, and in any case, too many statistics can confuse rather than enhance insight. The attrition of subjects during a study should also be described, so that study subjects can be related to the patient base from which they were drawn.

Descriptive statistics are used to describe the participants in a study so that readers can assess the generalisability of study findings to their own clinical practice.

They need to be appropriate to the variable or participant characteristic they aim to describe, and presented in a fashion that is easy for readers to understand.

When many patient characteristics are being described, the detail of the statistics used and number of participants contributing to analysis are best incorporated in tabular presentation.

Acknowledgements

The author would like to thank Dr Helen Roberts for kindly granting permission to use data from the care pathway study [4] to produce Figure 1c and d.

Conflicts of interest

None declared.

References

[1] Pickering RM. Describing the subjects in a study. Palliat Med 2001;15:69–75.
[2] Altman DG. Practical Statistics for Medical Research. London: Chapman & Hall, 1991.
[3] Yeung P, Breheny M. Using the capability approach to understand the determinants of subjective well-being among community-dwelling older people in New Zealand. Age Ageing 2016;45:292–8.
[4] Roberts HC, Pickering RM, Onslow E et al. The effectiveness of implementing a care pathway for femoral neck fracture in older people: a prospective controlled before and after study. Age Ageing 2004;33:178–84.
[5] Cole MG, McCusker JM, Bailey R et al. Partial and no recovery from delirium after hospital discharge predict increased adverse events. Age Ageing 2017;46:90–5.
[6] Schulz KF, Altman DG, Moher D, for the CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel-group randomised trials. BMJ 2010;340:698–702.
[7] Kwok BC, Pua YH. Effects of WiiActive exercises on fear of falling and functional outcomes in community-dwelling older adults: a randomised control trial. Age Ageing 2016;45:621–28.
[8] Assman SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet 2000;355:1064–9.
[9] Altman DG. Comparability of randomized groups. Statistician 1985;34:125–36.




Conducting Survey Research

Surveys represent one of the most common types of quantitative, social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person being surveyed, an online questionnaire, a face-to-face interview, or a telephone interview. Using surveys, it is possible to collect data from large or small populations (sometimes referred to as the universe of a study).

Different types of surveys actually comprise several research techniques developed by a variety of disciplines. For instance, interviewing began as a tool primarily for psychologists and anthropologists, while sampling got its start in the field of agricultural economics (Angus and Katona, 1953, p. 15).

Survey research does not belong to any one field and it can be employed by almost any discipline. According to Angus and Katona, "It is this capacity for wide application and broad coverage which gives the survey technique its great usefulness..." (p. 16).

Types of Surveys

Surveys come in a wide range of forms and can be distributed using a variety of media.

  • Mail surveys
  • Group-administered questionnaires
  • Drop-off surveys
  • Oral surveys
  • Electronic surveys

Example Survey

General Instructions: We are interested in your writing and computing experiences and attitudes. Please take a few minutes to complete this survey. In general, when you are presented with a scale next to a question, please put an X over the number that best corresponds to your answer. For example, if you strongly agreed with the following question, you might put an X through the number 5. If you agreed moderately, you might put an X through number 4; if you neither agreed nor disagreed, you might put an X through number 3.

Example Question:

I like to read magazines like TIME or Newsweek.
Strongly Disagree   1   2   3   4   5   Strongly Agree

As is the case with all of the information we are collecting for our study, we will keep all the information you provide to us completely confidential. Your teacher will not be made aware of any of your responses. Thanks for your help.

Your Name: ___________________________________________________________

Your Instructor's Name: __________________________________________________

 

Expectations about Writing (1 = Very Little, 5 = Very Much):

1. In general, how much writing do you think will be required in your classes at CSU?
2. How much writing do you think you will be required to do after you graduate?
3. How important do you think writing will be to your career?

 

Grades (circle one: A, B, C, D, F):

4. In this class, I expect to receive a grade of . . .
5. In previous writing classes, I have usually received a grade of . . .

 

Attitudes about Writing (1 = Strongly Disagree, 5 = Strongly Agree):

6. Good writers are born, not made.    1  2  3  4  5
7. I avoid writing.    1  2  3  4  5
8. Some people have said, "Writing can be learned but it can't be taught." Do you believe it can be learned?    1  2  3  4  5
9. Do you believe writing can be taught?    1  2  3  4  5
10. Practice is the most important part of being a good writer.    1  2  3  4  5
11. I am able to express myself clearly in my writing.    1  2  3  4  5
12. Writing is a lot of fun.    1  2  3  4  5
13. Good teachers can help me become a better writer.    1  2  3  4  5
14. Talent is the most important part of being a good writer.    1  2  3  4  5
15. Anyone with at least average intelligence can learn to be a good writer.    1  2  3  4  5
16. I am no good at writing.    1  2  3  4  5
17. I enjoy writing.    1  2  3  4  5
18. Discussing my writing with others is an enjoyable experience.    1  2  3  4  5
19. Compared to other students, I am a good writer.    1  2  3  4  5
20. Teachers who have read my writing think I am a good writer.    1  2  3  4  5
21. Other students who have read my writing think I am a good writer.    1  2  3  4  5
22. My writing is easy to understand.    1  2  3  4  5

 

Experiences in Previous Writing Classes (1 = Strongly Disagree, 5 = Strongly Agree):

23. On some of my past writing assignments, I have been required to submit rough drafts of my papers.    1  2  3  4  5
24. I've taken some courses that focused primarily on spelling, grammar, and punctuation.    1  2  3  4  5
25. In previous writing classes, I've had to revise my papers.    1  2  3  4  5
26. Some of my former writing teachers were more interested in my ideas than in my spelling, punctuation, and grammar.    1  2  3  4  5
27. In some of my former writing classes, I've commented on other students' papers.    1  2  3  4  5
28. In some of my former writing classes, I spent a lot of time working in groups.    1  2  3  4  5
29. Some of my former teachers acted as though the most important part of writing was spelling, punctuation, and grammar.    1  2  3  4  5

Please indicate the TIMES PER MONTH or HOURS PER WEEK you engage in the following activities:

Writing Activities: How many TIMES PER MONTH do you ...

30. Write in your journal    0  1  2  3  4+
31. Write poetry on your own    0  1  2  3  4+
32. Write letters to friends or family    0  1  2  3  4+
33. Write fiction    0  1  2  3  4+
34. Write papers for class    0  1  2  3  4+
35. Write for publication    0  1  2  3  4+

Reading Activities: How many HOURS PER WEEK do you ...

36. Read the newspaper    0  1  2  3  4+
37. Read fiction for pleasure    0  1  2  3  4+
38. Read magazines    0  1  2  3  4+
39. Read for class    0  1  2  3  4+

 

Attitudes about Computers (1 = Strongly Disagree, 5 = Strongly Agree):

40. The challenge of learning about computers is exciting.    1  2  3  4  5
41. I am confident that I can learn computer skills.    1  2  3  4  5
42. Anyone can learn to use a computer if they are patient and motivated.    1  2  3  4  5
43. Learning to operate computers is like learning any new skill--the more you practice, the better you become.    1  2  3  4  5
44. I feel apprehensive about working with computers.    1  2  3  4  5
45. I have difficulty in understanding the technical aspects of computers.    1  2  3  4  5
46. It scares me to think that I could cause the computer to destroy a large amount of information by hitting the wrong key.    1  2  3  4  5
47. You have to be a genius to understand all the special commands used by most computer programs.    1  2  3  4  5
48. If given the opportunity, I would like to learn about and use computers.    1  2  3  4  5
49. I have avoided computers because they are unfamiliar and somewhat intimidating to me.    1  2  3  4  5
50. I feel computers are necessary tools in both educational and work settings.    1  2  3  4  5

51. I own my own computer.    No / Yes
52. I don't own my own computer, but I regularly use my parents' or a friend's computer.    No / Yes

Written Surveys

Imagine that you are interested in exploring the attitudes college students have about writing. Since it would be impossible to interview every student on campus, a mail-out survey would enable you to reach a large sample of college students. You might choose to limit your research to your own college or university, or you might extend your survey to several different institutions. If your research question demands it, the mail survey allows you to sample a very broad group of subjects at small cost.

Strengths and Weaknesses of Mail Surveys

Cost: Mail surveys are low in cost compared to other methods of surveying. According to Bourque and Fielder (p. 9), this type of survey can cost up to 50% less than a telephone survey and almost 75% less than a face-to-face survey. Mail surveys are also substantially less expensive than drop-off and group-administered surveys.

Convenience: Since many of these types of surveys are conducted through a mail-in process, the participants are able to work on the surveys at their leisure.

Bias: Because the mail survey does not allow for personal contact between the researcher and the respondent, there is little chance for personal bias based on first impressions to alter the responses. This is an advantage: an interviewer who makes a poor impression can unfavorably affect survey results. It can also be a disadvantage, however, because no interviewer is present to build rapport or clarify confusing items.

Sampling: It is possible to reach a greater population and have a larger universe (sample of respondents) with this type of survey because it does not require personal contact between the researcher and the respondents.

Low Response Rate: One of the biggest drawbacks to written surveys, especially the mail-in, self-administered variety, is the low response rate. Compared to a telephone survey or a face-to-face survey, the mail-in written survey has a response rate of just over 20%.

Ability of Respondent to Answer Survey: Another problem with self-administered surveys is three-fold: they make assumptions about the physical ability, literacy level, and language ability of the respondents. Because most surveys draw participants from a random sample, it is impossible to control for such variables. Some members of a survey group may have a primary language different from that of the survey. Others may be illiterate or read at a low level and therefore might not be able to answer the questions accurately. Along the same lines, persons with conditions that make reading difficult, such as dyslexia, visual impairment, or old age, may not be able to complete the survey.

Imagine that you are interested in finding out how instructors who teach composition in computer classrooms at your university feel about the advantages of teaching in a computer classroom over a traditional classroom. You have a very specific population in mind, and so a mail-out survey would probably not be your best option. You might try an oral survey, but if you are doing this research alone this might be too time consuming. The group administered questionnaire would allow you to get your survey results in one sitting and would ensure a very high response rate (higher than if you stuck a survey into each instructor's mailbox). Your challenge would be to get everyone together. Perhaps your department holds monthly technology support meetings that most of your chosen sample would attend. Your challenge at this point would be to get permission to use part of the meeting time to administer the survey, or to convince the instructors to stay to fill it out after the meeting. Despite the challenges, this type of survey might be the most efficient for your specific purposes.

Strengths of Group Administered Questionnaires

Rate of Response: This second type of written survey is generally administered to a sample of respondents in a group setting, guaranteeing a high response rate.

Specificity: This type of written survey can be very versatile, allowing for a spectrum of open and closed ended types of questions and can serve a variety of specific purposes, particularly if you are trying to survey a very specific group of people.

Weaknesses of Group Administered Questionnaires

Sampling: This method is practical only with small samples, and as a result it is not the best method for surveys that would benefit from a large sample. It is most useful in cases that call for very specific information from specific groups.

Scheduling: Since this method requires a group of respondents to answer the survey together, it demands a slot of time that is convenient for all respondents.

Imagine that you would like to find out about how the dorm dwellers at your university feel about the lack of availability of vegetarian cuisine in their dorm dining halls. You have prepared a questionnaire that requires quite a few long answers, and since you suspect that the students in the dorms may not have the motivation to take the time to respond, you might want a chance to tell them about your research, the benefits that might come from their responses, and to answer their questions about your survey. To ensure the highest response rate, you would probably pick a time of the day when you are sure that the majority of the dorm residents are home, and then work your way from door to door. If you don't have time to interview the number of students you need in your sample, but you don't trust the response rate of mail surveys, the drop-off survey might be the best option for you.

Strengths and Weaknesses of Drop-off Surveys

Convenience: Like the mail survey, the drop-off survey allows the respondents to answer the survey at their own convenience.

Response Rates: The response rates for the drop-off survey are better than the mail survey because it allows the interviewer to make personal contact with the respondent, to explain the importance of the survey, and to answer any questions or concerns the respondent might have.

Time: Because of the personal contact it requires, this method takes considerably more time than the mail survey.

Sampling: Because of the time it takes to make personal contact with the respondents, the universe of this kind of survey will be considerably smaller than a mail survey's pool of respondents.

Response: The response rate for this type of survey, although considerably better than that of the mail survey, is still not as high as the response rate you will achieve with an oral survey.

Oral surveys are considered a more personal form of survey than written or electronic methods, and they are generally used to gather thorough opinions and impressions from the respondents.

Oral surveys can be administered in several different ways. For instance, in a group interview, as opposed to a group administered written survey, each respondent is not given an instrument (an individual questionnaire). Instead, the respondents work in groups to answer the questions together while one person takes notes for the whole group. Another more familiar form of oral survey is the phone survey. Phone surveys can be used to get short one word answers (yes/no), as well as longer answers.

Strengths and Weaknesses of Oral Surveys

Personal Contact: Oral surveys conducted either on the telephone or in person give the interviewer the ability to answer questions from the participant. If the participant, for example, does not understand a question or needs further explanation on a particular issue, it is possible to converse with the participant. According to Glastonbury and MacKean, "interviewing offers the flexibility to react to the respondent's situation, probe for more detail, seek more reflective replies and ask questions which are complex or personally intrusive" (p. 228).

Response Rate: Although obtaining a certain number of respondents who are willing to take the time to do an interview is difficult, the researcher has more control over the response rate in oral survey research than with other types of survey research. As opposed to mail surveys where the researcher must wait to see how many respondents actually answer and send back the survey, a researcher using oral surveys can, if the time and money are available, interview respondents until the required sample has been achieved.

Cost: The most obvious disadvantage of face-to-face and telephone surveys is their cost. It takes time to collect enough data for a complete survey, and time translates into payroll costs and sometimes payment for the participants.

Bias: Using face-to-face interviews for your survey may also introduce bias, from either the interviewer or the interviewee.

Types of Questions Possible: Certain types of questions are not convenient for this type of survey, particularly for phone surveys where the respondent does not have a chance to look at the questionnaire. For instance, if you want to offer the respondent a choice of 5 different answers, it will be very difficult for respondents to remember all of the choices, as well as the question, without a visual reminder. This problem requires the researcher to take special care in constructing questions to be read aloud.

Attitude: Anyone who has ever been interrupted during dinner by a phone interviewer is aware of the negative feelings many people have about answering a phone survey. Upon receiving these calls, many potential respondents will simply hang up.

With the growth of the Internet (and in particular the World Wide Web) and the expanded use of electronic mail for business communication, the electronic survey is becoming a more widely used survey method. Electronic surveys can take many forms. They can be distributed as electronic mail messages sent to potential respondents. They can be posted as World Wide Web forms on the Internet. And they can be distributed via publicly available computers in high-traffic areas such as libraries and shopping malls. In many cases, electronic surveys are placed on laptops and respondents fill out a survey on a laptop computer rather than on paper.

Strengths and Weaknesses of Electronic Surveys

Cost-savings: It is less expensive to send questionnaires online than to pay for postage or for interviewers.

Ease of Editing/Analysis: It is easier to make changes to the questionnaire and to copy and sort data.

Faster Transmission Time: Questionnaires can be delivered to recipients in seconds, rather than in days as with traditional mail.

Easy Use of Preletters: Invitations can be sent and responses received in a very short time, giving you early estimates of the likely participation level.

Higher Response Rate: Research shows that response rates on private networks are higher with electronic surveys than with paper surveys or interviews.

More Candid Responses: Research shows that respondents may answer more honestly with electronic surveys than with paper surveys or interviews.

Potentially Quicker Response Time with Wider Magnitude of Coverage: Due to the speed of online networks, participants can answer in minutes or hours, and coverage can be global.

Sample Demographic Limitations: The population and sample are limited to those with access to a computer and an online network.

Lower Levels of Confidentiality: Due to the open nature of most online networks, it is difficult to guarantee anonymity and confidentiality.

Layout and Presentation issues: Constructing the format of a computer questionnaire can be more difficult the first few times, due to a researcher's lack of experience.

Additional Orientation/Instructions: More instruction and orientation to the computer online systems may be necessary for respondents to complete the questionnaire.

Potential Technical Problems with Hardware and Software: As most of us (perhaps all of us) know all too well, computers have a much greater likelihood of "glitches" than oral or written forms of communication.

Response Rate: Even though research shows that e-mail response rates are higher, Oppermann (1995) warns that most of these studies found response rates higher only during the first few days; thereafter, the rates were not significantly higher.

Designing Surveys

Initial planning of the survey design and survey questions is extremely important in conducting survey research. Once surveying has begun, it is difficult or impossible to adjust the basic research questions under consideration or the tool used to address them since the instrument must remain stable in order to standardize the data set. This section provides information needed to construct an instrument that will satisfy basic validity and reliability issues. It also offers information about the important decisions you need to make concerning the types of questions you are going to use, as well as the content, wording, order and format of your survey questionnaire.

Overall Design Issues

Four key issues should be considered when designing a survey or questionnaire: respondent attitude, the nature of the items (or questions) on the survey, the cost of conducting the survey, and the suitability of the survey to your research questions.

Respondent attitude: When developing your survey instrument, it is important to try to put yourself into your target population's shoes. Think about how you might react when approached by a pollster while out shopping or when receiving a phone call from a pollster while you are sitting down to dinner. Think about how easy it is to throw away a response survey that you've received in the mail. When developing your instrument, it is important to choose the method you think will work for your research, but also one in which you have confidence. Ask yourself what kind of survey you, as a respondent, would be most apt to answer.

Nature of questions: It is important to consider the relationship between the medium that you use and the questions that you ask. For instance, certain types of questions are difficult to answer over the telephone. Think of the problems you would have in attempting to record Likert scale responses, as in closed-ended questions, over the telephone--especially if a scale of more than five points is used. Responses to open-ended questions would also be difficult to record and report in telephone interviews.

Cost: Along with decisions about the nature of the questions you ask, expense issues also enter into your decision making when planning a survey. The population under consideration, the geographic distribution of this sample population, and the type of questionnaire used all affect costs.

Ability of instrument to meet needs of research question: Finally, there needs to be a logical link between your survey instrument and your research questions. If it is important to get a large number of responses from a broad sample of the population, you obviously will not choose to do a drop-off written survey or an in-person oral survey. Because of the size of the needed sample, you will need to choose a survey instrument that meets this need, such as a phone or mail survey. If you are interested in getting thorough information that might require a large amount of interaction between the interviewer and respondent, you will probably pick an in-person oral survey with a smaller sample of respondents. Your questions, then, will need to reflect both your research goals and your choice of medium.

Creating Questionnaire Questions

Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.

Open-ended vs. Closed-ended Questions

All researchers must make two basic decisions when designing a survey: 1) whether they are going to employ an oral, written, or electronic method, and 2) whether they are going to choose questions that are open-ended or closed-ended.

Closed-Ended Questions: Closed-ended questions limit respondents' answers to the survey. The participants are allowed to choose from a pre-existing set of answers: dichotomous options such as yes/no or true/false, multiple choice with an option for "other" to be filled in, or ranking scale response options. The most common of the ranking scale questions is called the Likert scale question. This kind of question asks the respondents to look at a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then "rank" this statement according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").
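To make the link between a closed-ended item and its later analysis concrete, here is a minimal Python sketch; the scale labels and numeric coding are illustrative assumptions, not taken from any published instrument:

```python
# A closed-ended Likert item maps each verbal option to a numeric code,
# so responses can be tabulated or averaged without further interpretation.
LIKERT_5 = {
    "strongly disagree": 1,
    "somewhat disagree": 2,
    "no opinion": 3,
    "somewhat agree": 4,
    "strongly agree": 5,
}

def code_response(answer: str) -> int:
    """Convert a respondent's verbal choice to its numeric code."""
    return LIKERT_5[answer.strip().lower()]

responses = ["strongly agree", "somewhat agree", "no opinion"]
codes = [code_response(r) for r in responses]
print(codes)                    # [5, 4, 3]
print(sum(codes) / len(codes))  # 4.0 -- the item's mean agreement
```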

Open-Ended Questions: Open-ended questions do not give respondents answers to choose from, but rather are phrased so that the respondents are encouraged to explain their answers and reactions to the question with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to find information on the same topic as asked above (the future of elementary education), but would like to find out what respondents would come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic, but would still not like to limit the participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."

Note: Keep in mind that you do not have to use closed-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use closed-ended questions at the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed up."

Rating scales: ask respondents to rate something like an idea, concept, individual, program, product, etc. based on a closed ended scale format, usually on a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and the respondents are asked to which degree they disagree or agree.

Ranking scales: ask respondents to rank a set of ideas or things, etc. For example, a researcher can provide respondents with a list of ice cream flavors, and then ask them to rank these flavors in order of which they like best, with the rank of "one" representing their favorite. These are more difficult to use than rating scales. They will take more time, and they cannot easily be used for phone surveys since they often require visual aids. However, since ranking scales are more difficult, they may actually increase appropriate effort from respondents.

Magnitude estimation scales: ask respondents to provide numeric estimation of answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, these scales have been found to help increase variance explanations over ordinal scaling.

Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.

Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.

Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or when they are not expected to have an articulate opinion regarding an issue.

Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.

The wording of survey questions is a tricky endeavor. It is difficult to develop shared meanings or definitions between researchers and the respondents, and among respondents.

In The Practice of Social Research, Keith Crew, a professor of Sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage which is closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).

This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.

To write effective questions, researchers need to keep in mind these four important techniques: directness, simplicity, specificity, and discreteness.

  • Questions should be written in a straightforward, direct language that is not caught up in complex rhetoric or syntax, or in a discipline's slang or lingo. Questions should be specifically tailored for a group of respondents.
  • Questions should be kept short and simple. Respondents should not be expected to learn new, complex information in order to answer questions.
  • Specific questions are for the most part better than general ones. Research shows that the more general a question is the wider the range of interpretation among respondents. To keep specific questions brief, researchers can sometimes use longer introductions that make the context, background, and purpose of the survey clear so that this information is not necessary to include in the actual questions.
  • Avoid questions that are overly personal or direct, especially when dealing with sensitive issues.

When considering the content of your questionnaire, obviously the most important consideration is whether the content of the questions will elicit the kinds of answers necessary to address your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also consider the following questions as you are creating your initial questionnaire:

  • Does your choice of open or close-ended questions lead to the types of answers you would like to get from your respondents?
  • Is every question in your survey integral to your intent? Superfluous questions that have already been addressed or are not relevant to your study will waste the time of both the respondents and the researcher.
  • Does one topic warrant more than one question?
  • Do you give enough prior information/context for each set of questions? Sometimes lead-in questions are useful to help the respondent become familiar and comfortable with the topic.
  • Are the questions both general enough (standardized and relevant to your entire sample) and specific enough (avoiding vague generalizations and ambiguity)?
  • Is each question as succinct as it can be without leaving out essential information?
  • Finally, and most importantly, try to put yourself in your respondents' shoes. Write a survey that you would be willing to answer yourself, and be polite, courteous, and sensitive. Thank the respondent for participating both at the beginning and the end of the survey.

Order of Questions

Although there are no general rules for ordering survey questions, there are still a few questions researchers can ask themselves when setting up a questionnaire (pretesting can help determine whether the ordering is effective):

  • Which topics should start the survey off, and which should wait until the end of the survey?
  • What kind of preparation do my respondents need for each question?
  • Do the questions move logically from one to the next, and do the topics lead up to each other?

The following general guidelines for ordering survey questions can address these questions:

  • Use warm-up questions. Easier questions will ease the respondent into the survey and will set the tone and the topic of the survey.
  • Sensitive questions should not appear at the beginning of the survey. Try to put the respondent at ease before addressing uncomfortable issues. You may also prepare the reader for these sensitive questions with some sort of written preface.
  • Consider transition questions that make logical links.
  • Try not to mix topics. Topics can easily be placed into "sets" of questions.
  • Try not to put the most important questions last. Respondents may become bored or tired before they get to the end of the survey.
  • Be careful with contingency questions ("If you answered yes to the previous question . . . etc.").
  • If you are using a combination of open and closed-ended questions, try not to start your survey with open-ended questions. Respondents will be more likely to answer the survey if they are allowed the ease of closed-ended questions first.

Borrowing Questions

Before developing a survey questionnaire, Converse and Presser (1986) recommend that researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. This will not only give you some ideas on how to develop your questionnaire, but you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, you will need to take care to only use questions that are relevant to your study, and you will usually have to develop some questions on your own.

Advantages of Closed-Ended Questions

  • Closed-ended questions are more easily analyzed. Every answer can be given a number or value so that a statistical interpretation can be assessed. Closed-ended questions are also better suited for computer analysis. If open-ended questions are analyzed quantitatively, the qualitative information is reduced to coding and answers tend to lose some of their initial meaning. Because of the simplicity of closed-ended questions, this kind of loss is not a problem.
  • Closed-ended questions can be more specific, thus more likely to communicate similar meanings. Because open-ended questions allow respondents to use their own words, it is difficult to compare the meanings of the responses.
  • In large-scale surveys, closed-ended questions take less time from the interviewer, the participant, and the researcher, and so are a less expensive survey method. The response rate is also higher with surveys that use closed-ended questions than with those that use open-ended questions.

Advantages of Open-Ended Questions

  • Open-ended questions allow respondents to include more information, including feelings, attitudes and understanding of the subject. This allows researchers to better access the respondents' true feelings on an issue. Closed-ended questions, because of the simplicity and limit of the answers, may not offer the respondents choices that actually reflect their real feelings. Closed-ended questions also do not allow the respondent to explain that they do not understand the question or do not have an opinion on the issue.
  • Open-ended questions cut down on two types of response error; respondents are not likely to forget the answers they have to choose from if they are given the chance to respond freely, and open-ended questions simply do not allow respondents to disregard reading the questions and just "fill in" the survey with all the same answers (such as filling in the "no" box on every question).
  • Because they allow for obtaining extra information from the respondent, such as demographic information (current employment, age, gender, etc.), surveys that use open-ended questions can be used more readily for secondary analysis by other researchers than can surveys that do not provide contextual information about the survey population.

Potential Problems with Survey Questions

While designing questions for a survey, researchers should be aware of a few problems and how to avoid them:

"Everyone has an opinion": It is incorrect to assume that each respondent has an opinion regarding every question. Therefore, you might offer a "no opinion" option to avoid this assumption. Filters can also be created. For example, researchers can ask respondents if they have any thoughts on an issue, to which they have the option to say "no."

Agree and disagree statements: according to Converse and Presser (1986), these statements suffer from "acquiescence," the tendency of respondents to agree despite question content (p. 35). Researchers can avoid this problem by using forced-choice questions with these statements.

Response order bias: this occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate. Typically, the respondent chooses the last or first response option. This problem might occur if researchers use long lists and/or rating scales.

Response set: this problem can occur when using a close-ended question format with response options like yes/no or agree/disagree. Sometimes respondents do not consider each question and just answer no or disagree to all questions.
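One practical screen for response sets, sketched below under the assumption that each respondent's closed-ended answers have already been coded as numbers (the data are invented for illustration):

```python
# Flag possible "response sets": respondents who gave the identical answer
# to every closed-ended item (e.g., "disagree" straight down the page).
# Flagged cases deserve a second look rather than automatic exclusion.
def flag_response_sets(respondents):
    return [rid for rid, answers in respondents.items()
            if len(answers) > 1 and len(set(answers)) == 1]

data = {
    "r01": [1, 1, 1, 1, 1, 1],  # same answer to everything: flagged
    "r02": [2, 4, 3, 5, 1, 2],  # varied answers: not flagged
}
print(flag_response_sets(data))  # ['r01']
```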

Telescoping: occurs when respondents report that an event took place more recently than it actually did. To avoid this problem, Frey and Mertens (1995) say researchers can use "aided recall": a reference point or landmark, or a list of events or behaviors (p. 101).

Forward telescoping: occurs when respondents include events that actually happened before the established time frame. This results in overreporting. According to Converse and Presser (1986), researchers can use "bounded recall" to avoid this problem (p. 21). Bounded recall is when researchers interview respondents several months or so after the initial interview to inquire about events that have happened since then. This technique, however, requires more resources. Converse and Presser note that researchers can also simply narrow the reference points used, which has been shown to reduce this problem too.

Fatigue effect: happens when respondents grow bored or tired during the interview. To avoid this problem, Frey and Mertens (1995) say researchers can use transitions, vary questions and response options, and put easy-to-answer questions at the end of the questionnaire.

Types of Questions to Avoid

  • Double-barreled questions: force respondents to make two decisions in one. For example, a question like "Do you think women and children should be given the first available flu shots?" does not allow the respondent to choose between women and children.
  • Double-negative questions: for example, "Please tell me whether you agree or disagree with this statement: Graduate teaching assistants should not be required to help students outside of class." Respondents may confuse the meaning of the disagree option.
  • Hypothetical questions: typically too difficult for respondents, since they require extra scrutiny. For example, "If there were a cure for cancer, would you still support euthanasia?"
  • Ambiguous questions: respondents might not understand the question.
  • Biased questions: for example, "Don't you think that suffering terminal cancer patients should be allowed to be released from their pain?" Researchers should never make one response option look more suitable than another.
  • Questions with long lists: these questions may tire respondents, or respondents may lose track of the question.

Pretesting the Questionnaire

Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording and order.

There are two types of survey pretests: participating and undeclared.

  • Participating pretests dictate that you tell respondents that the pretest is a practice run; rather than asking the respondents to simply fill out the questionnaire, participating pretests usually involve an interview setting where respondents are asked to explain reactions to question form, wording and order. This kind of pretest will help you determine whether the questionnaire is understandable.
  • When conducting an undeclared pretest , you do not tell respondents that it is a pretest. The survey is given just as you intend to conduct it for real. This type of pretest allows you to check your choice of analysis and the standardization of your survey. According to Converse and Presser (1986), if researchers have the resources to do more than one pretest, it might be best to use a participatory pretest first, then an undeclared test.

General Applications of Pretesting:

Whether you use a participating or an undeclared pretest, pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if they have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.

Pretesting for reliability and validity:

Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered by respondents the same way each time. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with their answers in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Both convergent and divergent validity can be determined by first comparing answers to another question measuring the same concept, and then comparing the answer to the participant's response to a question that asks for the exact opposite answer.

For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
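One rough way to run this check on pretest data is to correlate the coded answers to the paired items. The sketch below uses invented response codes and Python's standard statistics module (3.10+):

```python
from statistics import correlation  # Python 3.10+

# Pretest answers coded 1-5. Items A and B measure the same concept,
# so they should correlate positively (convergent validity); item C asks
# for the opposite, so it should correlate negatively (divergent validity).
item_a = [5, 4, 4, 2, 1, 5, 3]  # "next president should be a Republican"
item_b = [5, 5, 4, 2, 1, 4, 3]  # party respondent expects to vote for, coded
item_c = [1, 2, 2, 4, 5, 1, 3]  # "will you vote Democrat", coded

print(correlation(item_a, item_b))  # close to +1: convergent validity
print(correlation(item_a, item_c))  # close to -1: divergent validity
```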

Conducting Surveys

Once you have constructed a questionnaire, you'll need to make a plan that outlines how and to whom you will administer it. There are a number of options available in order to find a relevant sample group amongst your survey population. In addition, there are various considerations involved with administering the survey itself.

Administering a Survey

This section attempts to answer the question: "How do I go about getting my questionnaire answered?"

For all types of surveys, some basic practicalities need to be considered before the surveying begins. For instance, you need to find the most convenient time to carry out the data collection (this becomes particularly important in interview surveying and group-administered surveys) and to estimate how long the data collection is likely to take. Finally, you need to make practical arrangements for administering the survey. Pretesting your survey will help you determine the time it takes to administer, process, and analyze your survey, and will also help you clear out some of the bugs.

Administering Written Surveys

Written surveys can be handled in several different ways. A research worker can deliver the questionnaires to the homes of the sample respondents, explain the study, and then pick the questionnaires up on a later date (or, alternately, ask the respondent to mail the survey back when completed). Another option is mailing questionnaires directly to homes and having researchers pick up and check the questionnaires for completeness in person. This method has proven to have higher response rates than straightforward mail surveys, although it tends to take more time and money to administer.

It is important to put yourself into the role of respondent when deciding how to administer your survey. Most of us have received and thrown away a mail survey, and so it may be useful to think back to the reasons you had for not filling it out and returning it. Here are some ideas for boosting your response rate:

  • Include in each questionnaire a letter of introduction and explanation, and a self-addressed, stamped envelope for returning the questionnaire.
  • Oftentimes, when it fits the study's budget, the envelope might also include a monetary "reward" (usually a dollar to five dollars) as an incentive to fill out the survey.
  • Another method for saving the responder time is to create a self-mailing questionnaire that requires no envelope but folds easily so that the return address appears on the outside. The easier you make the process of completing and returning the survey, the better your survey results will be.
  • Follow-up mailings are an important part of administering mail surveys. Nonrespondents can be sent letters of additional encouragement to participate. Even better, a new copy of the survey can be sent to nonrespondents. Methodological literature suggests that three follow-up letters are adequate, and that two to three weeks should be allowed between each mailing.

Administering Oral Surveys

Face-To-Face Surveys

Oftentimes conducting oral surveys requires a staff of interviewers; to control this variable as much as possible, the presentation and preparation of the interviewer is an important consideration.

  • In any face-to-face interview, the appearance of the interviewer is important. Since the success of any survey relies on the interest of the participants to respond to the survey, the interviewer should take care to dress and act in such a way that would not offend the general sample population.
  • Of equal importance is the preparedness of the interviewer. The interviewer should be well acquainted with the questions, and have ample practice administering the survey with mock interviews. If several interviewers will be used, they should be trained as a group to ensure standardization and control. Interviewers also need to carry a letter of identification/authentication to present at in-person surveys.

When actually administering the survey, you need to make decisions about how much of the participants' responses need to be recorded, how much the interviewer will need to "probe" for responses, and how much the interviewer will need to account for context (the respondent's age, race, gender, reaction to the study, etc.). If you are administering a closed-ended question survey, these may not be considerations. On the other hand, when recording more open-ended responses, the researcher needs to decide beforehand on each of these factors:

  • Whether the interview should be recorded word for word, or whether the interviewer should record only general impressions and opinions, depends on the purpose of the study. For the sake of precision, however, the former approach is preferred: more information is always better than less when it comes to analyzing the results.
  • Sometimes respondents will respond to a question with an inappropriate answer; this can happen with both open- and closed-question surveys. Even if you give the participant structured choices like "I agree" or "I disagree," they might respond "I think that is true," which requires the interviewer to probe for an appropriate answer. In an open-question survey, this probing becomes more challenging. The interviewer might come prepared with a set of potential questions for occasions when the respondent does not elaborate enough or strays from the subject. The nature of these probes, however, needs to be determined by the researcher rather than ad-libbed by the interviewers, and the probes should be carefully controlled so that they do not lead the respondent to change answers.

Phone Surveys

Phone surveys certainly involve all of the preparation of face-to-face surveys, but they encounter new problems because of their reputation. It is much easier to hang up on a phone surveyor than to slam the door in someone's face, so the sheer number of calls needed to complete a survey can be staggering. Computer innovation has tempered this problem somewhat by allowing quick, random-number dialing and letting interviewers type answers into programs that automatically set up the data for analysis. Systems like CATI (computer-assisted telephone interviewing) have made phone surveys a more cost- and time-effective method, and therefore a popular one, although respondents are becoming more and more reluctant to answer phone surveys because of the increase in telemarketing.
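Random-digit dialing itself is straightforward to sketch; the area code and number format below are invented for illustration:

```python
import random

# Random-digit dialing: generating numbers (rather than drawing them from
# a directory) gives unlisted households the same chance of selection.
def random_phone(area_code="970"):
    exchange = random.randint(200, 999)  # skip invalid 0xx/1xx exchanges
    line = random.randint(0, 9999)
    return f"({area_code}) {exchange}-{line:04d}"

calling_list = [random_phone() for _ in range(5)]
print(calling_list)
```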

Before conducting a survey, you must choose a relevant survey population. Unless a survey population is very small, it is usually impossible to survey the entire relevant population. Therefore, researchers usually survey a sample of a population drawn from an actual list of the relevant population, which is called a sampling frame. With a carefully selected sample, researchers can make estimations or generalizations regarding an entire population's opinions, attitudes or beliefs on a particular topic.

Sampling Procedures and Methods

There are two different types of sampling procedures: probability and nonprobability. Probability sampling methods ensure that each person in the population has a known, nonzero chance of being selected, whereas nonprobability methods target specific individuals. Nonprobability sampling methods include the following:

  • Purposive samples: to purposely select individuals to survey.
  • Volunteer subjects: to ask for volunteers to survey.
  • Haphazard sampling: to survey individuals who can be easily reached.
  • Quota sampling: to select individuals based on a set quota. For example, if a census indicates that more than half of the population is female, then the sample will be adjusted accordingly.

Clearly, there can be an inherent bias in nonprobability methods. Therefore, according to Weisberg, Krosnick, and Bowen (1989), it is not surprising that most survey researchers prefer probability sampling methods. Some commonly used probability sampling methods for surveys are:

  • Simple random sample: a sample is drawn randomly from a list of individuals in a population.
  • Systematic selection procedure sample: a variant of the simple random sample in which a random starting point is chosen and individuals are then selected at a fixed interval down the list (every tenth person, for example).
  • Stratified sample: dividing up the population into smaller groups, and randomly sampling from each group.
  • Cluster sample: dividing up a population into smaller groups, and then only sampling from one of the groups. Cluster sampling, according to Lee, Forthofer, and Lorimer (1989), "is considered a more practical approach to surveys because it samples by groups or clusters of elements rather than by individual elements" (p. 12). It also reduces interview costs. However, Weisberg et al. (1989) note that accuracy declines when using this sampling method.
  • Multistage sampling: first, sampling a set of geographic areas. Then, sampling a subset of areas within those areas, and so on.
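As a rough illustration, the first three of these methods can be sketched in a few lines of Python; the population, strata, and sample sizes are invented:

```python
import random

population = [f"student_{i:03d}" for i in range(500)]

# Simple random sample: every individual has an equal chance of selection.
srs = random.sample(population, k=25)

# Systematic selection: pick a random start, then every kth individual.
k = len(population) // 25
start = random.randrange(k)
systematic = population[start::k][:25]

# Stratified sample: divide the population into groups, then sample
# randomly from within each group.
strata = {"freshmen": population[:200], "upperclass": population[200:]}
stratified = {name: random.sample(group, k=10) for name, group in strata.items()}
```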

Sampling and Nonsampling Errors

Directly related to sample size are the concepts of sampling and nonsampling errors. According to Fox and Tracy (1986), surveys are subject to both sampling errors and nonsampling errors.

A sampling error arises from the fact that samples inevitably differ from their populations; survey sample results should therefore be seen only as estimations. Weisberg et al. (1989) said sampling errors cannot be calculated for nonprobability samples, but they can be determined for probability samples. To determine sampling error, first look at the sample size, and then at the sampling fraction: the percentage of the population that is being surveyed. The more people surveyed, the smaller the error. This error can also be reduced, according to Fox and Tracy (1986), by increasing the representativeness of the sample.

There are also two kinds of nonsampling error: random and nonrandom. Fox and Tracy (1986) said random errors decrease the reliability of measurements; these errors can be reduced through repeated measurements. Nonrandom errors result from bias in the survey data, which is connected to response and nonresponse bias.

Confidence Level and Interval

Any statement of sampling error must contain two essential components: the confidence level and the confidence interval. These two components are used together to express the accuracy of the sample's statistics in terms of the level of confidence that the statistics fall within a specified interval from the true population parameter. For example, a researcher may be "95 percent confident" that the sample statistic (that 50 percent favor candidate X) is within plus or minus 5 percentage points of the population parameter. In other words, the researcher is 95 percent confident that between 45 and 55 percent of the total population favor candidate X.
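The arithmetic behind such statements can be checked directly. A minimal sketch, assuming the usual normal-approximation formula for a proportion (z of about 1.96 at the 95% confidence level):

```python
from math import sqrt

# Margin of error for a proportion: z * sqrt(p * (1 - p) / n).
# p = 0.5 is the most conservative (widest) case; z = 1.96 gives 95%.
def margin_of_error(n, p=0.5, z=1.96):
    return z * sqrt(p * (1 - p) / n)

# Around 384 respondents yield the "plus or minus 5 points" in the example.
print(round(margin_of_error(384) * 100, 1))   # 5.0 (percentage points)
print(round(margin_of_error(1000) * 100, 1))  # 3.1 -- more people, less error
```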

Lauer and Asher (1988, p. 58) provide a table of confidence interval limits for percentages based upon sample size (95% confidence intervals, assuming a population incidence of 50% and a population that is large relative to the sample size).

Confidence Limits and Sample Size

When selecting a sample size, one can consider that surveying a higher number of individuals from a target group yields a tighter measurement, while a lower number yields a looser range of confidence limits. The confidence limits may need to be corrected if, according to Lauer and Asher (1988), "the sample size starts to approach the population size" or if "the variable under scrutiny is known to have a much [original emphasis] smaller or larger occurrence than 50% in the whole population" (p. 59). For smaller populations, Singleton (1988) said the standard error or confidence interval should be multiplied by a correction factor equal to sqrt(1 - f), where "f" is the sampling fraction, or proportion of the population included in the sample.
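Applied in code, this correction is a one-liner; the sample and population sizes below are invented:

```python
from math import sqrt

# Finite population correction: multiply the margin of error (or standard
# error) by sqrt(1 - f), where f = n / N is the sampling fraction.
def corrected_margin(margin, n, N):
    return margin * sqrt(1 - n / N)

# Sampling 200 people from a population of only 400 (f = 0.5) shrinks a
# 5-point margin to about 3.5 points.
print(round(corrected_margin(0.05, 200, 400), 4))  # 0.0354
```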

Lauer and Asher (1988) give a table of correction factors for confidence limits where sample size is an important part of population size (p. 60) and also a table of correction factors for where the percentage incidence of the parameter in the population is not 50% (p. 61).

Lauer and Asher (1988) also provide two correction-factor tables: one giving correction factors for confidence limits when the sample size (n) is an important part of the population size (N >= 100; for n over 70% of N, take all of N) (p. 60), and one giving correction factors for variables whose percentage incidence in the population is much rarer or more common than 50% (p. 61).

Analyzing Survey Results

After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages. How you conduct these steps will depend on the scope of your study, your own capabilities, and the audience to whom you wish to direct the work.

Processing the Results

It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do an unlimited amount of analysis (p. 243).

Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, "The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables" [regarding those produced by social science computer software] (p. 100). Of course, editing may not always be necessary, if, for example, you are doing a qualitative analysis of open-ended questions, or the survey is part of a larger project and gets distributed to other agencies for analysis. However, editing could be as simple as checking the information input into the computer.

All of this information should be used to test for statistical significance. See our guide on Statistics for more on this topic.

Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day and the cumulative number, or percentage. Also, as each completed questionnaire is returned, it should be opened, scanned, and assigned an identification number.
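A minimal sketch of such a response-rate log, with invented daily counts:

```python
# Daily log of returned questionnaires for a mail-out of 500 surveys
# (all counts invented). Watching the cumulative percentage level off
# helps decide when to send a follow-up mailing.
mailed = 500
daily_returns = [0, 12, 31, 27, 18, 9, 4]  # index 0 = mailing day

total = 0
for day, n in enumerate(daily_returns):
    total += n
    print(f"day {day}: {n:3d} returned, cumulative {total} "
          f"({100 * total / mailed:.1f}%)")
```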

Analyzing the Results

Before actually beginning the survey, the researcher should know how they want to analyze the data. As stated in the Processing the Results section, if you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because there are many different formulas needed in order to properly analyze the survey research and obtain statistical significance. Since computer programs have made the process of analyzing data vastly easier than it once was, it is sensible to choose this route. Be sure to pick your program before you design your survey: some programs require the data to be laid out in different ways.

After the survey is conducted and the data collected, the results must be assembled in some useable format that allows comparison within the survey group, between groups, or both. The results could be analyzed in a number of ways. A t-test may be used to determine if scores of two groups differ on a single variable--whether writing ability differs among students in two classrooms, for instance. A matched t-test could also be applied to determine if scores of the same participants in a study differ under different conditions or over time. An ANOVA could be applied if the study compares multiple groups on one or more variables. Correlation measurements could also be constructed to compare the results of two interacting variables within the data set.
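Assuming the responses have been coded numerically, these tests map directly onto common library calls. A sketch using SciPy, with invented scores:

```python
from scipy import stats

# Invented writing-ability scores for students in two classrooms.
class_a = [78, 85, 69, 90, 74, 88, 81]
class_b = [72, 80, 65, 77, 70, 84, 76]

# Independent-samples t-test: do two groups differ on one variable?
t, p = stats.ttest_ind(class_a, class_b)

# Matched (paired) t-test: the same participants under two conditions.
pre, post = [70, 75, 68, 82], [74, 79, 71, 85]
t_paired, p_paired = stats.ttest_rel(pre, post)

# One-way ANOVA: compare more than two groups on a single variable.
class_c = [81, 88, 74, 92, 79, 90, 83]
f_stat, p_anova = stats.f_oneway(class_a, class_b, class_c)

# Correlation between two interacting variables in the data set.
hours_reading = [1, 3, 0, 5, 2, 4, 3]
r, p_corr = stats.pearsonr(hours_reading, class_a)

print(f"t = {t:.2f} (p = {p:.3f}); ANOVA F = {f_stat:.2f} (p = {p_anova:.3f})")
```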

Secondary Analysis

Secondary analysis of survey data is an accepted methodology which applies previously collected survey data to new research questions. This methodology is particularly useful to researchers who do not have the time or money to conduct an extensive survey, but may be looking at questions for which some large survey has already collected relevant data. A number of books and chapters have been written about this methodology, some of which are listed in the annotated bibliography under "Secondary Analysis."

Advantages and Disadvantages of Using Secondary Analysis

  • Considerably cheaper and faster than doing original studies
  • You can benefit from the work of some of the top scholars in your field, which largely ensures quality data.
  • If you have limited funds and time, other surveys may have the advantage of samples drawn from larger populations.
  • How much you use previously collected data is flexible; you might extract only a few figures from a table, use the data in a subsidiary role in your research, or even give it a central role.
  • A network of data archives in which survey data files are collected and distributed is readily available, making research for secondary analysis easily accessible.

Disadvantages

  • Since many surveys deal with national populations, if you are interested in studying a well-defined minority subgroup you will have a difficult time finding relevant data.
  • Secondary analysis can be used in irresponsible ways. If variables aren't exactly those you want, data can be manipulated and transformed in a way that might lessen the validity of the original research.
  • Much research, particularly of large samples, can involve large data files and difficult-to-use statistical packages.

Data-entry Packages Available for Survey Data Analysis

SNAP: Offers simple survey analysis and can help with the survey from start to finish, including the design of questions and questionnaires.

SPSS: Statistical Package for the Social Sciences; can cope with most kinds of data.

SAS: A flexible general purpose statistical analysis system.

MINITAB: A very easy-to-use and fairly limited general purpose package for "beginners."

STATGRAPHS: General interactive statistical package with good graphics but not very flexible.

Reporting Survey Results

The final stage of the survey is to report your results. There is no established format for reporting a survey's results. The report may follow a pattern similar to formal experimental write-ups, or the analysis may show up in pitches to advertising agencies--as with Arbitron data--or it may be presented in departmental meetings to aid curriculum arguments. A formal report might contain contextual information, a literature review, a presentation of the research question under investigation, information on survey participants, a section explaining how the survey was conducted, the survey instrument itself, a presentation of the quantified results, and a discussion of the results.

You can choose to represent your data graphically for easier interpretation by others outside your research project. You can use, for example, bar graphs, histograms, frequency polygons, pie charts and contingency tables.

Commentary on Survey Research

In this section, we present several commentaries on survey research.

Strengths and Weaknesses of Surveys

  • Surveys are relatively inexpensive (especially self-administered surveys).
  • Surveys are useful in describing the characteristics of a large population. No other method of observation can provide this general capability.
  • They can be administered from remote locations using mail, email or telephone.
  • Consequently, very large samples are feasible, making the results statistically significant even when analyzing multiple variables.
  • Many questions can be asked about a given topic, giving considerable flexibility to the analysis.
  • There is flexibility at the creation phase in deciding how the questions will be administered: as face-to-face interviews, by telephone, as a group-administered written or oral survey, or by electronic means.
  • Standardized questions make measurement more precise by enforcing uniform definitions upon the participants.
  • Standardization ensures that similar data can be collected from groups then interpreted comparatively (between-group study).
  • Usually, high reliability is easy to obtain--by presenting all subjects with a standardized stimulus, observer subjectivity is largely eliminated.

Weaknesses:

  • A methodology relying on standardization forces the researcher to develop questions general enough to be minimally appropriate for all respondents, possibly missing what is most appropriate to many respondents.
  • Surveys are inflexible in that they require the initial study design (the tool and administration of the tool) to remain unchanged throughout the data collection.
  • The researcher must ensure that a large number of the selected sample will reply.
  • It may be hard for participants to recall information or to tell the truth about a controversial question.
  • As opposed to direct observation, survey research (excluding some interview approaches) can seldom deal with "context."

Reliability and Validity

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to grasp in terms of such dichotomies as "agree/disagree," "support/oppose," "like/dislike," etc., these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, and content can significantly reduce the subject's own unreliability.

Ethical Considerations of Using Electronic Surveys

Because electronic mail is rapidly becoming such a large part of our communications system, this survey method deserves special attention. In particular, there are four basic ethical issues researchers should consider if they choose to use email surveys.

Sample Representativeness: Researchers who choose to do surveys have an ethical obligation to use population samples that are inclusive of race, gender, educational and income levels, etc. If you choose to utilize e-mail to administer your survey, you face some serious problems: individuals who have access to personal computers, modems and the Internet are not necessarily representative of a population. Therefore, it is suggested that researchers not use an e-mail survey when a more inclusive research method is available. However, if you do choose to do an e-mail survey because of its other advantages, you might consider including as part of your survey write-up a reminder of the limitations of sample representativeness when using this method.

Data Analysis: Even though e-mail surveys tend to have greater response rates, researchers still do not necessarily know exactly who has responded. For example, some e-mail accounts are screened by an unintended viewer before they reach the intended viewer. This issue challenges the external validity of the study. According to Goree and Marszalek (1995), because of this challenge, "researchers should avoid using inferential analysis for electronic surveys" (p. 78).

Confidentiality versus Anonymity: An electronic response is never truly anonymous, since researchers know the respondents' e-mail addresses. According to Goree and Marszalek (1995), researchers are ethically required to guard the confidentiality of their respondents and to assure respondents that they will do so.

Responsible Quotation: It is considered acceptable for researchers to correct typographical or grammatical errors before quoting respondents since respondents do not have the ability to edit their responses. According to Goree and Marszalek (1995), researchers are also faced with the problem of "casual language" use common to electronic communication (p. 78). Casual language responses may be difficult to report within the formal language used in journal articles.

Response Rate Issues

Nonresponse and response rates are becoming increasingly important issues in survey research. According to Weisberg, Krosnick and Bowen (1989), in the 1950s it was not unusual for survey researchers to obtain response rates of 90 percent. Now, however, people are not as trusting of interviewers and response rates are much lower--typically 70 percent or less. Today, even when survey researchers obtain high response rates, they still have to deal with many potential respondent problems.

Nonresponse Issues

Nonresponse Errors

Nonresponse is usually considered a source of bias in a survey, aptly called nonresponse bias. Nonresponse bias is a problem for almost every survey, as it arises from the fact that there are usually differences between the ideal sample pool of respondents and the sample that actually responds to a survey. According to Fox and Tracy (1986), "when these differences are related to criterion measures, the results may be misleading or even erroneous" (p. 9). For example, a response rate of only 40 or 50 percent creates problems of bias since the results may reflect an inordinate percentage of a particular demographic portion of the sample. Thus, variance estimates and confidence intervals become greater as the sample size is reduced, and it becomes more difficult to construct confidence limits.

Nonresponse bias usually cannot be avoided, and so it inevitably affects most survey research by introducing error into statistical measurements. Researchers must therefore account for nonresponse either during the planning of their survey or during the analysis of their survey results. If you create a larger sample during the planning stage, confidence limits may be based on the actual number of responses received.
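One planning-stage approach can be sketched concretely: compute the number of completed responses a desired margin of error requires, then inflate the number of invitations by the response rate you expect. In the Python sketch below, the 95% confidence level, 5% margin of error, and 50% expected response rate are illustrative assumptions only:

```python
# Inflate the planned sample to compensate for expected nonresponse.
import math

z = 1.96       # z-score for 95% confidence (assumed)
margin = 0.05  # desired margin of error (assumed)
p = 0.5        # most conservative estimate of the population proportion

needed_responses = math.ceil(z**2 * p * (1 - p) / margin**2)  # about 385

expected_response_rate = 0.5  # assumed from past surveys of this population
invitations = math.ceil(needed_responses / expected_response_rate)

print(needed_responses, invitations)  # 385 completed responses, 770 invitations
```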

Household-Level Determinants of Nonresponse

According to Couper and Groves (1996), reductions in nonresponse and its errors should be based on a theory of survey participation. This theory of survey participation argues that a person's decision to participate in a survey generally occurs during the first moments of interaction with an interviewer or the text. According to Couper and Groves, four types of influences affect a potential respondent's decision of whether or not to cooperate in a survey. First, potential respondents are influenced by two factors that the researcher cannot control: by their social environments and by their immediate households. Second, potential respondents are influenced by two factors the researcher can control: the survey design and the interviewer.

To minimize nonresponse, Couper and Groves suggest that researchers manipulate the two factors they can control--the survey design and the interviewer.

Response Issues

Not only do survey researchers have to be concerned about nonresponse rate errors, but they also have to be concerned about the following potential response rate errors:

  • Response bias occurs when respondents deliberately falsify their responses. This error greatly jeopardizes the validity of a survey's measurements.
  • Response order bias occurs when a respondent loses track of all the options and picks one that comes easily to mind rather than the most accurate one.
  • Response set bias occurs when respondents do not consider each question and just answer all the questions with the same response. For example, they answer "disagree" or "no" to all questions.

These response errors can seriously distort a survey's results. Unfortunately, according to Fox and Tracy (1986), response bias is difficult to eliminate; even if the same respondent is questioned repeatedly, he or she may continue to falsify responses. Response order bias and response set errors, however, can be reduced through careful development of the survey questionnaire.

Satisficing

Related to the issue of response errors, especially response order bias and response bias, is the issue of satisficing. According to Krosnick, Narayan, and Smith (1996), satisficing is the notion that certain survey response patterns occur as respondents "shortcut the cognitive processes necessary for generating optimal answers" (p. 29). This theoretical perspective arises from the belief that most respondents are not highly motivated to answer a survey's questions, as reflected in the declining response rates in recent years. Since many people are reluctant to be interviewed, it is presumptuous to assume that respondents will devote a lot of effort to answering a survey.

The theoretical notion of satisficing can be further understood by considering what respondents must do to provide optimal answers. According to Krosnick et al. (1996), "respondents must carefully interpret the meaning of each question, search their memories extensively for all relevant information, integrate that information carefully into summary judgments, and respond in ways that convey those judgments' meanings as clearly and precisely as possible" (p. 31). Therefore, satisficing occurs when one or more of these cognitive steps is compromised.

Satisficing takes two forms: weak and strong. Weak satisficing occurs when respondents go through all of the cognitive steps necessary to provide optimal answers, but are not as thorough in their cognitive processing. For example, respondents can answer a question with the first response that seems acceptable instead of generating an optimal answer. Strong satisficing, on the other hand, occurs when respondents omit the steps of judgment and retrieval altogether.

Even though they believe that not enough is yet known to offer suggestions on how to increase optimal respondent answers, Krosnick et al. (1996) argue that satisficing can be reduced by maximizing "respondent motivation" and by "minimizing task difficulty" in the survey questionnaire (p. 43).

Annotated Bibliography

General Survey Information:

Allan, Graham, & Skinner, Chris (eds.) (1991). Handbook for Research Students in the Social Sciences. The Falmer Press: London.

This book is an excellent resource for anyone studying in the social sciences. It is not only well-written, but it is clear and concise with pertinent research information.

Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey (2nd ed.). Burr Ridge, IL: Irwin.

Provides thorough, effective survey research guidelines and strategies for sponsors, information seekers, and researchers. In a very accessible, but comprehensive, format, this handbook includes checklists and guidelists within the text, bringing together all the different techniques and principles, skills and activities to do a "really effective survey."

Babbie, E.R. (1973). Survey research methods . Belmont, CA: Wadsworth.

A comprehensive overview of survey methods. Solid basic textbook on the subject.

Babbie, E.R. (1995). The practice of social research (7th). Belmont, CA: Wadsworth.

The reference of choice for many social science courses. An excellent overview of question construction, sampling, and survey methodology. Includes a fairly detailed critique of an example questionnaire. Also includes a good overview of statistics related to sampling.

Belson, W.A. (1986). Validity in survey research. Brookfield, VT: Gower.

Emphasis on construction of survey instrument to account for validity.

Bourque, Linda B. & Fiedler, Eve P. (1995). How to Conduct Self-Administered and Mail Surveys. Sage Publications: Thousand Oaks.

Contains current information on both self-administered and mail surveys. It is a great resource if you want to design your own survey; there are step-by-step methods for conducting these two types of surveys.

Bradburn, N.M., & Sudman, S. (1979). Improving interview method and questionnaire design . San Francisco: Jossey-Bass Publishers.

A good overview of polling. Includes setting up questionnaires and survey techniques.

Bradburn, N. M., & Sudman, S. (1988). Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.

These veteran survey researchers answer questions about survey research that are commonly asked by the general public.

Campbell, Angus A., & Katona, Georgia. (1953). The Sample Survey: A Technique for Social Science Research. In Newcomb, Theodore M. (Ed.), Research Methods in the Behavioral Sciences. The Dryden Press: New York. pp. 14-55.

Includes information on all aspects of social science research. Some chapters in this book are outdated.

Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire . Newbury Park, CA: Sage.

A very helpful little publication that addresses the key issues in question construction.

Dillman, D.A. (1978). Mail and telephone surveys: The total design method . New York: John Wiley & Sons.

An overview of conducting telephone surveys.

Frey, James H., & Oishi, Sabine Mertens. (1995). How To Conduct Interviews By Telephone and In Person. Sage Publications: Thousand Oaks.

This book has a step-by-step breakdown of how to conduct and design telephone and in person interview surveys.

Fowler, Floyd J., Jr. (1993). Survey Research Methods (2nd.). Newbury Park, CA: Sage.

An overview of survey research methods.

Fowler, F. J. Jr., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error . Newbury Park, CA: Sage.

Another aspect of validity/reliability--interviewer error.

Fox, J. & Tracy, P. (1986). Randomized Response: A Method for Sensitive Surveys . Beverly Hills, CA: Sage.

Authors provide a good discussion of response issues and methods of random response, especially for surveys with sensitive questions.

Frey, J. H. (1989). Survey research by telephone (2nd). Newbury Park, CA: Sage.

General overview to telephone polling.

Glock, Charles (ed.) (1967). Survey Research in the Social Sciences. New York: Russell Sage Foundation.

Although fairly outdated, this collection of essays is useful in illustrating the somewhat different ways in which different disciplines regard and use survey research.

Hoinville, G. & Jowell, R. (1978). Survey research practice . London: Heinemann.

Practical overview of the methods and procedures of survey research, particularly discussing problems which may arise.

Hyman, H. H. (1972). Secondary Analysis of Sample Surveys. New York: John Wiley & Sons.

This source is particularly useful for anyone attempting to do secondary analysis. It offers a comprehensive overview of this research method, and couches it within the broader context of social scientific research.

Hyman, H. H. (1955). Survey design and analysis: Principles, cases, and procedures . Glencoe, IL: Free Press.

According to Babbie, an oldie but goodie--a classic.

Jones, R. (1985). Research methods in the social and behavioral sciences . Sunderland, MA: Sinauer.

General introduction to methodology. Helpful section on survey research, especially the discussion on sampling.

Kalton, G. (1983). Compensating for missing survey data . Ann Arbor, MI: Survey Research Center, Institute for Social Research, the University of Michigan.

Addresses a problem often encountered in survey methodology.

Kish, L. (1965). Survey sampling . New York: John Wiley & Sons.

Classic text on sampling theories and procedures.

Lake, C.C., & Harper, P. C. (1987). Public opinion polling: A handbook for public interest and citizen advocacy groups . Washington, D.C.: Island Press.

Clearly written easy to read and follow guide for planning, conducting and analyzing public surveys. Presents material in a step-by-step fashion, including checklists, potential pitfalls and real-world examples and samples.

Lauer, J.M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford UP.

Excellent overview of a number of research methodologies applicable to composition studies. Includes a chapter on "Sampling and Surveys" and appendices on basic statistical methods and considerations.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Applied Social Research: Tool for the Human Services (2nd). Fort Worth, TX: Holt.

A good basic general research textbook which also includes sections on minority issues when doing research and the analysis of "available" or secondary data.

Rea, L. M., & Parker, R. A. (1992). Designing and conducting survey research: A comprehensive guide . San Francisco: Jossey-Bass.

Written for the social and behavioral sciences, public administration, and management.

Rossi, P.H., Wright, J.D., & Anderson, A.B. (eds.) (1983). Handbook of survey research . New York: Academic Press.

Handbook of quantitative studies in social relations.

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.

A how-to book written for the social sciences.

Sayer, Andrew. (1992). Methods In Social Science: A Realist Approach. Routledge: London and New York.

Gives a different perspective on social science research.

Schuldt, Barbara A., & Totter, Jeff W. (1994, Winter). Electronic Mail vs. Mail Survey Response Rates. Marketing Research, 6. 36-39.

An article with specific information for electronic and mail surveys. Mainly a technical resource.

Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys . New York: Academic Press.

Detailed analysis of research question wording and question order effects on respondents.

Schwarz, N., & Sudman, S. (1996). Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass.

Authors provide a summary of the latest research methods used for analyzing interpretive cognitive and communication processes in answering survey questions.

Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Explores the survey as a "social conversation" to investigate what answers mean in relation to how people understand the world and communicate.

Simon, J. (1969). Basic research methods in social science: The art of empirical investigation. New York: Random House.

An excellent discussion of survey analysis. The definitions and descriptions begin from a fairly understandable (simple) starting point, then the discussion unfolds to cover some fairly complex interpretive strategies.

Singleton, R. Jr., et al. (1988). Approaches to social research. New York: Oxford UP.

Has a very accessible chapter on sampling as well as a chapter on survey research.

Smith, Robert B. (Ed.) (1982). A Handbook of Social Science Methods, Volume 3. Praeger: New York.

There is a series of handbooks, each one with specific topics in social science research. A good technical resource, yet slightly dated.

Sul Lee, E., Forthofer, R.N., & Lorimor, R.J. (1989). Analyzing complex survey data. Newbury Park, CA: Sage Publications.

Details on the statistical analysis of survey data.

Singer, E., & Presser, S., eds. (1989). Survey research methods: A reader . Chicago: U of Chicago P.

The essays in this volume originally appeared in various issues of Public Opinion Quarterly.

Survey Research Center (1983). Interviewer's manual . Ann Arbor, MI: University of Michigan Press.

Very practical, step-by-step guide to conducting a survey and interview with lots of examples to illustrate the process.

Pearson, R.W., & Boruch, R.F. (Eds.) (1986). Survey Research Design: Towards a Better Understanding of Their Costs and Benefits. Springer-Verlag: Berlin.

Explains, in a technical fashion, the financial aspects of research design. Somewhat of a cost-analysis book.

Weisberg, H.F., Krosnick, J.A., & Bowen, B.D. (1989). An introduction to survey research and data analysis. Glenview, IL: Scott Foresman.

A good discussion of basic analysis and statistics, particularly what statistical applications are appropriate for particular kinds of data.

Anderson, B., Puur, A., Silver, B., Soova, H., & Voormann, R. (1994). Use of a lottery as an incentive for survey participation: a pilot survey in Estonia. International Journal of Public Opinion Research, 6 , 64-71.

Looks at return results in a study that offers incentives, and recommends incentive use to increase response rates.

Bare, J. (1994). Truth about daily fluctuations in 1992 pre-election polls. Newspaper Research Journal, 15, 73-81.

Comparison of variations between daily poll results of the major polls used during the 1992 American Presidential race.

Chi, S. (1993). Computer knowledge, interests, attitudes, and uses among faculty in two teachers' universities in China. DAI-A, 54/12, 4412-4623.

Survey indicating a strong link between subject area and computer usage.

Cowans, J. (1994). Wielding the people: Opinion polls and the problem of legitimacy in France since 1944. DAI-A, 54/12, 4556-5027.

Study looks at how the advent of opinion polling has affected the legitimacy of French governments since World War II.

Crewe, I. (1993). A nation of liars? Opinion polls and the 1992 election. Journal of the Market Research Society, 35 , 341-359.

Poses possible reasons the British polls were so wrong in predicting the outcomes of the 1992 national elections.

Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the Teaching of English, 9(3), 242-249.

Discussion of basics in question development and data analysis. Also includes some sample questions.

Daniell, S. (1993). Graduate teaching assistants' attitudes toward and responses to academic dishonesty. DAI-A, 54/06, 2065-2257.

Study explores the ethical and academic responses to cheating, using a large survey tool.

Mittal, B. (1994). Public assessment of TV advertising: Faint praise and harsh criticism. Journal of Advertising Research, 34, 35-53.

Results of a survey of Southern U.S. television viewers' perceptions of television advertisements.

Palmquist, M., & Young, R.E. (1992). Is writing a gift? The impact on students who believe it is. Reading empirical research studies: The rhetoric of research . Hayes et al. eds. Hillsdale NJ: Erlbaum.

This chapter presents results of a study of student beliefs about writing. Includes sample questions and data analysis.

Serow, R. C., & Bitting, P. F. (1995). National service as educational reform: A survey of student attitudes. Journal of Research and Development in Education, 28(2), 87-90.

This study assessed college students' attitude toward a national service program.

Stouffer, Samuel. (1955). Communism, Conformity, and Civil Liberties. New York: John Wiley & Sons.

This is a famous old survey worth examining. It examined the impact of McCarthyism on the attitudes of both the general public and community leaders, asking whether the repression of the early 1950s affected support for civil liberties.

Wanta, W. & Hu, Y. (1993). The agenda-setting effects of international news coverage: An examination of differing news frames. International Journal of Public Opinion Research, 5, 250-264.

Discusses results of Gallup polls on important problems in relation to the news coverage of international news.

Worcester, R. (1992). The performance of the political opinion polls in the 1992 British general election. Marketing and Research Today, 20, 256-263.

A critique of the use of polls in an attempt to predict voter actions.

Yamada, S., & Synodinos, N. (1994). Public opinion surveys in Japan. International Journal of Public Opinion Research, 6, 118-138.

Explores trends in opinion poll usage, response rates, and refusals in Japanese polls from 1975 to 1990.

Criticism/Critique/Evaluation:

Bangura, A. K. (1992). The limitations of survey research methods in assessing the problem of minority student retention in higher education . San Francisco: Mellen Research UP.

Case study done at a Maryland university addressing an aspect of validity involving intercultural factors.

Bateson, N. (1984). Data construction in social surveys. London: Allen & Unwin.

Tackles the theory of the method (but not the methods of the method) of data construction. Deals with validity of the data by validizing the process of data construction.

Braverman, M. (1996). Sources of Survey Error: Implications for Evaluation Studies. New Directions for Evaluation: Advances in Survey Research, 70, 17-28.

Looks at how evaluations using surveys can benefit from using survey design methods that reduce various survey errors.

Brehm, J. (1994). Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research, 6 , 45-63.

Considers whether incentives or the original contact letter lead to increased response rates.

Bulmer, M. (1977). Social-survey research. In M. Bulmer (ed.), Sociological research methods: An introduction . London: Macmillan.

The section includes discussions of pros and cons of survey research findings, inferences and interpreting relationships found in social-survey analysis.

Couper, M. & Groves, R. (1996). Household-Level Determinants of Survey Nonresponse. New Directions for Evaluation: Advances in Survey Research, 70, 63-80.

Authors discuss their theory of survey participation. They believe that decisions to participate are based on two occurrences: interactions with the interviewer, and the sociodemographic characteristics of respondents.

Couto, R. (1987). Participatory research: Methodology and critique. Clinical Sociology Review, 5 , 83-90.

Criticism of survey research. Addresses knowledge/power/change issues through the critique.

Dillman, D., Sangster, R., Tarnai, J., & Rockwood, T. (1996) Understanding Differences in People's Answers to Telephone and Mail Surveys. New Directions for Evaluation: Advances in Survey Research , 70, 45-62.

Explores the issue of differences in respondents' answers in telephone and mail surveys, which can affect a survey's results.

Esaiasson, P. & Granberg, D. (1993). Hidden negativism: Evaluation of Swedish parties and their leaders under different survey methods. International Journal of Public Opinion Research, 5, 265-277.

Compares varying results of mailed questionnaires vs. telephone and personal interviews. Findings indicate methodology affected results.

Guastello, S. & Rieke, M. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.

Looks at the use of honesty, or integrity, testing to predict theft by employees, questioning further use of the tests due to extremely low validity. Social and legal implications are also considered.

Hamilton, R. (1991). Work and leisure: On the reporting of poll results. Public Opinion Quarterly, 55 , 347-356.

Looks at methodology changes that affected reports of results in the Harris poll on American Leisure.

Juster, F. & Stanford, F. (1991). Comment on work and leisure: On reporting of poll results. Public Opinion Quarterly, 55 , 357-359.

Rebuttal of the Hamilton essay, cited above. The rebuttal is based upon statistical interpretation methods used in the cited survey.

Krosnick, J., Narayan, S., & Smith, W. (1996). Satisficing in Surveys: Initial Evidence. New Directions in Evaluation: Advances in Survey Research , 70, 29-44.

Authors discuss "satisficing," a cognitive approach to survey response, which they believe helps researchers understand how survey respondents arrive at their answers.

Lindsey, J.K. (1973). Inferences from sociological survey data: A unified approach . San Francisco: Jossey-Bass.

Examines the statistical analysis of survey data.

Morgan, F. (1990). Judicial standards for survey research: An update and guidelines. Journal of Marketing, 54 , 59-70.

Looks at legal use of survey information as defined and limited in recent cases. Excellent definitions.

Pottick, K. (1990). Testing the underclass concept by surveying attitudes and behavior. Journal of Sociology and Social Welfare, 17, 117-125.

Review of definitional tests constructed to define "underclass."

Rohme, N. (1992). The state of the art of public opinion polling worldwide. Marketing and Research Today, 20, 264-271.

A quick review of the use of polling in several countries, concluding that the use of polling is on the rise worldwide.

Sabatelli, R. (1988). Measurement issues in marital research: A review and critique of contemporary survey instruments. Journal of Marriage and the Family, 55 , 891-915.

Examines issues of methodology.

Schriesheim, C. A., & Denisi, A. S. (1980). Item Presentation as an Influence on Questionnaire Validity: A Field Experiment. Educational and Psychological Measurement, 40(1), 175-82.

Two types of questionnaire formats measuring leadership variables were examined: one with items measuring the same dimensions grouped together and the second with items measuring the same dimensions distributed randomly. The random condition showed superior validity.

Smith, T. (1990). "A critique of the Kinsey Institute/Roper organization national sex knowledge survey." Public Opinion Quarterly, Vol. 55 , 449-457.

Questions validity of the survey based upon question selection and response interpretations. A rejoinder follows, defending the poll.

Smith, Tom W. (1990). "The First Straw? A Study of the Origins of Election Polls," Public Opinion Quarterly, Vol. 54 (Spring: 21-36).

This article offers a look at the early history of American political polling, with special attention to media reactions to the polls. This is an interesting source for anyone interested in the ethical issues surrounding polling and survey.

Sniderman, P. (1986). Reflections on American racism. Journal of Social Issues, 42 , 173-187.

Rebuttal of critique of racism research. Addresses issues of bias and motive attribution.

Stanfield, J. H. II, & Dennis, R. M., eds (1993). Race and Ethnicity in Research Methods . Newbury Park, CA: Sage.

The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted.

Stapel, J. (1993). Public opinion polling: Some perspectives in response to 'critical perspectives.' International Journal of Public Opinion Research, 5, 193-194.

Discussion of the moral power of polling results.

Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of their validity . San Diego: Academic Press.

Reviews and analyzes data from studies that have, through the use of external criteria, assessed the validity of individuals' responses to questions concerning personal characteristics and behavior in a wide variety of areas.

Williams, R. M., Jr. (1989). "The American Soldier: An Assessment, Several Wars Later." Public Opinion Quarterly. Vol. 53 (Summer: 155-174).

One of the classic studies in the history of survey research is reviewed by one of its authors.

Secondary Analysis:

Jolliffe, F.R. (1986). Survey Design and Analysis. Ellis Horwood Limited: Chichester.

Information about survey design as well as secondary analysis of surveys.

Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data . Beverly Hills, CA: Sage.

Discussion of how to use previously collected survey data to answer a new research question.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Analysis of available data. In Applied Social Research: Tool for the Human Services (2nd ed., pp. 202-230). Fort Worth, TX: Holt.

Gives some existing sources for statistical data as well as discussing ways in which to use it.

Rubin, A. (1988). Secondary analyses. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. (3rd ed., pp. 323-341). Itasca, IL: Peacock.

Chapter discusses inductive and deductive processes in relation to research designs using secondary data. It also discusses methodological issues and presents a case example.

Dale, A., Arber, S., & Procter, M. (1988). Doing Secondary Analysis . London: Unwin Hyman.

A whole book about how to do secondary analysis.

Electronic Surveys:

Carr, H. H. (1991). Is using computer-based questionnaires better than using paper? Journal of Systems Management September, 19, 37.

Reference from Thach.

Dunnington, Richard A. (1993). New methods and technologies in the organizational survey process. American Behavioral Scientist , 36 (4), 512-30.

Asserts that three decades of technological advancements in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential.

Goree, C. & Marszalek, J. (1995). Electronic Surveys: Ethical Issues for Researchers. The College Student Affairs Journal , 15 (1), 75-79.

Explores how the use of electronic surveys challenges existing ethical standards of survey research, and argues that researchers need to be aware of these new ethical issues.

Hsu, J. (1995). The Development of Electronic Surveys: A Computer Language-Based Method. The Electronic Library , 13 (3), 195-201.

Discusses the need for a markup language method to properly support the creation of survey questionnaires.

Kiesler, S. & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50 , 402-13.

Opperman, M. (1995) E-Mail Surveys--Potentials and Pitfalls. Marketing Research, 7 (3), 29-33.

A discussion of the advantages and disadvantages of using E-Mail surveys.

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-69.

Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology & Marketing, 5 (2), 117-137.

Thach, Liz. (1995). Using electronic mail to conduct survey research. Educational Technology, 35, 27-31.

A review of the literature on the topic of survey research via electronic mail concentrating on the key issues in design, implementation, and response using this medium.

Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.

Further Investigation

Berg, David N., & Smith, Kenwyn K. (eds.) (1988). The Self in Social Inquiry: Researching Methods. Sage Publications: Newbury Park.

Addresses ethical issues surrounding the role of the researcher in social science research.

Barribeau, Paul, Bonnie Butler, Jeff Corney, Megan Doney, Jennifer Gault, Jane Gordon, Randy Fetzer, Allyson Klein, Cathy Ackerson Rogers, Irene F. Stein, Carroll Steiner, Heather Urschel, Theresa Waggoner, & Mike Palmquist. (2005). Survey Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=68


5 Approaching Survey Research

What is survey research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research, as long as there is manipulation of an independent variable (e.g., anger vs. fear) to assess an effect on a dependent variable (e.g., risk judgments).

Chapter 5: Learning Objectives

If your research question(s) center on the experience or perception of a particular phenomenon, process, or practice, utilizing a survey method may help glean useful data. After reading this chapter, you will

  • Identify the purpose of survey research
  • Describe the cognitive processes involved in responding to questions
  • Discuss the importance of context in drafting survey items
  • Contrast the utility of open and closed ended questions
  • Describe the BRUSO method of drafting survey questions
  • Describe the format for survey questionnaires

The heart of any survey research project is the survey itself. Although it is easy to think of interesting questions to ask people, constructing a good survey is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing surveys to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.

Cognitive Processes of Responses

To best understand how to write a ‘good’ survey question, it is important to frame the act of responding to a survey question as a cognitive process. That is, there are involuntary mechanisms that take place when someone is asked a question. Sudman, Bradburn, & Schwarz (1996, as cited in Jhangiani et al., 2012) illustrate this cognitive process as follows.

Figure: Progression of a cognitive response. First, the respondent must understand the question, then retrieve relevant information from memory and formulate a response based on a judgment formed from that information. The respondent must then edit the response, depending on the response options provided by the survey.

Framing the formulation of survey questions in this way is extremely helpful to ensure that the questions posed on your survey glean accurate information.

Example of a Poorly Worded Survey Question

How many alcoholic drinks do you consume in a typical day?

  • A lot more than average
  • Somewhat more than average
  • Average number
  • Somewhat fewer than average
  • A lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, typical weekend day, or both. Chang and Krosnick (2003, as cited in Jhangiani et al. 2012) found that asking about “typical” behavior is more valid than asking about “past” behavior, but their study compared “typical week” to “past week,” and the results may differ when considering typical weekdays or weekend days. Once respondents have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink a lot more than average, they might not want to report that for fear of looking bad in the eyes of the researcher, so instead, they may opt to select the “somewhat more than average” response option.

From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.

Context Effects on Survey Responses

Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990, as cited in Jhangiani et al. 2012). For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988, as cited in Jhangiani et al. 2012). When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory, so respondents were more likely to base their life satisfaction rating on it.

The response options provided can also have unintended effects on people’s responses (Schwarz, 1999, as cited in Jhangiani et al. 2012). For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours. To mitigate order effects, rotate questions and response items when there is no natural order. Counterbalancing or randomizing the order in which questions are presented in online surveys is good practice and can reduce response-order effects--effects strong enough that, among undecided voters, the first candidate listed on a ballot receives a 2.5% boost simply by virtue of being listed first.
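As a simple illustration of such randomization, the sketch below shuffles question order and response-option order independently for each respondent; the items and options are placeholders:

```python
# Randomize question and response-option order to reduce order effects.
import random

questions = ["life_satisfaction", "dating_frequency", "health_rating"]
options = ["Candidate A", "Candidate B", "Candidate C"]

def build_survey_for_respondent():
    # random.sample returns a shuffled copy, leaving the originals intact.
    question_order = random.sample(questions, k=len(questions))
    option_order = random.sample(options, k=len(options))
    return question_order, option_order

print(build_survey_for_respondent())
```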

Writing Survey Items

Types of Items

Questionnaire items can be either open-ended or closed-ended. Open-ended items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.

  • “What is the most important thing to teach children to prepare them for life?”
  • “Please describe a time when you were discriminated against because of your age.”
  • “Is there anything else you would like to tell us about?”

Open-ended items are useful when researchers do not know how participants might respond or when they want to avoid influencing their responses. Open-ended items are more qualitative in nature, so they tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. Another disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the likely answers are unknown, or for quantities that can easily be converted to categories later in the analysis.

Closed-ended items ask a question and provide a set of response options for participants to choose from.

Examples of Closed-Ended Questions

How old are you?

On a scale of 0 (no pain at all) to 10 (the worst pain ever experienced), how much pain are you in right now?

Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are more quantitative in nature, so they are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.

All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) to which they belong. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.

Figure: A five-point Likert scale, on which a selection of 1 indicates “strongly disagree” and a selection of 5 indicates “strongly agree.”

The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them relevant choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993, as cited in Jhangiani et al. 2012). Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and overly long or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics.
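For example, a frequency item can be shown to respondents with verbal labels only and converted to numbers afterward. Here is a minimal pandas sketch, with a hypothetical item name and responses:

```python
# Convert verbal rating-scale labels to numeric values for analysis.
import pandas as pd

frequency_scale = {"Never": 1, "Rarely": 2, "Sometimes": 3,
                   "Often": 4, "Always": 5}

df = pd.DataFrame({"exercise_frequency":
                   ["Often", "Never", "Sometimes", "Always"]})
df["exercise_frequency_num"] = df["exercise_frequency"].map(frequency_scale)
print(df)
```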

Writing Effective Items

We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000, as cited in Jhangiani et al. 2012). An acronym, BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double-barreled.” They ask about two conceptually separate issues but allow only one response.

Example of a “Double-Barreled” Question

Please rate the extent to which you have been feeling anxious and depressed

Note: The issue with this question is that anxiety and depression are two separate constructs and should likely be asked about separately.

Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. The best way to find out how people interpret the wording of a question is to conduct a pilot test and ask a few people to explain how they interpreted each question.

Figure: The BRUSO model of writing questions, wherein items are brief, relevant, unambiguous, specific, and objective.

For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are mutually exclusive. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an ‘Other’ category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.

For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint.

Example of an unbalanced versus balanced rating scale

Unbalanced rating scale measuring perceived likelihood

Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely

Balanced rating scale measuring perceived likelihood

Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely

Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. However, including middle alternatives on bipolar dimensions can be used to allow people to choose an option that is neither.

Formatting the Survey

Writing effective items is only one part of constructing a survey. For one thing, every survey should have a written or spoken introduction that serves two basic functions (Peterson, 2000, as cited by Jhangiani et al. 2012). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. This means that the researcher has only a moment to capture the attention of the respondent and must make it as easy as possible for the respondent to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this involves describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not always used in survey research (when the research poses minimal risk, the IRB often accepts completion of the survey instrument as evidence of consent to participate), so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.

Coding your survey responses

Once you’ve closed your survey, you’ll need to identify how to quantify the data you’ve collected. Much of this can be done in ways similar to methods described in the previous two chapters. Although there are several ways by which to do this, here are some general tips:

  • Transfer data: Transfer your data to a program that will allow you to organize and ‘clean’ the data. If you’ve used an online tool to gather data, you should be able to download the survey results in a format appropriate for working with the data. If you’ve collected responses by hand, you’ll need to input the data manually.
  • Save: ALWAYS save a copy of your original data. Save changes you make to the data under a different name or version in case you need to refer back to the original data.
  • De-identify: This step will depend on the overall approach that you’ve taken to answer your research question and may not be appropriate for your project.
  • Name the variables: Again, there is no ‘right’ way to do this; however, as you move forward, you will want to be sure you can easily identify what data you are extracting. Many times, when you transfer your data, the program will automatically associate the data collected with the question asked. It is a good idea to name the variable something associated with the data, rather than the question.
  • Code the attributes: Each variable will likely have several different attributes, or layers. You’ll need to come up with a coding method to distinguish the different responses. As discussed in previous chapters, each attribute should have a numeric code associated with it so that you can quantify the data and use descriptive and/or inferential statistical methods to describe or explore relationships within the dataset.

Most online survey tools will download data into a spreadsheet-type program and organize that data in association with the question asked. Naming the variables so that you can easily identify the information will be helpful as you proceed to analysis.

This is relatively simple to accomplish with closed-ended questions. Because you’ve ‘forced’ the respondent to pick a concrete answer, you can create a code that is associated with each answer. For example, suppose respondents were asked to identify their region, given a list of geographical regions, and instructed to pick one. The researcher then created a code for the regions. In this case, 1 = West; 2 = Midwest; 3 = Northeast; 4 = Southeast; and 5 = Southwest. If you’re working to quantify data that is somewhat qualitative in nature (i.e., open-ended questions), the process is a little more complicated. You’ll need to create themes or categories, classify similar types of responses, and then assign codes to those themes or categories.

  • Create a codebook: This is essential. Once you begin to code the data, you will have somewhat disconnected yourself from the data by translating it from a language that we understand to a language which a computer understands. After you run your statistical methods, you’ll translate it back to the native language and share findings. To stay organized and accurate, it is important that you keep a record of how the data has been translated.

  • Analyze: Once you have the data inputted, cleaned, and coded, you should be ready to analyze your data using either descriptive or inferential methods, depending on your approach and overarching goal. (A short sketch of these steps in code follows this list.)
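To make the workflow above concrete, here is a minimal sketch in Python with pandas. The file name, question wording, and column names are hypothetical stand-ins for your own survey export; the region codes mirror the example discussed above.

    import pandas as pd

    # Transfer: load the raw export from your survey tool (hypothetical file name).
    raw = pd.read_csv("survey_export.csv")

    # Save: keep the original untouched and work on a copy.
    df = raw.copy()

    # Name the variables: short names tied to the data, not the question text.
    df = df.rename(columns={"In which region do you live?": "region"})

    # Code the attributes: assign a numeric code to each response option.
    region_codes = {"West": 1, "Midwest": 2, "Northeast": 3,
                    "Southeast": 4, "Southwest": 5}
    df["region_code"] = df["region"].map(region_codes)

    # Create a codebook: record how each variable was translated.
    codebook = {"region_code": {code: label for label, code in region_codes.items()}}

    # Analyze: simple descriptive statistics on the coded data.
    print(df["region_code"].value_counts().sort_index())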

Key Takeaways

  • Surveys are a great method to identify information about perceptions and experiences
  • Question items must be carefully crafted to elicit an appropriate response
  • Surveys are often a mixed-methods approach to research
  • Both descriptive and inferential statistical approaches can be applied to the data gleaned through survey responses
  • Surveys utilize both open- and closed-ended questions; identifying which types of questions will yield specific data will be helpful as you plan your approach to analysis
  • Most surveys will need to include an introduction and a method of informed consent. The introduction should clearly delineate the purpose of the survey and how the results will be utilized
  • Pilot tests of your survey can save you a lot of time and heartache. Pilot testing helps to catch issues in item development, accessibility, and the type of information derived prior to initiating the survey on a larger scale
  • Survey data can be analyzed much like other types of data; following a systematic approach to coding will help ensure you get the answers you’re looking for
  • This section is attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, which is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
  • The majority of content in these sections can be attributed to Research Methods in Psychology by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton, which is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.

A mixed methods approach using self-reports of respondents who are sampled using stringent methods

A type of survey question that allows the respondent to insert their own response; typically qualitative in nature

A type of survey question which forces a respondent to select a response; no subjectivity.


7 steps when conducting survey research: A beginner-friendly guide


Conducting survey research involves gaining insight from a diverse group of people by asking questions and analyzing answers. It is the best way to collect information about people’s preferences, beliefs, characteristics, and related information.

The key to a good survey is asking relevant questions that will provide the needed information. Surveys can be used once or repeatedly.

Wondering how to conduct survey research correctly?

This article will lay out—even if you are a beginner—the seven steps of conducting survey research, with guidance on how to carry it out successfully.

How to conduct survey research in 7 steps

Conducting survey research typically involves several key steps. Here are the seven most common steps in conducting survey research:

Step 1: Identify research goals and objectives

Step 2: Define the population and sample (who will participate in the survey)

Step 3: Decide on the type of survey method to use

Step 4: Design and write questions

Step 5: Distribute the survey and gather responses

Step 6: Analyze the collected data

Step 7: Create a report based on survey results

These survey method steps provide a general framework for conducting research. But keep in mind that specific details and requirements may vary based on research context and objectives.

To understand the process of conducting a survey, start at the beginning. Conducting a survey consists of several steps, each equally important to the outcome.

Before conducting survey research, here are some resources you might find helpful regarding different methods, such as focus group interviews, survey sampling, and qualitative research methods. Learn why a market research survey is important and how to utilize it for your business research goals.

Finally, it is always a good idea to understand the difference between a survey and a questionnaire.

Step 1: Identify research goals and objectives

The first of seven steps in conducting survey research is to identify the goal of the research.

This will help with subsequent steps, like finding the right audience and designing appropriate questions. In addition, it will provide insight into what data is most important.

Identifying goals answers several questions: What type of information am I collecting? Is it general or specific? Is it for a particular or a broad audience? The answers focus the purpose of the survey.

An objective is a specific action that helps achieve research goals. Usually, for every goal, there are several objectives.

The answers collected from a survey are only helpful if used properly. Determining goals will provide a better idea of what it is you want to learn and make it easier to design questions. However, setting goals and objectives can be confusing. Ask the following questions:

  • What is the subject or topic of the research? This will clarify feedback that is needed and subjects requiring further input.
  • What do I want to learn? The first step is knowing what precisely needs to be learned about a particular subject.
  • What am I looking to achieve with the collected data? This will help define how the survey will be used to improve, evaluate, and understand a specific subject.

Uncertain about how to write a good survey question? We’ve got you covered.

Step 2: Define the population and sample (who will participate in the survey)

Who is the target audience from which information is being gathered? This is the demographic group that will participate in the survey. To successfully define this group, narrow down a specific population segment that will provide accurate and unbiased information.

Depending on the kind of information required, this group can be broad—for example, the population of Florida—or it can be relatively narrow, like consumers of a specific product who are between the ages of 18 and 24.

It is rarely possible to survey the entire population being researched. Instead, a sample population is surveyed. This should represent the subject population as a whole. The number required depends on various factors, mainly the size of the subject population. The larger and more representative the sample, the more valid the survey results.

Step 3: Decide on the type of survey method to use

Precisely determine what mode of collecting data will be used. The ways to conduct a survey depend on sample size, location, types of questions, and the costs of conducting the research. Not sure how many people you need to survey for statistically significant results? Use our survey sample size calculator to determine the sample size you need.
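As a rough illustration of what such calculators compute, here is a sketch of the standard sample-size calculation (Cochran’s formula with a finite-population correction). The confidence level, margin of error, and population size below are assumptions to replace with your own values.

    import math

    def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
        # Cochran's formula for an effectively infinite population.
        n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
        # Finite-population correction for smaller universes.
        n = n0 / (1 + (n0 - 1) / population)
        return math.ceil(n)

    # Example: 95% confidence (z = 1.96), 5% margin of error, population of 20,000.
    print(required_sample_size(20000))  # about 377 respondents

Using p = 0.5 is the conservative choice: it maximizes the variance and therefore the required sample size.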

Based on the purpose of the research, there are various methods of conducting a survey:

Are interviews or in-person surveys better than written ones?

In-person surveys are useful for smaller sample sizes since they allow for the gathering of more detailed information on the survey’s subject. They can be conducted either by phone or in person.

The advantage of interviews is that the interviewer can clarify questions and seek additional information. The main risk with this method is researcher bias or respondent equivocation, though a skilled interviewer is usually able to eliminate these issues.

Online surveys are the easiest way to reach a broad audience

If the correct steps are followed, conducting an online survey has many advantages, such as cost efficiency and flexibility. In addition, online surveys can reach either a vast audience or a very focused one, depending on your needs.

Online tools are the most effective method of conducting a survey. They can be used by anyone and easily customized for any target group. There are many kinds of online surveys that can be sent via email, hosted on a website, or advertised through social media.

To follow the correct steps for conducting a survey, get help from SurveyPlanet . All you need to do is sign up for an account . Creating perfect surveys will be at your fingertips.

Mail surveys: control who participates

Delivered to respondents’ email addresses, mail surveys reach a large sample group and provide control over who is included in the sample. Though increasingly common as a survey research method, they tend to have relatively low response rates.

To get the best response rate results, read our blogs How to write eye-catching survey emails and What’s the best time to send survey emails?

Step 4: Design and write questions

Survey questions play a significant role in successful research. Therefore, when deciding what questions to ask—and how to ask them—it is crucial to consider various factors.

Types of questions: what are the most common questions used in survey research?

Choose between closed-ended and open-ended questions. Closed-ended questions have predefined answer options, while open-ended ones enable respondents to shape an answer in their own words.

Before deciding which to use, get acquainted with the options available. Some common types of research questions include:

  • Demographic questions
  • Multiple-choice questions
  • Rating scale questions
  • Likert scale questions
  • Yes or no questions
  • Ranking questions
  • Image choice questions

Content, phrasing, and the order of questions

To make sure results are reliable, each question in a survey needs to be formulated carefully. Each should be directly relevant to the survey’s purpose and include enough information to be answered accurately.

If using closed-ended questions, make sure the available answers cover all possibilities. In addition, questions should be clear and precise, without any vagueness, and phrased in language respondents will understand.

When organizing questions, make sure the order is logical. For example, easy and closed-ended questions encourage respondents to continue, so they should be at the beginning of the survey. More difficult and complex questions should come later. If the survey covers several topics, cluster related questions together and group them by topic.

Step 5: Distribute the survey and gather responses

Surveys can be distributed in person, over the phone, via email, or with an online form.

When creating a survey, first determine the number of responses required and how to access the survey sample. It is essential to monitor the response rate. This is calculated by dividing the number of respondents who answered the survey by the number of people in the sample.
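For example, a quick response-rate check might look like this (the counts are hypothetical):

    invited = 1200     # people in the sample who received the survey
    completed = 348    # respondents who answered

    response_rate = completed / invited
    print(f"Response rate: {response_rate:.1%}")  # Response rate: 29.0%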

Step 6: Analyze the collected data

There are various methods of conducting a survey and also different methods of analyzing the data collected. After processing and sorting responses (usually with the help of a computer), clean the data by removing incomplete or inaccurate responses.

Different data analysis methods should be used depending on the type of questions utilized. For example, open-ended questions require a bucketing approach in which labels are added to each response and grouped into categories.

Closed-ended questions need statistical analysis. For interviews, use a qualitative method (like thematic analysis), and for Likert scale questions use summary statistics (mean, median, and mode).

Other practical analysis methods are cross-tabulation and filtering. Filtering can help in understanding the respondent pool better and can be used to organize results so that data analysis is quicker and more accessible.
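As a sketch of what cross-tabulation and filtering can look like in practice, assuming the responses already sit in a pandas DataFrame with hypothetical region and satisfaction columns:

    import pandas as pd

    df = pd.DataFrame({
        "region": ["West", "West", "Midwest", "Northeast", "Midwest"],
        "satisfaction": [4, 5, 3, 4, 2],  # 1-5 Likert scores
    })

    # Cross-tabulation: satisfaction distribution within each region.
    print(pd.crosstab(df["region"], df["satisfaction"]))

    # Filtering: look only at highly satisfied respondents.
    print(df[df["satisfaction"] >= 4])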

If using an online survey tool, data will be compiled automatically, so the only thing needed is identifying patterns and trends.

Step 7: Create a report based on survey results

The last of the seven steps in conducting survey research is creating a report. Analyzed data should be translated into units of information that directly correspond to the aims and goals identified before creating the survey.

Depending on the formality of the report, include different kinds of information:

  • Initial aims and goals
  • Methods of creation and distribution
  • How the target audience or sample was selected
  • Methods of analysis
  • The results of the survey
  • Problems encountered and whether they influenced results
  • Conclusion and recommendations
Last but not least: frequently asked questions

  • What’s the best way to select my survey sample size? One must carefully consider the survey sample size to ensure accurate results. Please read our complete guide to survey sample size and find all the answers.
  • How do I design an effective survey instrument? Try out SurveyPlanet PRO features, including compelling survey theme templates.
  • How do I analyze and interpret survey data? Glad you asked! We’ve got you covered. Learn how to analyze survey data and what to do with survey responses by reading our blog.
  • What should I consider in terms of ethical practices in survey research? Exploring ethical considerations related to obtaining informed consent, ensuring privacy, and handling sensitive data might be helpful. Start with learning how to write more inclusive surveys.
  • How do I address common survey challenges and errors? Explore strategies to overcome common issues, such as response bias or question-wording problems.
  • How can I maximize survey response rates? Seeking advice on strategies to encourage higher response rates and minimize non-response bias is a first step. Start by finding out what is a good survey response rate.
  • How can I ensure the validity and reliability of my survey results? Learn about methods to enhance the trustworthiness of survey data.

Follow the seven steps of survey research with SurveyPlanet

Now that we’ve gone through the seven steps of survey research, why not create your own survey and conduct research that will drive better choices and decisions?

Were these seven steps helpful? Then check out Seven tips for creating an exceptional survey design (with examples) and How to conduct online surveys in seven simple steps as well.

Sign up for a SurveyPlanet account to access pre-made questions and survey themes. And, if you upgrade to a SurveyPlanet Pro account, gain access to many unique tools that will enhance your survey creation and analysis experience.



Journal of Graduate Medical Education, vol. 4, no. 1 (March 2012)

Qualitative Research Part II: Participants, Analysis, and Quality Assurance

This is the second of a two-part series on qualitative research. Part 1 in the December 2011 issue of Journal of Graduate Medical Education provided an introduction to the topic and compared characteristics of quantitative and qualitative research, identified common data collection approaches, and briefly described data analysis and quality assessment techniques. Part II describes in more detail specific techniques and methods used to select participants, analyze data, and ensure research quality and rigor.

If you are relatively new to qualitative research, some references you may find especially helpful are provided below. The two texts by Creswell, 2008 and 2009, are clear and practical.1,2 In 2008, the British Medical Journal offered a series of short essays on qualitative research; the references provided are easily read and digested.3–8 For those wishing to pursue qualitative research in more detail, a suggestion is to start with the appropriate chapters in Creswell 2008,1 and then move to the other texts suggested.9–11

To summarize the previous editorial, while quantitative research focuses predominantly on the impact of an intervention and generally answers questions like “did it work?” and “what was the outcome?”, qualitative research focuses on understanding the intervention or phenomenon and exploring questions like “why was this effective or not?” and “how is this helpful for learning?” The intent of qualitative research is to contribute to understanding. Hence, the research procedures for selecting participants, analyzing data, and ensuring research rigor differ from those for quantitative research. The following sections address these approaches. Table 1 provides a comparative summary of methodological approaches for quantitative and qualitative research.

Table 1. A Comparison of Qualitative and Quantitative Methodological Approaches

Data collection methods most commonly used in qualitative research are individual or group interviews (including focus groups), observation, and document review. They can be used alone or in combination. While the following sections are written in the context of using interviews or focus groups to collect data, the principles described for sample selection, data analysis, and quality assurance are applicable across qualitative approaches.

Selecting Participants

Quantitative research requires standardization of procedures and random selection of participants to remove the potential influence of external variables and ensure generalizability of results. In contrast, subject selection in qualitative research is purposeful; participants are selected who can best inform the research questions and enhance understanding of the phenomenon under study.1,8 Hence, one of the most important tasks in the study design phase is to identify appropriate participants. Decisions regarding selection are based on the research questions, theoretical perspectives, and evidence informing the study.

The subjects sampled must be able to inform important facets and perspectives related to the phenomenon being studied. For example, in a study looking at a professionalism intervention, representative participants could be considered by role (residents and faculty), perspective (those who approve/disapprove the intervention), experience level (junior and senior residents), and/or diversity (gender, ethnicity, other background).

The second consideration is sample size. Quantitative research requires statistical calculation of sample size a priori to ensure sufficient power to confirm that the outcome can indeed be attributed to the intervention. In qualitative research, however, the sample size is not generally predetermined. The number of participants depends upon the number required to inform fully all important elements of the phenomenon being studied. That is, the sample size is sufficient when additional interviews or focus groups do not result in identification of new concepts, an end point called data saturation . To determine when data saturation occurs, analysis ideally occurs concurrently with data collection in an iterative cycle. This allows the researcher to document the emergence of new themes and also to identify perspectives that may otherwise be overlooked. In the professionalism intervention example, as data are analyzed, the researchers may note that only positive experiences and views are being reported. At this time, a decision could be made to identify and recruit residents who perceived the experience as less positive.

Data Analysis

The purpose of qualitative analysis is to interpret the data and the resulting themes, to facilitate understanding of the phenomenon being studied. It is often confused with content analysis, which is conducted to identify and describe results.12 In the professionalism intervention example, content analysis of responses might report that residents identified the positive elements of the innovation to be integration with real patient cases, opportunity to hear the views of others, and time to reflect on one's own professionalism. An interpretive analysis, on the other hand, would seek to understand these responses by asking questions such as, “Were there conditions that most frequently elicited these positive responses?” Further interpretive analysis might show that faculty engagement influenced the positive responses, with more positive features being described by residents who had faculty who openly reflected upon their own professionalism or who asked probing questions about the cases. This interpretation can lead to a deeper understanding of the results and to new ideas or theories about relationships and/or about how and why the innovation was or was not effective.

Interpretive analysis is generally seen as being conducted in 3 stages: deconstruction, interpretation, and reconstruction.11 These stages occur after preparing the data for analysis, ie, after transcription of the interviews or focus groups and verification of the transcripts with the recording.

  • Deconstruction refers to breaking down data into component parts in order to see what is included. It is similar to content analysis mentioned above. It requires reading and rereading interview or focus group transcripts and then breaking down data into categories or codes that describe the content. (A small code-tallying sketch follows this list.)
  • Interpretation follows deconstruction and refers to making sense of and understanding the coded data. It involves comparing data codes and categories within and across transcripts and across variables deemed important to the study (eg, year of residency, discipline, engagement of faculty). Techniques for interpreting data and findings include discussion and comparison of codes among research team members while purposefully looking for similarities and differences among themes, comparing findings with those of other studies, exploring theories which might explain relationships among themes, and exploring negative results (those that do not confirm the dominant themes) in more detail.
  • Reconstruction refers to recreating or repackaging the prominent codes and themes in a manner that shows the relationships and insights derived in the interpretation phase and that explains them more broadly in light of existing knowledge and theoretical perspectives. Generally one or two central concepts will emerge as central or overarching, and others will appear as subthemes that further contribute to the central concepts. Reconstruction requires contextualizing the findings, ie, positioning and framing them within existing theory, evidence, and practice.
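To illustrate the mechanical side of deconstruction and cross-transcript comparison (not the interpretive work itself), here is a small sketch that tallies hypothetical codes assigned to transcript segments:

    from collections import Counter

    # Hypothetical coded segments: (transcript id, code assigned by the researcher).
    coded_segments = [
        ("resident_01", "faculty_engagement"),
        ("resident_01", "real_patient_cases"),
        ("resident_02", "time_to_reflect"),
        ("resident_02", "faculty_engagement"),
        ("resident_03", "faculty_engagement"),
    ]

    # Compare code frequencies across transcripts to spot dominant themes.
    print(Counter(code for _, code in coded_segments))

Interpretation and reconstruction remain human judgment; tooling like this only keeps the bookkeeping organized.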

Ensuring Research Quality and Rigor

Within qualitative research, two main strategies promote the rigor and quality of the research: ensuring the quality or “authenticity” of the data and the quality or “trustworthiness” of the analysis.8,12 These are similar in many ways to ensuring validity and reliability, respectively, in quantitative research.

 1. Authenticity of the data refers to the quality of the data and data collection procedures. Elements to consider include:

  • Sampling approach and participant selection to enable the research question to be addressed appropriately (see “Selecting Participants” above) and reduce the potential of having a biased sample.

  •  Data triangulation refers to using multiple data sources to produce a more comprehensive view of the phenomenon being studied, eg, interviewing both residents and faculty and using multiple residency sites and/or disciplines.

  • Using the appropriate method to answer the research questions, considering the nature of the topic being explored, eg, individual interviews rather than focus groups are generally more appropriate for topics of a sensitive nature.

  • Using interview and other guides that are not biased or leading, ie, that do not ask questions in a way that may lead the participant to answer in a particular manner.

  • The researcher's and research team's relationships to the study setting and participants need to be explicit, eg, describe the potential for coercion when a faculty member requests his or her own residents to participate in a study.

  • The researcher's and team members' own biases and beliefs relative to the phenomenon under study must be made explicit, and, when necessary, appropriate steps must be taken to reduce their impact on the quality of data collected, eg, by selecting a neutral “third party” interviewer.

 2. Trustworthiness of the analysis refers to the quality of data analysis. Elements to consider when assessing the quality of analysis include:

  • Analysis process: is this clearly described, eg, the roles of the team members, what was done, timing, and sequencing? Is it clear how the data codes or categories were developed? Does the process reflect best practices, eg, comparison of findings within and among transcripts, and use of memos to record decision points?

  • Procedure for resolving differences in findings and among team members: this needs to be clearly described.

  • Process for addressing the potential influence the researchers' views and beliefs may have upon the analysis.

  • Use of a qualitative software program: if used, how was this used?

In summary, this editorial has addressed 3 components of conducting qualitative research: selecting participants, performing data analysis, and assuring research rigor and quality. See Table 2 for the key elements of each of these topics.

Table 2. Conducting Qualitative Research: Summary of Key Elements

JGME editors look forward to reading medical education papers employing qualitative methods and perspectives. We trust these two editorials may be helpful to potential authors and readers, and we welcome your comments on this subject.

Joan Sargeant, PhD, is Professor in the Division of Medical Education, Dalhousie University, Halifax, Nova Scotia, Canada.


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents. Randomization of response items does not eliminate order effects, but it does ensure that this type of bias is spread randomly.
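A minimal sketch of per-respondent randomization, with hypothetical issue options (this illustrates the idea, not Pew’s actual survey software):

    import random

    options = ["The economy", "The war in Iraq", "Health care",
               "Terrorism", "Energy policy"]

    def randomized_options(opts):
        # Present answer choices in a fresh random order for each respondent,
        # spreading primacy and recency effects evenly across the options.
        shuffled = opts.copy()
        random.shuffle(shuffled)
        return shuffled

    print(randomized_options(options))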

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule  even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose  not  allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
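Here is a sketch of how random assignment to two question forms might work, with a hypothetical respondent list:

    import random

    respondents = [f"respondent_{i}" for i in range(1, 1001)]

    # Randomly assign each respondent to one of two questionnaire forms,
    # so the two groups are essentially identical in expectation.
    assignments = {r: random.choice(["form_A", "form_B"]) for r in respondents}

    form_a = sum(1 for f in assignments.values() if f == "form_A")
    print(f"Form A: {form_a}, Form B: {len(respondents) - form_a}")

With groups formed this way, a significant difference in answers between the two forms can be attributed to the wording rather than to who answered.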


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see  measuring change over time  for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.


Finding Respondents That Fit Your Research and Develop Communications With Them


Respondents define how your research will go. If you pick the wrong people or approach too few of them, you risk getting irrelevant outcomes. And if you come to hasty conclusions, chances are you’ll make a bad decision and lose time and money on a feature that no one wants.

Simply double-checking your research results won’t rule out mistakes. You need to learn to find respondents and develop a trusting relationship with them. That’s the only way to hear the truth, not just what you want to hear.

We’ve already shared our ways to find interview respondents and ways to invite them for an interview. Today, we’ll talk more about the preliminary step: sampling.

Finding respondents

Before even writing this article, we ran a survey to find out the most frequent research challenges 🙂 Finding respondents topped the list: 61% of the product teams we surveyed had experienced difficulties with it.

During interviews, we asked participants about the greatest challenges they had. Most of the time, it was difficult for them to find respondents who would help with a particular task. We think about it the opposite way: you should look for people with a goal that you can meet.

Sampling for quantitative research

The term speaks for itself. You may think that the more respondents, the better for surveys or product experiments. Not exactly.

You need to evaluate the universe before defining the required sample. How many people fit the parameters you set?

Let’s say the universe consists of male owners of IT companies. Derive a segment, for example, owners of B2B services, and use it as a sample for your research.

There are sample calculators, like this one for A/B tests or the one for representative samples.

They are based on statistical theory and tell you whether your results will be significant.
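If you’re curious what such calculators compute, here is a minimal sketch in Python, assuming the standard formula for estimating a proportion (Cochran’s formula with a finite population correction). The universe size, confidence level, and margin of error below are illustrative assumptions, not values taken from any particular calculator.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion within a finite universe.

    population -- size of the universe (e.g., all B2B service owners)
    z          -- z-score for the confidence level (1.96 ~ 95%)
    margin     -- acceptable margin of error (0.05 = +/-5 points)
    p          -- expected proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2    # Cochran's formula
    n = n0 / (1 + (n0 - 1) / population)         # finite population correction
    return math.ceil(n)

# Illustrative run: a universe of 2,000 owners, 95% confidence, +/-5% margin.
print(sample_size(2000))  # -> 323
```

Note how weakly the answer depends on the universe: 2,000 owners need 323 responses, while 200,000 would need 384. The margin of error and confidence level drive the result far more than population size does.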

Product managers should know the concept of confidence intervals and the rules of A/B testing. Let’s say you’re A/B testing a landing page. You acquired 500 visitors, of which 3.8% converted; when your traffic grew to 1,000 visitors, the conversion rate became 3.2%, which equaled your reference value. A good product manager always considers the sample: if it gets bigger, your initial outcome may vary.

Vitaliy
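To make the statistics behind that example concrete, here is a minimal sketch of a generic two-proportion z-test using only Python’s standard library. The conversion counts are back-calculated from the percentages in the quote above (3.8% of 500 visitors is 19 conversions; 3.2% of 1,000 is 32); they are illustrative numbers, not a record of any real experiment.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value
    return z, p_value

# Illustrative figures from the quote: 19/500 early on vs. 32/1,000 later.
z, p = two_proportion_z_test(19, 500, 32, 1000)
print(f"z = {z:.2f}, p = {p:.2f}")                 # p is roughly 0.55
```

A p-value around 0.55 means a swing from 3.8% to 3.2% at these sample sizes is entirely consistent with random noise, which is exactly why the initial outcome may vary as the sample grows.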

You can gradually increase the sample. For example, do research, then release and improve the feature in several iterations: build an MVP, test it on loyal users, and then expand the sample depending on the feature’s objective.

When launching a new feature, we test it in several iterations on our users. First, we test internally on our teammates to collect feedback and improve the MVP; this usually takes one to two weeks. We use the feedback to improve UX, locate bugs, and fix them. Then we roll the feature out to 20% of the relevant audience as part of a beta test, adding surveys inside the product to collect their feedback. After that, we release it to all users on our paid plans. Our approach to Enterprise accounts is a bit different: we test features more thoroughly and deliver the functionality at its best.

Kate

Sampling for qualitative research

You may think qualitative research is easier because you need fewer respondents. However, you can’t calculate exactly how many of them you need.

In qualitative usability testing, there’s the classic “rule of five” by Jakob Nielsen. In a nutshell, it says that five respondents find 85% of interface flaws. The rule usually works, but if you apply it everywhere, it may affect your outcomes.

If you rarely do qualitative research, engage at least 10 respondents each time. A sample this size will help you identify more problems per research iteration.
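The rule of five rests on a simple probability model: if each tester independently finds a given problem with probability p, then n testers are expected to uncover a share of about 1 - (1 - p)^n of all problems. Here is a quick sketch, assuming Nielsen’s published average of p ≈ 0.31, a value that in practice varies by product and task:

```python
def share_of_problems_found(n_testers, p=0.31):
    """Expected share of usability problems found by n independent testers.

    p is the probability that a single tester hits a given problem;
    0.31 is Nielsen's empirical average, but it varies by product.
    """
    return 1 - (1 - p) ** n_testers

for n in (5, 10, 15):
    print(n, f"{share_of_problems_found(n):.0%}")
# 5 -> 84%, 10 -> 98%, 15 -> 100% (rounded)
```

With p = 0.31, five testers surface roughly 85% of problems, while ten push that figure to about 98%, which is why a larger sample pays off when you test only occasionally.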

You need more respondents for customer development interviews — sometimes 20, sometimes 40, or even more. Segment your audience and find two or three people with different experiences in each segment.

There’s the “saturation” concept that works for qualitative research. Your sample should be big enough to identify as many configurations of experience and perception as possible, but not so big that it becomes counterproductive. If your respondents start repeating what you’ve already heard, you should probably stop looking for new ones.

Questions for respondent qualification


How can questions be worded better?


Checklist for respondent scouting


Experiment, test hypotheses, and enhance your product! We’ll be happy to accompany you along the way.

Elena Teplu



7 Ways To Get Respondents for Your Dissertation Survey

Not getting enough people for your study? Here are tried and tested ways to get respondents for your dissertation survey.

Conducting a survey may well be the most important part of writing your dissertation, since this is where you get hard data to support your study. It can also be the most challenging part, especially when you need to recruit as many respondents as possible to support your results.

You may have everything down pat, from the objectives to the survey design, but without respondents, your dissertation survey won’t be as useful as you thought it would be. Here are some tips to help you gather enough respondents for your dissertation survey:

We wish you the best of luck in completing your dissertation survey! We hope that you’ll be able to gather enough respondents who will complete your study. If you need assistance in getting an Asian online research panel for your survey, or if you’d like to try our survey demo, feel free to contact us or try our free IR check. If you’d like to know more about online research panels, check out our special page about them.



Published: August 08, 2024

One of the most underrated skills you can have as a marketer is marketing research — which is great news for this unapologetic cyber sleuth.


From brand design and product development to buyer personas and competitive analysis, I’ve researched a number of initiatives in my decade-long marketing career.

And let me tell you: having the right marketing research methods in your toolbox is a must.

Market research is the secret to crafting a strategy that will truly help you accomplish your goals. The good news is there is no shortage of options.

How to Choose a Marketing Research Method

Thanks to the Internet, we have more marketing research (or market research) methods at our fingertips than ever, but they’re not all created equal. Let’s quickly go over how to choose the right one.


1. Identify your objective.

What are you researching? Do you need to understand your audience better? How about your competition? Or maybe you want to know more about your customer’s feelings about a specific product.

Before starting your research, take some time to identify precisely what you’re looking for. This could be a goal you want to reach, a problem you need to solve, or a question you need to answer.

For example, an objective may be as foundational as understanding your ideal customer better to create new buyer personas for your marketing agency (pause for flashbacks to my former life).

Or if you’re an organic soda company, it could be trying to learn what flavors people are craving.

2. Determine what type of data and research you need.

Next, determine what data type will best answer the problems or questions you identified. There are primarily two types: qualitative and quantitative. (Sound familiar, right?)

  • Qualitative Data is non-numerical information, like subjective characteristics, opinions, and feelings. It’s pretty open to interpretation and descriptive, but it’s also harder to measure. This type of data can be collected through interviews, observations, and open-ended questions.
  • Quantitative Data , on the other hand, is numerical information, such as quantities, sizes, amounts, or percentages. It’s measurable and usually pretty hard to argue with, provided it comes from a reputable source. It can be derived through surveys, experiments, or statistical analysis.

Understanding the differences between qualitative and quantitative data will help you pinpoint which research methods will yield the desired results.

For instance, thinking of our earlier examples, qualitative data would usually be best suited for buyer personas, while quantitative data is more useful for the soda flavors.

However, truth be told, the two really work together.

Qualitative conclusions are usually drawn from quantitative, numerical data. So, you’ll likely need both to get the complete picture of your subject.

For example, if your quantitative data says 70% of people are Team Black and only 30% are Team Green — Shout out to my fellow House of the Dragon fans — your qualitative data will say people support Black more than Green.

(As they should.)

Primary Research vs Secondary Research

You’ll also want to understand the difference between primary and secondary research.

Primary research involves collecting new, original data directly from the source (say, your target market). In other words, it’s information gathered first-hand that wasn’t found elsewhere.

Some examples include conducting experiments, surveys, interviews, observations, or focus groups.

Meanwhile, secondary research is the analysis and interpretation of existing data collected from others. Think of this like what we used to do for school projects: We would read a book, scour the internet, or pull insights from others to work from.

So, which is better?

Personally, I say any research is good research, but if you have the time and resources, primary research is hard to top. With it, you don’t have to worry about your source's credibility or how relevant it is to your specific objective.

You are in full control and best equipped to get the reliable information you need.

3. Put it all together.

Once you know your objective and what kind of data you want, you’re ready to select your marketing research method.

For instance, let’s say you’re a restaurant trying to see how attendees felt about the Speed Dating event you hosted last week.

You shouldn’t run a field experiment or download a third-party report on speed dating events; those would be useless to you. You need to conduct a survey that allows you to ask pointed questions about the event.

This would yield both qualitative and quantitative data you can use to improve and bring together more love birds next time around.

Best Market Research Methods for 2024

Now that you know what you’re looking for in a marketing research method, let’s dive into the best options.

Note: According to HubSpot’s 2024 State of Marketing report, understanding customers and their needs is one of the biggest challenges facing marketers today. The options we discuss are great consumer research methodologies, but they can also be used for other areas.

Primary Research

1. Interviews

Interviews are a form of primary research where you ask people specific questions about a topic or theme. They typically deliver qualitative information.

I’ve conducted many interviews for marketing purposes, but I’ve also done many for journalistic purposes, like this profile on comedian Zarna Garg. There’s no better way to gather candid, open-ended insights in my book, but that doesn’t mean they’re a cure-all.

What I like: Real-time conversations allow you to ask different questions if you’re not getting the information you need. They also push interviewees to respond quickly, which can result in more authentic answers.

What I dislike: They can be time-consuming and harder to measure (read: get quantitative data) unless you ask pointed yes or no questions.

Best for: Creating buyer personas or getting feedback on customer experience, a product, or content.

2. Focus Groups

Focus groups are similar to conducting interviews but on a larger scale.

In marketing and business, this typically means getting a small group together in a room (or on Zoom) and asking them questions about the various topics you are researching. You record and/or observe their responses and then take action.

They are ideal for collecting long-form, open-ended feedback, and subjective opinions.

One well-known focus group you may remember was run by Domino’s Pizza in 2009.

After poor ratings and dropping over $100 million in revenue, the brand conducted focus groups with real customers to learn where they could have done better.

It was met with comments like “worst excuse for pizza I’ve ever had” and “the crust tastes like cardboard.” But rather than running from the tough love, it took the hit and completely overhauled its recipes.

The team admitted their missteps and returned to the market with better food and a campaign detailing their “Pizza Turn Around.”

The result? The brand won a ton of praise for its willingness to take feedback, efforts to do right by its consumers, and clever campaign. But, most importantly, revenue for Domino’s rose by 14.3% over the previous year.

The brand continues to conduct focus groups and share real footage from them in its promotions.

What I like: Similar to interviewing, you can dig deeper and pivot as needed due to the real-time nature. They’re personal and detailed.

What I dislike: Once again, they can be time-consuming and make it difficult to get quantitative data. There is also a chance some participants may overshadow others.

Best for: Product research or development

Pro tip: Need help planning your focus group? Our free Market Research Kit includes a handy template to start organizing your thoughts in addition to a SWOT Analysis Template, Survey Template, Focus Group Template, Presentation Template, Five Forces Industry Analysis Template, and an instructional guide for all of them. Download yours here now.

3. Surveys or Polls

Surveys are a form of primary research where individuals are asked a collection of questions. They can take many different forms.

They could be conducted in person, over the phone or video call, by email, via an online form, or even on social media. Questions can also be open-ended or closed to deliver qualitative or quantitative information.

A great example of a closed-ended survey is HubSpot’s annual State of Marketing.

In the State of Marketing, HubSpot asks marketing professionals from around the world a series of multiple-choice questions to gather data on the state of the marketing industry and to identify trends.

The survey covers various topics related to marketing strategies, tactics, tools, and challenges that marketers face. It aims to provide benchmarks to help you make informed decisions about your marketing.

It also helps us understand where our customers’ heads are so we can better evolve our products to meet their needs.

Apple is no stranger to surveys, either.

In 2011, the tech giant launched Apple Customer Pulse, which it described as “an online community of Apple product users who provide input on a variety of subjects and issues concerning Apple.”

Screenshot of Apple’s Consumer Pulse Website from 2011.

"For example, we did a large voluntary survey of email subscribers and top readers a few years back."

While these readers gave us a long list of topics, formats, or content types they wanted to see, they sometimes engaged more with content types they didn’t select or favor as much on the surveys when we ran follow-up ‘in the wild’ tests, like A/B testing.”  

Pepsi saw similar results when it ran its iconic field experiment, “The Pepsi Challenge” for the first time in 1975.

The beverage brand set up tables at malls, beaches, and other public locations and ran a blindfolded taste test. Shoppers were given two cups of soda, one containing Pepsi, the other Coca-Cola (Pepsi’s biggest competitor). They were then asked to taste both and report which they preferred.

People overwhelmingly preferred Pepsi, and the brand has repeated the experiment multiple times over the years to the same results.

What I like: It yields qualitative and quantitative data and can make for engaging marketing content, especially in the digital age.

What I dislike: It can be very time-consuming. And, if you’re not careful, there is a high risk of scientific error.

Best for: Product testing and competitive analysis

Pro tip: “Don’t make critical business decisions off of just one data set,” advises Pamela Bump. “Use the survey, competitive intelligence, external data, or even a focus group to give you one layer of ideas or a short-list for improvements or solutions to test. Then gather your own fresh data to test in an experiment or trial and better refine your data-backed strategy.”

Secondary Research

8. Public Domain or Third-Party Research

While original data is always a plus, there are plenty of external resources you can access online and even at a library when you’re limited on time or resources.

Some reputable resources you can use include:

  • Pew Research Center
  • McKinsey Global Institute
  • Relevant Global or Government Organizations (e.g., United Nations or NASA)

It’s also smart to turn to reputable organizations that are specific to your industry or field. For instance, if you’re a gardening or landscaping company, you may want to pull statistics from the Environmental Protection Agency (EPA).

If you’re a digital marketing agency, you could look to Google Research or HubSpot Research. (Hey, I know them!)

What I like: You can save time on gathering data and spend more time on analyzing. You can also rest assured the data is from a source you trust.

What I dislike: You may not find data specific to your needs.

Best for: Companies under a time or resource crunch, adding factual support to content

Pro tip: Fellow HubSpotter Iskiev suggests using third-party data to inspire your original research. “Sometimes, I use public third-party data for ideas and inspiration. Once I have written my survey and gotten all my ideas out, I read similar reports from other sources and usually end up with useful additions for my own research.”

9. Buy Research

If the data you need isn’t available publicly and you can’t do your own market research, you can also buy some. There are many reputable analytics companies that offer subscriptions to access their data. Statista is one of my favorites, but there’s also Euromonitor, Mintel, and BCC Research.

What I like: Same as public domain research

What I dislike: You may not find data specific to your needs. It also adds to your expenses.

Best for: Companies under a time or resource crunch or adding factual support to content

Which marketing research method should you use?

You’re not going to like my answer, but “it depends.” The best marketing research method for you will depend on your objective and data needs, but also your budget and timeline.

My advice? Aim for a mix of quantitative and qualitative data. If you can do your own original research, awesome. But if not, don’t beat yourself up. Lean into free or low-cost tools. You could do primary research for qualitative data, then tap public sources for quantitative data. Or perhaps the reverse is best for you.

Whatever your marketing research method mix, take the time to think it through and ensure you’re left with information that will truly help you achieve your goals.


How to Write a Research Proposal: (with Examples & Templates)


Before conducting a study, researchers should create a research proposal that outlines their plans and methodology and submit it to the evaluating organization or person. Creating a research proposal is an important step to ensure that researchers are on track and are moving forward as intended. A research proposal can be defined as a detailed plan or blueprint for the proposed research that you intend to undertake. It provides readers with a snapshot of your project by describing what you will investigate, why it is needed, and how you will conduct the research.

Your research proposal should aim to explain to readers why your research is relevant and original, show that you understand the context and current scenario in the field, demonstrate that you have the appropriate resources to conduct the research, and establish that the research is feasible given the usual constraints.

This article will describe in detail the purpose and typical structure of a research proposal , along with examples and templates to help you ace this step in your research journey.  

What is a Research Proposal?

A research proposal¹,² can be defined as a formal report that describes your proposed research, its objectives, methodology, implications, and other important details. Research proposals are the framework of your research and are used to obtain approvals or grants to conduct the study from various committees or organizations. Consequently, research proposals should convince readers of your study’s credibility, accuracy, achievability, practicality, and reproducibility.

With research proposals, researchers usually aim to persuade the readers, funding agencies, educational institutions, and supervisors to approve the proposal. To achieve this, the report should be well structured with the objectives written in clear, understandable language devoid of jargon. A well-organized research proposal conveys to the readers or evaluators that the writer has thought out the research plan meticulously and has the resources to ensure timely completion.

Purpose of Research Proposals  

A research proposal is a sales pitch and therefore should be detailed enough to convince your readers, who could be supervisors, ethics committees, universities, etc., that what you’re proposing has merit and is feasible. Research proposals can help students discuss their dissertation with their faculty or fulfill course requirements and also help researchers obtain funding. A well-structured proposal instills confidence among readers about your ability to conduct and complete the study as proposed.

Research proposals can be written for several reasons:³  

  • To describe the importance of research on the specific topic
  • To address any potential challenges you may encounter
  • To showcase knowledge in the field and your ability to conduct a study
  • To apply for a role at a research institute
  • To convince a research supervisor or university that your research can satisfy the requirements of a degree program
  • To highlight the importance of your research to organizations that may sponsor your project
  • To identify implications of your project and how it can benefit the audience

What Goes in a Research Proposal?    

Research proposals should aim to answer the three basic questions—what, why, and how.  

The What question should be answered by describing the specific subject being researched. It should typically include the objectives, the cohort details, and the location or setting.  

The Why question should be answered by describing the existing scenario of the subject, listing unanswered questions, identifying gaps in the existing research, and describing how your study can address these gaps, along with the implications and significance.  

The How question should be answered by describing the proposed research methodology, data analysis tools expected to be used, and other details to describe your proposed methodology.   

Research Proposal Example  

Here is a research proposal sample template (with examples) from the University of Rochester Medical Center.⁴ The sections in all research proposals are essentially the same, although different terminology and other specific sections may be used depending on the subject.

Research Proposal Template

Structure of a Research Proposal  

If you want to know how to make a research proposal impactful, include the following components:¹  

1. Introduction  

This section provides a background of the study, including the research topic, what is already known about it and the gaps, and the significance of the proposed research.  

2. Literature review  

This section contains descriptions of all the previous relevant studies pertaining to the research topic. Every study cited should be described in a few sentences, starting with the general studies to the more specific ones. This section builds on the understanding gained by readers in the Introduction section and supports it by citing relevant prior literature, indicating to readers that you have thoroughly researched your subject.  

3. Objectives  

Once the background and gaps in the research topic have been established, authors must now state the aims of the research clearly. Hypotheses should be mentioned here. This section further helps readers understand what your study’s specific goals are.  

4. Research design and methodology  

Here, authors should clearly describe the methods they intend to use to achieve their proposed objectives. Important components of this section include the population and sample size, data collection and analysis methods and duration, statistical analysis software, measures to avoid bias (randomization, blinding), etc.  

5. Ethical considerations  

This refers to the protection of participants’ rights, such as the right to privacy, right to confidentiality, etc. Researchers need to obtain informed consent and institutional review approval by the required authorities and mention this clearly for transparency.  

6. Budget/funding  

Researchers should prepare their budget and include all expected expenditures. An additional allowance for contingencies such as delays should also be factored in.  

7. Appendices  

This section typically includes information that supports the research proposal and may include informed consent forms, questionnaires, participant information, measurement tools, etc.  

8. Citations  


Important Tips for Writing a Research Proposal  

Writing a research proposal begins much before the actual task of writing. Planning the research proposal’s structure and content is an important stage which, if done efficiently, can help you seamlessly transition into the writing stage.³,⁵

The Planning Stage  

  • Manage your time efficiently. Plan to have the draft version ready at least two weeks before your deadline and the final version at least two to three days before the deadline.
  • As you plan, ask yourself: What is the primary objective of your research?
  • Will your research address any existing gap?
  • What is the impact of your proposed research?
  • Would people outside your field find your research applicable in other areas?
  • If your research is unsuccessful, would there still be other useful research outcomes?

  The Writing Stage  

  • Create an outline with main section headings that are typically used.  
  • Focus only on writing and getting your points across without worrying about the format of the research proposal, grammar, punctuation, etc. These can be fixed during subsequent passes. Add details to each section heading you created in the beginning.
  • Ensure your sentences are concise and use plain language. A research proposal usually contains about 2,000 to 4,000 words or four to seven pages.  
  • Don’t use too many technical terms and abbreviations assuming that the readers would know them. Define the abbreviations and technical terms.  
  • Ensure that the entire content is readable. Avoid using long paragraphs because they affect the continuity in reading. Break them into shorter paragraphs and introduce some white space for readability.  
  • Focus on only the major research issues and cite sources accordingly. Don’t include generic information or their sources in the literature review.  
  • Proofread your final document to ensure there are no grammatical errors so readers can enjoy a seamless, uninterrupted read.  
  • Use academic, scholarly language because it brings formality into a document.  
  • Ensure that your title is created using the keywords in the document and is neither too long and specific nor too short and general.  
  • Cite all sources appropriately to avoid plagiarism.  
  • Make sure that you follow guidelines, if provided. This includes rules as simple as using a specific font or a hyphen or en dash between numerical ranges.  
  • Ensure that you’ve answered all questions requested by the evaluating authority.  

Key Takeaways   

Here’s a summary of the main points about research proposals discussed in the previous sections:  

  • A research proposal is a document that outlines the details of a proposed study and is created by researchers to submit to evaluators who could be research institutions, universities, faculty, etc.  
  • Research proposals are usually about 2,000-4,000 words long, but this depends on the evaluating authority’s guidelines.  
  • A good research proposal ensures that you’ve done your background research and assessed the feasibility of the research.  
  • Research proposals have the following main sections—introduction, literature review, objectives, methodology, ethical considerations, and budget.  


Frequently Asked Questions  

Q1. How is a research proposal evaluated?  

A1. In general, most evaluators, including universities, broadly use the following criteria to evaluate research proposals.⁶

  • Significance —Does the research address any important subject or issue, which may or may not be specific to the evaluator or university?  
  • Content and design —Is the proposed methodology appropriate to answer the research question? Are the objectives clear and well aligned with the proposed methodology?  
  • Sample size and selection —Is the target population or cohort size clearly mentioned? Is the sampling process used to select participants randomized, appropriate, and free of bias?  
  • Timing —Are the proposed data collection dates mentioned clearly? Is the project feasible given the specified resources and timeline?  
  • Data management and dissemination —Who will have access to the data? What is the plan for data analysis?  

Q2. What is the difference between the Introduction and Literature Review sections in a research proposal?

A2. The Introduction or Background section in a research proposal sets the context of the study by describing the current scenario of the subject and identifying the gaps and need for the research. A Literature Review, on the other hand, provides references to all prior relevant literature to help corroborate the gaps identified and the research need.  

Q3. How long should a research proposal be?  

A3. Research proposal lengths vary with the evaluating authority, such as a university or committee, and with the subject. Typical lengths for a few university programs are listed below.

  • Arts programs: 1,000-1,500 words
  • Law School programs (University of Birmingham): 2,500 words
  • PhD programs: 2,500 words
  • Research degrees: 2,000-3,500 words

Q4. What are the common mistakes to avoid in a research proposal?

A4. Here are a few common mistakes that you must avoid while writing a research proposal.⁷

  • No clear objectives: Objectives should be clear, specific, and measurable so that readers can understand them easily.
  • Incomplete or unconvincing background research: Background research usually includes a review of the current scenario of the particular industry and also a review of the previous literature on the subject. This helps readers understand your reasons for undertaking the research, namely the gaps you identified in existing research.
  • Overlooking project feasibility: The project scope and estimates should be realistic considering the resources and time available.
  • Neglecting the impact and significance of the study: Readers and evaluators look for the implications or significance of your research and how it contributes to existing research. This information should always be included.
  • Unstructured format: A well-structured document gives evaluators confidence that you have read the guidelines carefully and are well organized in your approach, affirming that you will be able to undertake the research as proposed.
  • Ineffective writing style: The language used should be formal and grammatically correct. If required, editors could be consulted, including AI-based tools such as Paperpal, to refine the research proposal’s structure and language.

Thus, a research proposal is an essential document that can help you promote your research and secure funds and grants for conducting your research. Consequently, it should be well written in clear language and include all essential details to convince the evaluators of your ability to conduct the research as proposed.  

This article has described all the important components of a research proposal and has provided tips to improve your writing style. We hope these tips help you write a well-structured research proposal, whether to secure grants or for any other purpose.

References  

  • Sudheesh K, Duggappa DR, Nethra SS. How to write a research proposal? Indian J Anaesth. 2016;60(9):631-634. Accessed July 15, 2024. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5037942/  
  • Writing research proposals. Harvard College Office of Undergraduate Research and Fellowships. Harvard University. Accessed July 14, 2024. https://uraf.harvard.edu/apply-opportunities/app-components/essays/research-proposals  
  • What is a research proposal? Plus how to write one. Indeed website. Accessed July 17, 2024. https://www.indeed.com/career-advice/career-development/research-proposal  
  • Research proposal template. University of Rochester Medical Center. Accessed July 16, 2024. https://www.urmc.rochester.edu/MediaLibraries/URMCMedia/pediatrics/research/documents/Research-proposal-Template.pdf  
  • Tips for successful proposal writing. Johns Hopkins University. Accessed July 17, 2024. https://research.jhu.edu/wp-content/uploads/2018/09/Tips-for-Successful-Proposal-Writing.pdf  
  • Formal review of research proposals. Cornell University. Accessed July 18, 2024. https://irp.dpb.cornell.edu/surveys/survey-assessment-review-group/research-proposals  
  • 7 Mistakes you must avoid in your research proposal. Aveksana (via LinkedIn). Accessed July 17, 2024. https://www.linkedin.com/pulse/7-mistakes-you-must-avoid-your-research-proposal-aveksana-cmtwf/  




Lack of growth opportunities is a big reason why employees leave jobs. Here’s how to change that

By investing in employee growth, companies can reduce costly turnover and increase job satisfaction among employees of all ranks



APA’s 2024 Work in America survey found that nearly a quarter (23%) of American workers are not satisfied with their opportunities for growth and development at their place of work. What’s worse, this lack of opportunity for advancement is one of the top reasons why Americans quit their jobs, according to a 2022 survey by the Pew Research Center.

By investing in employee growth, companies can reduce costly turnover and increase job satisfaction among employees at all levels. Here are some key workplace strategies that successfully foster growth:

Quality training and mentoring

“Organizations should talk about the three Es: experience, expertise, and exposure,” said Jeff McHenry, PhD, principal of Seattle-based Rainier Leadership Solutions. An industrial-organizational (I/O) psychologist, McHenry works with companies to create a culture centered on employee growth. “To grow someone’s skills effectively, you need to provide them with assignments that stretch them,” he said. Design projects that involve multiple departments so employees can cross-pollinate their skills and understand the company’s bigger picture.

This management mindset is difficult for leaders who “hoard” their talent, added Rich Cober, PhD, an I/O psychologist and managing vice president at Gartner, a research and advisory firm that helps companies develop and implement human resource strategies. “To create an ecosystem of development—which is often on the experiential side—you have to give great workers the space to work in other areas.”

Pathways for career advancement

To keep top performers, more companies today are developing talent marketplaces—online portals where employees can see current openings, read job descriptions, and understand the organizational hierarchy. These tools allow employees to map out their personal career trajectory, said Tim McGonigle, vice president at the Human Resources Research Organization. What’s more, the tools provide organizational transparency, thus fostering inclusion and diversity.

“In the past, employees may have relied [solely] on a mentor/manager to help navigate their careers,” he said. With a career-path system, employees have accurate, up-to-date information to do it themselves.

Career-pathing tools also benefit employees who don’t aspire to be the CEO someday. “It’s good to think in terms of a career ladder but also think of a career lattice—with lateral moves,” Cober said. Companies “can win by showing employees a path to becoming stronger and well-rounded,” he said. “It’s important in a world where change is constant.”

Relevant, reciprocal feedback

“The holy grail of performance management is for leaders to have really good conversations with their people about how they’re doing,” Cober said. That involves managers giving frequent, honest assessments, but also listening when employees talk about their needs. “The pandemic has created a moment where there’s much more appreciation for the total person as an employee. If you take care of them and their families, they will perform better and be more engaged.”

A holistic approach also considers employees’ psychological well-being, he added. “Mental health used to be taboo, but companies now want an open dialogue about the support people need,” Cober said.

Learning and accomplishment

With an emphasis on learning, companies can create a fluid, flexible workforce. One approach is “upskilling,” internal programs that teach new skills or upgrade existing skills. Notably, upskilled workers are more likely to report career advancement into a good job, experts say.

Separately, offering college-tuition benefits helps employees earn a degree debt-free and accomplish long-term career goals. This benefit is particularly attractive to entry-level workers in fields like fast food, retail, and health care.

Further reading

The Importance of Work in an Age of Uncertainty: The Eroding Work Experience in America. Blustein, D. L., Oxford University Press, 2019

Organizational career growth and high-performance work systems: The roles of job crafting and organizational innovation climate. Miao, R., et al., Journal of Vocational Behavior, 2023

Why Learning is Essential to Employee Engagement. Kitto, K., Glint, 2020

Why Companies Should Pay for Employees to Further Their Education. McDonough, T., & Oldham, C., Harvard Business Review, 2020

Lack of Career Development Drives Employee Attrition. Morris, S., Gartner, 2018


4 Secret Ingredients for Everlasting Love—By a Psychologist


Want your relationship to stand the test of time? Here’s the research-backed recipe for sustained love.

Everlasting love is not something that can simply be spoken into reality; rather, it’s something that builds upon itself through thoughts and actions. This is exemplified by a 2017 study in Marriage and Family Review, which uncovered the factors that determine how long love can survive in a relationship.

The authors, Michelle Duda and Raymond Bergner, articulate this in the following way: “To say that ‘John loves Mary’ in the romantic sense of that term is to say not merely that he has certain feelings for her, but that he has a certain kind of relationship to her. This relationship is one in which he has given Mary a certain kind of place, or status, in his world. This place is one of extraordinary honor, value and centrality and is perhaps the ultimate such place that one human being can bestow upon another.”

The following four factors are the key to achieving a relationship like John and Mary’s—relationships where love lasts a lifetime.

1. Attending To Your Partner’s Best Interests

Duda and Bergner attribute sustained love firstly to partners’ investment in one another’s well-being. Notably, in long-lasting relationships, this investment should not be a means to an end; rather, it’s described as taking deep interest in the “well-being of the beloved for the beloved’s own sake.”

“Mary is invested in the well-being of John for his own sake and not merely for how his well-being might benefit her,” they explain. “He is not for Mary a mere ‘commodity,’ is not an entity that—like her garage mechanic or her hair dresser—has a place in her world that consists essentially of satisfying her needs and desires.”


Instead, Mary views John’s well-being as an extension of her own and of the relationship’s. They both nurture one another’s personal growth not because they’ll be rewarded for it, but because it’s a pleasure and privilege to do so. Both partners feel that neither one of them, nor their love, can thrive if the other doesn’t thrive as an individual first. This concerted commitment to one another’s happiness—free from motive—is the first cornerstone to sustained love.

2. Honoring The Exclusivity Of Your Relationship

The second greatest contributor to sustained love is indisputable exclusivity. Duda and Bergner explain, “Romantic love implies that, for John, Mary is his ‘one and only.’ It implies exclusiveness.” They continue, “It implies that John reserves the kind of relationship that he has with Mary—one combining intimacy, sexuality, commitment, care for her well-being and more—for her and her alone.”

Honoring the special place your other half holds is crucial in ensuring they understand your commitment to them. And to offer this kind of place to anyone else would be a fundamental betrayal—not just of their trust, but of the relationship you’ve built together.

Exclusivity isn’t just about physical fidelity; it’s about emotional loyalty as well. By safeguarding the intimate connection you share with your partner—either through words, actions or both—you reinforce the sanctity of your relationship. In doing so, you reaffirm the secure, cherished and irreplaceable nature of your love, which is essential for its endurance.

3. Maintaining A Sense Of Trust And Intimacy

Intimacy is regarded as the third cornerstone of everlasting love. However, it’s important to discern true intimacy from the vague buzzword it has become. Intimacy is far more than just a familiarity and attachment, or a general sense of closeness. Rather, it’s something that can only be achieved when we open the innermost parts of ourselves up to our partner.

The authors describe true intimacy as when “John gives Mary the central place in his intimate world.” They explain, “It implies that he makes a place in his world for her as his primary confidante and ‘soulmate,’ confiding in her about important personal matters such as his hopes, dreams, triumphs, failures, concerns, insecurities, hurts and genuine disagreements with her—and that he desires in turn that she share such matters with him.”

This is the level of trust and vulnerability that transforms a relationship from merely close to truly intimate. You allow your partner to see you fully—flaws and all—and trust that they will still, and always, choose to stand by your side. When both partners feel safe enough to share their deepest fears, desires and insecurities, they too share an unshakeable sense of unity.

4. Accepting Your Partner For Who They Are

The final cornerstone is complete and total acceptance of who your partner is and what they bring to your relationship. Duda and Bergner explain, “Love implies that Mary does not wish or require John to be other than the person he is—that she is not, as it were, evaluating him with some mental measuring stick and finding him wanting as a person in significant and fundamental ways.”

Notably, they explain, “Even though she might object to certain actions, habits and omissions on his part, she does not wish or require him to be a different person.” This is not to turn a blind eye to flaws, or to ignore areas for growth. Instead, it’s the act of embracing your partner’s authentic self—with all their complexities and imperfections.

Sustained love, in this way, means loving your partner not in spite of their quirks and idiosyncrasies, but because of them. When you both feel truly accepted for who you are, without pressure to conform to an idealized version of yourself, you can relax into the relationship. At the end of each day, you can both find comfort in knowing no matter what mistakes were made, you’re coming home to someone who loves you just as deeply because of it.

Is everlasting love in the cards for you and your partner? Take this test for an evidence-based answer: Relationship Satisfaction Scale

Mark Travers



How to Write Recommendations in Research | Examples & Tips

Published on September 15, 2022 by Tegan George. Revised on July 18, 2023.

Recommendations in research are a crucial component of your discussion section and the conclusion of your thesis, dissertation, or research paper.

As you conduct your research and analyze the data you collected, perhaps there are ideas or results that don’t quite fit the scope of your research topic. Or maybe your results suggest further implications, or causal relationships between previously studied variables, beyond what extant research has covered.


Table of contents

  • What should recommendations look like?
  • Building your research recommendation
  • How should your recommendations be written?
  • Recommendation in research example
  • Frequently asked questions about recommendations

Recommendations for future research should be:

  • Concrete and specific
  • Supported with a clear rationale
  • Directly connected to your research

Overall, strive to highlight ways other researchers can reproduce or replicate your results to draw further conclusions, and suggest different directions that future research can take, if applicable.

Relatedly, when making these recommendations, avoid:

  • Undermining your own work; instead, offer suggestions on how future studies can build upon it
  • Suggesting recommendations that are actually needed to complete your argument; your research should stand on its own merits
  • Using recommendations as a place for self-criticism; treat them instead as a natural extension point for your work


There are many different ways to frame recommendations, but the easiest is perhaps to follow the formula of research question → conclusion → recommendation. Here’s an example.

Conclusion: An important condition for controlling many social skills is mastering language. If children have a better command of language, they can express themselves better and are better able to understand their peers. Opportunities to practice social skills are thus dependent on the development of language skills.

As a rule of thumb, try to limit yourself to only the most relevant future recommendations: ones that stem directly from your work. While you can have multiple recommendations for each research conclusion, it is also acceptable to have one recommendation that is connected to more than one conclusion.

These recommendations should be targeted at your audience, specifically toward peers or colleagues in your field who work on subjects similar to your paper or dissertation topic. They can flow directly from any limitations you found while conducting your work, offering concrete and actionable possibilities for how future research can build on anything your own work was unable to address at the time of your writing.

See below for a full research recommendation example that you can use as a template to write your own.

Recommendation in research example


[Image: full research recommendation example.]

Other interesting articles

If you want to know more about AI for academic writing, AI tools, or research bias, make sure to check out some of our other articles with explanations and examples or go directly to our tools!

Research bias

  • Survivorship bias
  • Self-serving bias
  • Availability heuristic
  • Halo effect
  • Hindsight bias

AI

  • Deep learning
  • Generative AI
  • Machine learning
  • Reinforcement learning
  • Supervised vs. unsupervised learning

 (AI) Tools

  • Grammar Checker
  • Paraphrasing Tool
  • Text Summarizer
  • AI Detector
  • Plagiarism Checker
  • Citation Generator

Frequently asked questions about recommendations

While it may be tempting to present new arguments or evidence in your thesis or dissertation conclusion, especially if you have a particularly striking argument you’d like to finish your analysis with, you shouldn’t. Theses and dissertations follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically, in the discussion and results sections). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.

The conclusion of your thesis or dissertation should include the following:

  • A restatement of your research question
  • A summary of your key arguments and/or results
  • A short discussion of the implications of your research

For a stronger dissertation conclusion, avoid including:

  • Important evidence or analysis that wasn’t mentioned in the discussion section and results section
  • Generic concluding phrases (e.g. “In conclusion …”)
  • Weak statements that undermine your argument (e.g., “There are good points on both sides of this issue.”)

Your conclusion should leave the reader with a strong, decisive impression of your work.

In a thesis or dissertation, the discussion is an in-depth exploration of the results, going into detail about the meaning of your findings and citing relevant sources to put them in context.

The conclusion is shorter and more general: it concisely answers your main research question and makes recommendations based on your overall findings.

Cite this Scribbr article


George, T. (2023, July 18). How to Write Recommendations in Research | Examples & Tips. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/dissertation/recommendations-in-research/


[Image: A grey building with the word Canada on it and two Canadian flags behind a veil of leafy trees.]

National poll finds majority of Canadians are opposed to military conscription if war breaks out


Bryce J. Casavant, Associate Lecturer, School of Humanitarian Studies, Royal Roads University

Disclosure statement

Bryce J. Casavant has received funding from the Social Sciences and Humanities Research Council of Canada (SSHRC) and is a former SSHRC fellow. He is a Canadian Forces veteran who served from 2004 to 2010.


As fighting in Europe and the Middle East continues, many countries are being forced to reconsider conscription of citizens.

Recent public dialogues over forced military service have erupted in Germany, the United Kingdom, the United States and Canada — countries that have largely abandoned the practice.

Other countries like Norway, Latvia, Estonia and Sweden have adopted some form of mandatory military service, either on a selective or broad basis, with Sweden adopting gender-neutral conscription. Like other European countries, Ukraine and Russia both have conscription but have struggled with fighting-age men fleeing national borders to avoid fighting.

[Image: Soldiers embrace their loved ones.]

Canadians often forget that our country is effectively a sea-border state with Russia, a factor that is making headlines as Chinese and Russian war planes skirt the edge of Canadian airspace and foreign military flotillas patrol the boundaries of our warming Arctic waters.

With British youth opposing military conscription due to what they see as “elitist wars,” and with Canada’s current status as a defence “freeloader” in the eyes of the U.S., a looming question persists: what is Canada’s appetite for military service in the event of all-out war?

Canadians oppose forced military service

A recent poll I commissioned — conducted by the independent polling firm Research Co. and to be made public soon — found that most Canadians (57 per cent) are either strongly (35 per cent) or moderately (22 per cent) opposed to military conscription in modern times if only men are to serve.

The results are based on an online survey of 1,000 adults in Canada, conducted from July 24 to July 26. The data has been statistically weighted according to Canadian census figures for age, gender and region. The margin of error — which measures sample variability — is plus or minus 3.1 percentage points, 19 times out of 20.
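
That reported margin of error is easy to sanity-check. Below is a minimal Python sketch, assuming simple random sampling, the conservative worst case p = 0.5, and a 95 per cent confidence level (the “19 times out of 20” phrasing); under those assumptions it reproduces the plus-or-minus 3.1 points for a sample of 1,000:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for an estimated proportion.

    p = 0.5 maximizes p * (1 - p), giving the conservative figure
    pollsters usually report; z = 1.96 is the critical value for a
    95% confidence level ("19 times out of 20").
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1000):.1%}")  # -> 3.1%
```

Note that this simple formula applies to an unweighted simple random sample; weighting, as used in this poll, slightly inflates the effective margin of error.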

The poll found 67 per cent of Canadians either strongly or moderately oppose only women being conscripted — a practice that would be the first of its kind in the world. Only a handful of nations have women conscripts.

Similarly, if conscription did take place on a gender-neutral basis for both men and women, 50 per cent of Canadians would oppose it. Since the reverse holds as well (half would support it), a large share of Canadians may potentially support conscription if the right scenario arose.

Approximately 10 per cent of all Canadians are not sure whether they’d support conscription in any of the above scenarios. Only eight per cent of Canadians strongly support women-only conscription, while 18 per cent strongly support male-only conscription. Eighteen per cent of Canadians support conscription if it were gender-neutral.

Women respondents across all categories are the most opposed to military conscription regardless of gender. Male respondents, on the other hand, are more likely to support conscription if it’s gender-neutral.

[Figure: A bar graph shows key findings of the conscription survey.]

Survey results across age categories show 18- to 34-year-olds are the most likely both to support and to oppose conscription. If only women were conscripted, 39 per cent of 18- to 34-year-olds and 40 per cent of 35- to 54-year-olds would oppose it, with similar results for male-only service but roughly five percentage points less opposition across all ages.

Although the majority of respondents oppose conscription, all age categories are more likely to support conscription if it is gender-neutral.

Québec and Atlantic Canada showed the highest levels of opposition to conscription in all categories. Those who voted NDP in the last election are substantially more likely to oppose all categories of conscription, while Liberal and Conservative voters show only small differences in both opposition and support.
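
Subgroup figures like these depend on the census weighting described in the methodology above. The sketch below illustrates the general idea of post-stratification weighting; the age bands echo the poll’s, but every number in it is invented for illustration and is not taken from Research Co.’s data:

```python
# Post-stratification weighting, illustrated with made-up numbers.
# census_share: target population shares (e.g., from census figures).
# sample_share: shares actually observed among respondents.
# oppose_rate:  raw per-group estimate from the survey.
census_share = {"18-34": 0.27, "35-54": 0.32, "55+": 0.41}  # hypothetical
sample_share = {"18-34": 0.22, "35-54": 0.30, "55+": 0.48}  # hypothetical
oppose_rate  = {"18-34": 0.42, "35-54": 0.38, "55+": 0.31}  # hypothetical

# Each respondent in group g gets weight census_share[g] / sample_share[g],
# so over-represented groups are weighted down and under-represented
# groups are weighted up.
weights = {g: census_share[g] / sample_share[g] for g in census_share}

# The weighted estimate is then the census-share-weighted average of the
# per-group rates.
weighted_oppose = sum(census_share[g] * oppose_rate[g] for g in census_share)

print({g: round(w, 2) for g, w in weights.items()})
print(f"weighted opposition: {weighted_oppose:.1%}")
```

In a real poll the weighting cells would cross age with gender and region, and the adjustment would typically be iterative (raking), but the mechanics are the same.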

Youth losing trust in the social contract

A well-funded, well-trained and well-equipped professional volunteer army — like that described by the economist Milton Friedman — is likely the most viable solution to Canada’s future defence needs. It’s also the only model that is likely defensible under Canada’s Constitution with regard to individual liberty and security.

A professional volunteer army requires the government of Canada to maintain public trust in our country and a positive image of our military and public service.

If Millennials and Gen Z youth do not see anything worth fighting for in Canada — including basic concepts like democracy and civil liberties — they will oppose calls for service, especially if conscripted by force.



